MANUFACTURING RESEARCH AND TECHNOLOGY 24
PLANNING, DESIGN, AND ANALYSIS OF CELLULAR MANUFACTURING SYSTEMS
MANUFACTURING RESEARCH AND TECHNOLOGY
Volume 4.  Flexible Manufacturing: Integrating technological and social innovation (P. T. Bolwijn, J. Boorsma, Q. H. van Breukelen, S. Brinkman and T. Kumpe)
Volume 5.  Proceedings of the Second ORSA/TIMS Conference on Flexible Manufacturing Systems: Operations research models and applications (edited by K. E. Stecke and R. Suri)
Volume 6.  Recent Developments in Production Research (edited by A. Mital)
Volume 7A. Intelligent Manufacturing Systems I (edited by V. R. Milačić)
Volume 7B. Intelligent Manufacturing Systems II (edited by V. R. Milačić)
Volume 8.  Proceedings of the Third ORSA/TIMS Conference on Flexible Manufacturing Systems: Operations research models and applications (edited by K. E. Stecke and R. Suri)
Volume 9.  Justification Methods for Computer Integrated Manufacturing Systems: Planning, design justification, and costing (edited by H. R. Parsaei, T. L. Ward and W. Karwowski)
Volume 10. Manufacturing Planning and Control - A Reference Model (F. P. M. Biemans)
Volume 11. Production Control - A Structural and Design Oriented Approach (J. W. M. Bertrand, J. C. Wortmann and J. Wijngaard)
Volume 12. Just-in-Time Manufacturing Systems - Operational planning and control issues (edited by A. Şatır)
Volume 13. Modelling Product Structures by Generic Bills-of-Materials (E. A. van Veen)
Volume 14. Economic and Financial Justification of Advanced Manufacturing Technologies (edited by H. R. Parsaei, T. R. Hanley and W. G. Sullivan)
Volume 15. Integrated Discrete Production Control: Analysis and Synthesis - A View Based on GRAI-Nets (L. Pun)
Volume 16. Advances in Factories of the Future, CIM and Robotics (edited by M. Cotsaftis and F. Vernadat)
Volume 17. Global Manufacturing Practices - A Worldwide Survey of Practices in Production Planning and Control (edited by D. C. Whybark and G. Vastag)
Volume 18. Modern Tools for Manufacturing Systems (edited by R. Zurawski and T. S. Dillon)
Volume 19. Solid Freeform Manufacturing - Advanced Rapid Prototyping (D. Kochan)
Volume 20. Advances in Feature Based Manufacturing (edited by J. J. Shah, M. Mäntylä and D. S. Nau)
Volume 21. Computer Integrated Manufacturing (CIM) in Japan (V. Sandoval)
Volume 22. Advances in Manufacturing Systems: Design, Modeling and Analysis (edited by R. S. Sodhi)
Volume 23. Flexible Manufacturing Systems: Recent Developments (edited by A. Raouf and M. Ben-Daya)
Volume 24. Planning, Design, and Analysis of Cellular Manufacturing Systems (edited by A. K. Kamrani, H. R. Parsaei and D. H. Liles)
MANUFACTURING RESEARCH AND TECHNOLOGY 24
Planning, Design, and Analysis of Cellular Manufacturing Systems
edited by
Ali K. Kamrani
University of Michigan Dearborn, MI, U.S.A.
Hamid R. Parsaei
University of Louisville Louisville, KY, U.S.A.
Donald H. Liles
University of Texas Arlington, TX, U.S.A.
ELSEVIER 1995 Amsterdam - Lausanne - New York - Oxford - Shannon - Tokyo
ELSEVIER SCIENCE B.V. Sara Burgerhartstraat 25 P.O. Box 211, 1000 AE Amsterdam, The Netherlands
ISBN: 0 444 81815 4
© 1995 Elsevier Science B.V. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher, Elsevier Science B.V., Copyright & Permissions Department, P.O. Box 521, 1000 AM Amsterdam, The Netherlands.

Special regulations for readers in the U.S.A. - This publication has been registered with the Copyright Clearance Center Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923. Information can be obtained from the CCC about conditions under which photocopies of parts of this publication may be made in the U.S.A. All other copyright questions, including photocopying outside of the U.S.A., should be referred to the copyright owner, Elsevier Science B.V., unless otherwise specified.

No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein.

This book is printed on acid-free paper. Printed in The Netherlands
TO OUR PARENTS
Azizollah Khosravi-Kamrani and Fataemah (Mahin) Arjasebi
Abolfazl Parsaei and Barat Atabaki
Harold and Marilyn Liles
Contents

Contributors ix
Acknowledgements xiii
Preface xv

Part One DESIGN AND MODELING TECHNIQUES

1. Recent Advances in Mathematical Programming for Cell Formation
Chao-Hsien Chu

2. An Industrial Application of Network-Flow Models in Cellular Manufacturing Planning
Alberto Garcia-Diaz and Hongchul Lee 47

3. Design Quality: The Untapped Potential of Group Technology
Charles T. Mosier and Farzad Mahmoodi 63

4. Partitioning Techniques for Cellular Manufacturing
Soha Eid Moussa and Mohamed S. Kamel 73

5. Manufacturing Cell Loading Rules and Algorithms for Connected Cells
Gursel A. Suer, Miguel Saiz, Cihan Dagli, and William Gonzalez 97

6. Cellular Manufacturing Design: A Holistic Approach
Lorace L. Massay, Colin O. Benjamin, and Yildirim (Bill) Omurtag 129

Part Two PERFORMANCE MEASURE AND ANALYSIS 145

7. Measuring Cellular Manufacturing Performance
David F. Rogers and Scott M. Shafer 147

8. Performance of Manufacturing Cells for Group Technology: A Parametric Analysis
Atul Agarwal, Faizul Huq, and Joseph Sarkis 167

9. Design of a Manufacturing Cell in Consideration of Multiple Objective Performance Measures
Taeho Park and Heeseok Lee 181

10. Machine Sharing in Cellular Manufacturing Systems
Saifallah Benjaafar 203

11. Integration of Flow Analysis Results with a Cross Clustering Method
Marc Barth and Roland De Guio 229

Part Three ARTIFICIAL INTELLIGENCE AND COMPUTER TOOLS 249

12. Adaptive Clustering Algorithm for Group Technology: An Application of the Fuzzy ART Neural Network
Soheyla Kamal 251

13. Intelligent Cost Estimation of Die-Castings Through Application of Group Technology
Raj Veeramani 283

14. Production Flow Analysis Using STORM
Shahrokh A. Irani and R. Ramakrishnan 299

15. A Simulation Approach for Cellular Manufacturing System Design and Analysis
Ali K. Kamrani, Hamid R. Parsaei, and Herman R. Leep 351

Subject Index 383
Contributors

Atul Agarwal, Department of Information Systems and Management Sciences, University of Texas at Arlington, Arlington, Texas 76019, USA.
Marc Barth, Laboratoire de Recherche en Productique de Strasbourg, Ecole Nationale Superieure des Arts et Industries de Strasbourg, 24, bd. de la Victoire, F-67084 Strasbourg, FRANCE.
Saifallah Benjaafar, Department of Mechanical Engineering, University of Minnesota, Minneapolis, Minnesota 55455, USA.
Colin O. Benjamin, Department of Engineering Management, University of Missouri-Rolla, Rolla, Missouri 65401, USA.
Chao-Hsien Chu, Department of Management, Iowa State University, Ames, Iowa 50011, USA.
Cihan Dagli, Department of Engineering Management, University of Missouri-Rolla, Rolla, Missouri 65401, USA.
Alberto Garcia-Diaz, Department of Industrial Engineering, Texas A&M University, College Station, Texas 77843-3131, USA.
William Gonzalez, Avon Lomalinda Incorporated, San Sebastian, Puerto Rico 00755, USA.
Roland De Guio, Laboratoire de Recherche en Productique de Strasbourg, Ecole Nationale Superieure des Arts et Industries de Strasbourg, 24, bd. de la Victoire, F-67084 Strasbourg, FRANCE.
Faizul Huq, Department of Information Systems and Management Sciences, University of Texas at Arlington, Arlington, Texas 76019, USA.
Shahrokh A. Irani, Department of Mechanical Engineering, University of Minnesota, Minneapolis, Minnesota 55455, USA.
Soheyla Kamal, 5530 Heather Lane, Orefield, Pennsylvania 18069, USA.
Mohamed S. Kamel, Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, CANADA N2L 3G1.
Ali K. Kamrani, Department of Industrial and Manufacturing Engineering, University of Michigan-Dearborn, Dearborn, Michigan 48128-1491, USA.
Heeseok Lee, Department of Management Information Systems, Korea Advanced Institute of Science and Technology, 207-43 Cheongryangridong, Dongdaemoongu, Seoul, Korea.
Hongchul Lee, Department of Industrial Engineering, Korea University, Seoul, Korea.
Herman R. Leep, Department of Industrial Engineering, University of Louisville, Louisville, Kentucky 40292, USA.
Farzad Mahmoodi, Department of Management, Clarkson University, Potsdam, New York 13699-5790, USA.
Lorace L. Massay, Department of Industrial Engineering, North Carolina A&T State University, Greensboro, North Carolina 27411, USA.
Charles T. Mosier, Department of Management, Clarkson University, Potsdam, New York 13699-5790, USA.
Soha Eid Moussa, Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, CANADA N2L 3G1.
Yildirim (Bill) Omurtag, Department of Engineering Management, University of Missouri-Rolla, Rolla, Missouri 65401, USA.
Taeho Park, Department of Organization and Management, San Jose State University, San Jose, California 95192-0070, USA.
Hamid R. Parsaei, Department of Industrial Engineering, University of Louisville, Louisville, Kentucky 40292, USA.
R. Ramakrishnan, Department of Mechanical Engineering, University of Minnesota, Minneapolis, Minnesota 55455, USA.
David F. Rogers, Department of Qualitative Analysis and Operations Management, University of Cincinnati, 531 Carl H. Linder Hall, Cincinnati, Ohio 45221-0130, USA.
Miguel Saiz, Department of Industrial Engineering, University of Puerto Rico-Mayaguez, P.O. Box 5000, Mayaguez, Puerto Rico 00681-5000, USA.
Joseph Sarkis, Department of Information Systems and Management Sciences, University of Texas at Arlington, Arlington, Texas 76019, USA.
Scott M. Shafer, Department of Management, University of Miami, 414 Jenkins Building, Coral Gables, Florida 33124-9145, USA.
Gursel A. Suer, Department of Industrial Engineering, University of Puerto Rico-Mayaguez, P.O. Box 5000, Mayaguez, Puerto Rico 00681-5000, USA.
Raj Veeramani, Department of Industrial Engineering, 1513 University Avenue, University of Wisconsin, Madison, Wisconsin 53706, USA.
Acknowledgments

We would like to take this opportunity to express our sincere gratitude to many of our colleagues who provided us with invaluable assistance during the course of this project. This volume would not have been possible without the outstanding contributions of our authors and those who assisted us in reviewing the contents. The following individuals made significant contributions to this work either by their submissions or assistance in the review process:

Atul Agarwal, University of Texas at Arlington.
Dongke An, University of Louisville.
Marc Barth, Ecole Nationale Superieure des Arts et Industries de Strasbourg.
David Ben-Arieh, Kansas State University.
Saifallah Benjaafar, University of Minnesota.
Colin O. Benjamin, University of Missouri-Rolla.
Thomas O. Boucher, Rutgers University.
Chao-Hsien Chu, Iowa State University.
Cihan Dagli, University of Missouri-Rolla.
Osama Ettouney, Miami University.
Alberto Garcia-Diaz, Texas A&M University.
Soha Eid Moussa, University of Waterloo.
William Gonzalez, Avon Lomalinda Incorporated.
Roland De Guio, Ecole Nationale Superieure des Arts et Industries de Strasbourg.
Kevin Hubbard, University of Missouri-Rolla.
Faizul Huq, University of Texas at Arlington.
Shahrokh A. Irani, University of Minnesota.
Sanjay Jagdale, University of Arizona.
Soheyla Kamal, 5530 Heather Lane, Orefield, Pennsylvania.
Mohamed S. Kamel, University of Waterloo.
Dennis E. Kroll, Bradley University.
Jerome P. Lavelle, Kansas State University.
Heeseok Lee, University of Nebraska-Omaha.
Yuan-Shin Lee, Kansas State University.
Hongchul Lee, University of Iowa.
Herman R. Leep, University of Louisville.
Hampton Liggett, Northern Illinois University.
Farzad Mahmoodi, Clarkson University.
P. K. Mallik, University of Michigan-Dearborn.
Lorace L. Massay, North Carolina A&T State University.
Charles T. Mosier, Clarkson University.
Yildirim (Bill) Omurtag, University of Missouri-Rolla.
Taeho Park, San Jose State University.
R. Ramakrishnan, University of Minnesota.
Fahimah Rezayat, California State University-DH.
David F. Rogers, University of Cincinnati.
Miguel Saiz, University of Puerto Rico-Mayaguez.
Joseph Sarkis, University of Texas at Arlington.
Scott M. Shafer, University of Miami.
Alice E. Smith, University of Pittsburgh.
Gursel A. Suer, University of Puerto Rico-Mayaguez.
Louis Tsui, University of Michigan-Dearborn.
Raj Veeramani, University of Wisconsin-Madison.
David H. H. Yoon, University of Michigan-Dearborn.
Preface

The introduction of computers into manufacturing in the late 1950s has brought new challenges to manufacturing companies in the United States and abroad. Computer-Aided Design, Computer-Aided Manufacturing, Cellular Manufacturing, Group Technology, Computer Integrated Manufacturing, and so forth, are considered viable technologies and philosophies for increasing productivity, enhancing product quality, and reducing direct and indirect manufacturing costs.

Cellular Manufacturing (CM) is one of the major concepts used in the design of flexible manufacturing systems. CM, also known as group production or family programming, can be described as a manufacturing technique that produces families of parts within a single line or cell of machines.

The objective of Planning, Design, and Analysis of Cellular Manufacturing Systems is to report the latest developments and address the central issues in the design and implementation of cellular manufacturing systems. This book consists of 15 refereed chapters, written by leading researchers from academia and industry, organized in three parts.

Part One, Design and Modeling Techniques, includes six chapters. In the first chapter, Chu presents a state-of-the-art review based on a systematic survey of the literature. In the second chapter, Garcia-Diaz and Lee develop a network-flow methodology for grouping machines into cells and forming part families in cellular manufacturing. The third chapter, by Mosier and Mahmoodi, expands the application domain of group-technology-oriented coding and retrieval systems to address the problem of design quality. Moussa and Kamel address the partitioning problem in cellular manufacturing systems in chapter 4; they review various partitioning techniques and demonstrate the effectiveness of these techniques with sample results. In the fifth chapter, Suer et al. review and discuss several manufacturing cell loading rules and algorithms for connected cells. The last chapter in this part, by Massay et al., presents a systematic method for the design of cellular manufacturing systems. The methodology utilizes a holistic system design approach that facilitates the evaluation of the total system being developed.

Part Two of the book is concerned with Performance Measure and Analysis. Five papers on this topic are included in this part. The first paper, by Rogers and Shafer, identifies several design objectives associated with cellular manufacturing; based upon these design objectives, appropriate performance measures are discussed and compared. In the second article, Agarwal et al. use an analytical model to investigate the relative performance of a partitioned system compared to an unpartitioned system as a function of the ratio between setup time and processing time per unit, varying over a large range of values. The third article in this part, by Park and Lee, presents a new approach to the design of a manufacturing cell with multiple performance objectives via simulation-based design of experiments and compromise programming. The effect of machine sharing on the performance of traditional cellular manufacturing systems is demonstrated by Benjaafar in the fourth article. The last article in this part, by Barth and De Guio, involves the integration of flow analysis results with a cross clustering method.

Finally, Part Three presents the applications of artificial intelligence and computer tools in the design and analysis of cellular manufacturing systems. Four articles are included in this part. The first article, by Kamal, presents a new clustering algorithm based on neural network techniques and fuzzy logic concepts. Veeramani, in the second article, describes ongoing work with the die-casting industry on the application of group technology in developing a computer-integrated system that will assist cost estimators in developing quotes for die-cast parts in a consistent, accurate, and timely manner. The third article, by Irani and Ramakrishnan, demonstrates a step-by-step implementation of the first three phases of production flow analysis (factory flow analysis, group analysis, and line analysis) using standard algorithms available in the STORM package. Kamrani et al., in the last article in this section, present the application of linear programming to develop a methodology that uses design and manufacturing attributes to form machining cells.

We are indebted to our authors and reviewers for their outstanding contributions and assistance in preparing this volume. We would also like to thank Dr. Herman R. Leep of the University of Louisville for his invaluable support and advice. A special word of thanks is due to Dongke An for providing exceptional help to make this endeavor possible. Finally, we would like to express our deepest gratitude to Drs. Amanda Shipperbottom and Eefke Smit of Elsevier Science Publishers for giving us the opportunity to initiate this project.
Ali K. Kamrani
Hamid R. Parsaei
Donald H. Liles

December 1994
PART ONE
DESIGN AND MODELING TECHNIQUES
Planning, Design, and Analysis of Cellular Manufacturing Systems
A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors)
© 1995 Elsevier Science B.V. All rights reserved.
Recent Advances in Mathematical Programming for Cell Formation

Chao-Hsien Chu

Department of Management, College of Business, Iowa State University of Science and Technology, 300 Carver Hall, Ames, Iowa 50011, USA

In the past decade, cellular manufacturing has received considerable interest from practitioners and academicians. Cell formation, one major problem with cellular manufacturing, involves the process of grouping parts with similar design features or processing requirements into part families and the corresponding machines into machine cells. Numerous analytical approaches to solving the problem have been introduced, among which mathematical programming models and heuristic procedures constitute the greatest part of the literature. But as yet no comprehensive study has synthesized the literature pertaining to the use of mathematical programming in cell formation. This chapter presents a state-of-the-art review based on a systematic survey of the literature. Survey results should help answer or clarify many related questions for the cellular manufacturing community. Examples have been provided to help the interested reader use earlier studies to develop mathematical programming models.
1. INTRODUCTION
During the past decade, there has been a major shift in the design of manufacturing planning and control systems using such innovative concepts as just-in-time (JIT) production, optimized production technology (OPT) (recently renamed the theory of constraints), flexible manufacturing systems (FMS), cellular manufacturing (CM), and group technology (GT). Cellular manufacturing in particular has received considerable interest from both practitioners and academicians because it allows small, batch-type production to gain an economic advantage similar to that of mass production while still retaining the high degree of flexibility associated with job-shop production. The design of a CM system is quite challenging because so many strategic issues, e.g., the selection of part types suitable for manufacturing on a group of machines, the level of machine flexibility, the layout of cells, the types of material handling equipment, and the types and numbers of tools and fixtures, must be considered during design [74]. Furthermore, any meaningful cell design must be compatible with corporate tactical/operational goals such as high production rate, high product quality, high on-time delivery,
low work-in-process, low queue lengths at each work station, and high machine utilization. One of the first and most important problems faced in CM practice, cell formation (CF), involves the decisions surrounding the decomposition of manufacturing systems into cells. Part families and machine cells are identified such that (1) parts with similar design features, functions, materials, or processing requirements are produced in a cell sharing common resources such as machines, tools, and labor; (2) each part can be processed fully within a cell without the need for movement across cells; and (3) capital investment in resources is maintained at a level compatible with corporate strategy. Manufacturing cells can capture the inherent advantages of both mass and job-shop production, such as reduced setup times, improved process planning, decreased lead times, reduced tool requirements, improved productivity, increased overall operational control, improved product quality, and reduced material handling costs. Common disadvantages exist, however, such as lower machine and labor utilization rates and higher capital investment due to duplication of machines and tools [61]. Much effort has been directed at the cell formation problem. As a result, many procedures have been developed, among which mathematical programming models and heuristic procedures are the most discussed in the literature. Notwithstanding, the cell formation problem has been proved nonpolynomial (NP); that is, finding an optimal solution becomes increasingly unlikely as problem size grows. Optimal solutions are worth pursuing for at least two reasons [71,72]: (1) they can serve as a benchmark against which to evaluate heuristics, and (2) optimal algorithms and heuristics can work together. On one hand, the logic from an optimal algorithm can lead to an efficient heuristic; on the other, a heuristic solution can serve as a starting point from which to reduce computational time in the optimizing search.
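Condition (2) above, processing each part fully within its cell, is commonly checked by counting "exceptional elements": operations a part requires on a machine outside its own cell. The sketch below (hypothetical data and variable names, not from the chapter) counts such intercell moves for a small machine-part incidence matrix:

```python
# A minimal sketch: evaluate a cell design by counting exceptional
# elements, i.e., operations a part needs outside its assigned cell.

incidence = [            # rows = machines m0..m3, cols = parts p0..p4
    [1, 1, 1, 0, 0],     # m0 processes p0, p1, p2
    [1, 1, 1, 0, 0],     # m1 processes p0, p1, p2
    [0, 0, 1, 1, 1],     # m2 processes p2, p3, p4 (p2 is the exception)
    [0, 0, 0, 1, 1],     # m3 processes p3, p4
]
machine_cell = {0: 0, 1: 0, 2: 1, 3: 1}        # machine -> cell
part_cell = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}     # part -> cell

# Count every required operation whose machine sits in a different cell
# than the part that needs it.
exceptions = sum(
    1
    for m, row in enumerate(incidence)
    for p, need in enumerate(row)
    if need and machine_cell[m] != part_cell[p]
)
print(exceptions)  # part p2 must visit m2 in cell 1 -> 1 intercell move
```

A perfect decomposition would yield zero exceptions; in practice, designs trade off this count against machine duplication cost.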
Although a number of studies [10,42,56] have attempted to synthesize the literature concerning the use of mathematical programming in cell formation, the scale of these studies has been rather small and the literature cited somewhat outdated. And although a number of studies [14,56,61,65,73] have provided state-of-the-art reviews of cell formation issues, problems, and techniques, the scope of these reviews has generally seemed too broad; that is, they have covered in insufficient detail the use of mathematical programming in cell formation. A comprehensive study of this topic is needed to answer many of the questions frequently asked by the CM community:

• What kinds of mathematical programming models have been used for cell formation?
• Which models are most popular?
• What kinds of objective functions concerning cell formation can be modeled through mathematical programming? What have been the popular measures used in previous studies? Do these measures reflect CM practice?
• What kinds of constraints related to cell formation can be represented by mathematical programming? What are the popular constraints used in prior research? Do they capture manufacturing reality?
• What kinds of solution procedures or strategies have been applied to solve the mathematical models? Are they efficient or powerful enough to deal with real-world problems?
• What types of data are needed to build mathematical programming models? Can these data be obtained easily from the shop floor?
• What unique features are considered in current cell formation studies? Have they addressed the issues and concerns raised by earlier researchers and practitioners?
• What kinds of computer systems and software have been used in cell formation research?

The purpose of this study is twofold: (1) to examine the state of the art of mathematical programming's use in cell formation (results from this study should help answer many of the aforementioned questions) and (2) to illustrate how a variety of cell formation problems can be formulated by means of mathematical programming. Five examples with different objectives, constraints, and structures are provided for illustration. These examples not only represent typical cell formation problems but also demonstrate how the same scenario can be modeled through either objectives or constraints.

The chapter is organized as follows. In section 2, approaches to cell formation are summarized. Section 3 discusses the most recent results concerning the use of mathematical programming models in cell formation. The review proceeds according to the questions just outlined. In section 4, examples of typical cell formation scenarios are provided. Conclusions are given in section 5, which is followed by appendices and references.
2. APPROACHES TO CELL FORMATION

Extensive work has been done in the area of cell formation, and numerous approaches have been developed [61]:
• Classification and coding systems. Under this approach, users first examine the design features or manufacturing attributes of parts from blueprints and use a coding system to assign symbols (or codes) to the parts. Human eyes, statistical clustering algorithms [31], or mathematical programming models [29,31,32] are then used to scrutinize the codes for similarities, and part families are formed. The process of assigning codes to parts is tedious and time consuming, and sometimes subjective inasmuch as it depends on human experience and judgment. These methods can be used only to identify part families.

• Array-based clustering methods. This approach differs from the former in that it is based upon production flow analysis [7], which uses routing sheets or process plans. A common feature of this approach is that it sequentially rearranges columns and rows of the machine/part matrix according to an index until diagonal blocks are generated [15]. Methods of this type have received much attention because of their simplicity. Popular methods include rank order clustering [33,34,35], direct clustering [8], and the bond energy algorithm [15,24]. Common criticisms of these methods are that (1) identification of exclusive groups in a block diagram sometimes requires subjective judgment; (2) most methods consider only binary routing information and neglect other important cost and operational factors; and (3) in most cases, bottleneck machines must be removed before any machine/part groups can be identified clearly [8,15,35].

• Statistical clustering algorithms. Statistical clustering algorithms have been used quite often in the decomposition of manufacturing cells [14]. In particular, the use of hierarchical clustering methods such as the single and complete linkage methods has been studied extensively [12,45]. This approach requires a calculation of similarity coefficients between each pair of parts or machines. Parts or machines with close similarity coefficients are then arranged in the same group. One study also has used a nonhierarchical clustering scheme [9]. Several problems associated with this approach remain to be solved [12,14], for instance, the selection of clustering criteria, the selection of performance measures, and the determination of the number of part families.

• Graph theoretical approaches. A number of papers based upon graph theory have been published [5,14,61]. The methods described represent vertices of graphs as machines or parts and weights of arcs as similarity coefficients. The major drawback inherent in this approach is that practical issues such as production volume and alternative plans are not addressed [61].

• Mathematical programming and heuristic approaches. Numerous studies of cell formation have been conducted that employ mathematical programming and heuristics to improve clustering effectiveness. These approaches are flexible enough to incorporate most objective functions and constraints in a precise format; they suffer, however, in that they consider the problem only in a static sense for purely stable manufacturing environments [61]. Additionally, none of the methods considers uncertainty or vagueness, both of which normally are present in the information required by the models.

• Knowledge-based and pattern recognition methods. Emerging from artificial intelligence and pattern recognition techniques, expert systems offer many new opportunities for manufacturing systems analysis and design. Yet very few papers have applied these techniques to the cell formation problem [17,61]. Developing expert systems that can capture pattern recognition, optimization, and expert cognition processes to form manufacturing cells is a promising area for exploration [61].

• Fuzzy clustering and modeling approaches. Most early cell formation research assumes that the information used for cell formation, such as production cost, demand, and processing time, is certain and that the objectives and constraints considered can be formulated precisely. This early research also assumes that each part can belong to only one part family, yet parts may exist whose membership is much less evident. Only a few researchers have addressed the issues of vagueness and uncertainty in the cell formation problem [16,18]. Fuzzy modeling and clustering approaches may provide a solution in such cases. For instance, a fuzzy c-means clustering method was used in [16] to form part families (or machine cells) such that a part (or machine) could belong to more than one family (or cell), with different degrees of membership. Recently, a fuzzy mathematical programming approach [18] has been proposed to deal with the imprecise nature of objectives and constraints.

• Neural network approaches. The neural network is an emerging algorithmic approach that has been the subject of intense study by mathematicians, statisticians, physicists, engineers, and computer scientists. The number of studies utilizing the rapid parallel processing capability of neural networks to solve the cell formation problem has been increasing significantly [19]. Networks such as backpropagation, the self-organizing map (SOM), competitive learning, adaptive resonance theory (ART), interactive activation and competition learning, and fuzzy ART have been applied successfully to the decomposition of manufacturing cells [19].
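As a concrete illustration of the array-based family mentioned above, the sketch below implements the core of King's rank order clustering on a small hypothetical incidence matrix. The function name and test data are ours; rows and then columns are repeatedly sorted by the decimal value of their binary patterns until the ordering stabilizes, exposing any diagonal blocks:

```python
def rank_order_clustering(m, max_iter=20):
    """Rank order clustering sketch: sort rows, then columns, by the
    decimal value of their 0/1 patterns until both orders are stable.
    Returns the rearranged matrix plus the row/column permutations."""
    rows, cols = list(range(len(m))), list(range(len(m[0])))
    for _ in range(max_iter):
        # Row weight: read the row left-to-right as a binary number.
        order = sorted(range(len(m)),
                       key=lambda i: -int("".join(map(str, m[i])), 2))
        changed = order != list(range(len(m)))
        m = [m[i] for i in order]
        rows = [rows[i] for i in order]
        # Column weight: read the column top-to-bottom as a binary number.
        colw = [int("".join(str(m[i][j]) for i in range(len(m))), 2)
                for j in range(len(m[0]))]
        corder = sorted(range(len(colw)), key=lambda j: -colw[j])
        changed |= corder != list(range(len(colw)))
        m = [[row[j] for j in corder] for row in m]
        cols = [cols[j] for j in corder]
        if not changed:
            break  # both orderings stable: any block structure is exposed
    return m, rows, cols

# A scrambled two-cell instance (machines x parts, 1 = part visits machine).
scrambled = [[1, 0, 1, 0, 0],
             [0, 1, 0, 1, 1],
             [1, 0, 1, 0, 0],
             [0, 1, 0, 1, 1]]
blocked, row_order, col_order = rank_order_clustering(scrambled)
for row in blocked:
    print(row)  # two diagonal blocks emerge
```

On this instance the algorithm converges in two passes; note the criticism raised in the text still applies: the input is purely binary, so volumes and costs are ignored.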
3. STATE-OF-THE-ART REVIEWS

The cell formation problem can be formulated differently depending on the mathematical programming model used, the objective functions chosen, and the constraints considered. There also have been major variations in the solution procedures used, the formation logic applied, the special features considered, and the input data involved. In this section, the state of the art, as characterized by the literature review, is summarized. Detailed information regarding individual studies (models) appears in the appendices.
3.1. Types of mathematical programming models

A variety of mathematical programming approaches (see Table 1) have been applied to model and to solve the cell formation problem. The complexity of these models ranges from limited (linear programming and 0-1 integer programming) through modest (mixed integer programming and 0-1 nonlinear programming) to very great (mixed integer nonlinear programming and 0-1 nonlinear fractional programming). About 39% of prior studies use the relatively simple 0-1 integer programming models; only about 5% use very complicated models. Thus, even complicated manufacturing design problems such as cell formation can be modeled with mathematical models of limited complexity.
Table 1
Summary of mathematical programming models

Rank  Model type                                Frequency used  Percentage used (%)
1     0-1 integer programming                   23+             38.98
2     Mixed integer programming                 10#             16.95
2     0-1 integer nonlinear programming         10              16.95
4     Linear programming                         5               8.47
5     Assignment model                           3               5.08
5     Network model                              3               5.08
7     Mixed integer nonlinear programming        2               3.39
8     0-1 nonlinear fractional programming       1               1.69
8     Dynamic programming                        1               1.69
8     Branch and bound                           1               1.69
      Total:                                    59

+ Including two goal programming models.
# Including five goal programming models.
3.2. Objective functions chosen The success of a mathematical programming model depends heavily upon how accurately objectives and constraints can be expressed in precise mathematical relations. Because many objective functions and constraints are considered in the cell formation problem, the challenge is not limited to the construction of equations; a more important issue is the selection of appropriate objectives and constraints that capture and reflect CM reality. There are several ways of classifying objective functions in cell formation. For instance, in [56] objectives were divided into four major categories: (1) reducing the number of setups; (2) producing parts completely within the cell; (3) minimizing investment in new equipment; and (4) maintaining acceptable utilization levels. In [14], according to the nature of objectives, performance measures were classified as either cost or noncost based and subsequently classified as either individual or aggregate. In total, 34 objectives were considered in the prior cell formation studies. In Table 2, these objectives are classified roughly as coefficient based, cost based, or operation related. The purpose of cell formation based on coefficient criteria has been either to maximize total similarity or to minimize total dissimilarity of parts or machines. These coefficients can be computed with design features or with processing requirements. Some studies even have gone so far as to consider tooling requirements between machines and parts [25,26,27,53] and similarities between machines and operators [46]. Two problems may be encountered when these
approaches are used: (1) only one type of coefficient can be involved at a time, so the model can consider at most one such objective; and (2) the coefficients are primarily routing based and do not take into account other important factors such as costs and operational issues. Several studies have taken the lead in addressing this deficiency, for instance by considering demand, processing time, and even processing sequence in computing the coefficient [12,53], but their impact has not been verified formally. Recent studies have deviated more and more from this course by considering a variety of costs and operational factors during model formulation. Table 2 reflects this shift, indicating that about 26% of prior studies focus on minimizing total costs of intercell movement, followed by minimizing total costs of machine investment (24%), minimizing total intercell movement (21%), maximizing machine utilization (12%), and minimizing total processing costs (10%). No coefficient based criterion is among the top five. These objectives coincide with those often used in manufacturing practice [56,74]. Also, the majority of studies (54%) still consider only a single objective during cell formation. Although models with multiple objectives can reflect manufacturing practice with comparative accuracy, they also are more difficult to develop and require more time to solve [23,53,56,72]. One trade-off is to consider multiple related criteria in an aggregated format. About 34% of prior research has used this approach.
In-depth analysis of the data from Appendix B indicates that the following groups of objectives have been used most often by prior researchers: (1) total amount of interand intra- cell movement; (2) total costs of inter- and intra- cell movement, coupled with total machine investment; (3) total costs of intercell movement and machine duplication; (4) total setup cost and inventory holding cost; and (5) total costs of machine investment, tooling investment, and processing.
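The coefficient based objectives above presuppose a pairwise similarity measure between parts or machines. A minimal sketch of the most common choice, the Jaccard coefficient computed from binary routing vectors (the routing data shown are hypothetical):

```python
def jaccard(route_a, route_b):
    """Jaccard similarity between two parts' binary machine-usage vectors:
    (machines used by both parts) / (machines used by either part)."""
    both = sum(1 for a, b in zip(route_a, route_b) if a and b)
    either = sum(1 for a, b in zip(route_a, route_b) if a or b)
    return both / either if either else 0.0

# Two parts sharing machines 1 and 3 out of three machines used overall:
# jaccard([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]) -> 2/3
```

A dissimilarity coefficient, where a model minimizes total dissimilarity instead, can be taken as one minus this value.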
3.3. Manufacturing constraints Another key component in constructing mathematical programming models for cell formation is to define the system constraints precisely. To be practical, constraints should capture the actual restrictions (limitations) of a system. Forty-five constraints have been considered in the cell formation literature. Each of these constraints can be placed in one of four categories: (1) logical, (2) cell size, (3) physical, and (4) modeling. Logical constraints prevent models from contradicting common sense, judgment, or theoretic logic. For example, each part, machine, operation, or operator can be assigned to only one cell. Cell size constraints normally restrict the number of parts, machines, or operators allowed in each cell from exceeding an upper bound because of concerns regarding span of control, and space and capacity limitations. It also makes sense to ensure that the number of parts or machines assigned to each cell exceeds a minimum; in this way, the systems are prevented from over-division, which may result in excessive duplication and thus waste of resources. Physical constraints such as space, budget, capacity, and number of machines available for each machine type capture another type of system restriction. Finally, there is a
need for modeling constraints, which provide the necessary connections among decision variables, parameters, and objective functions.

Table 2
Summary of objective functions

Rank  Objective function                                        Code+  Frequency used  Percentage used (%)*
Coefficient based measures:
6     Max. total similarity between parts                       O1      5    8.62
8     Max. total similarity between machines                    O2      3    5.17
8     Min. total dissimilarity between parts                    O3      3    5.17
8     Max. total compatibility between parts and machines       O4      3    5.17
9     Min. total distance between parts                         O16     2    3.45
10    Max. total similarity between machines and operators      O16     1    1.72
10    Min. total dissimilarity between machines                 O16     1    1.72
10    Min. total distance between classification codes          O16     1    1.72
Cost based measures:
1     Min. total costs of intercell movement                    O6     15   25.86
2     Min. total costs of machine investment                    O5     14   24.14
5     Min. total processing and machine utilization costs       O8      6   10.34
6     Min. total costs of intracell movement                    O7      5    8.62
6     Min. total machine duplication costs                      O11     5    8.62
8     Min. total costs of idle machine capacity                 O9      3    5.17
8     Min. total setup costs due to sequence dependence         O10     3    5.17
8     Min. total tooling and fixture costs                      O11     3    5.17
9     Min. total inventory or work-in-process (WIP) costs       O11     2    3.45
10    Min. total subcontracting costs                           O11     1    1.72
10    Min. total space utilization costs                        O11     1    1.72
10    Min. total penalty for late or early production           O11     1    1.72
10    Min. total costs to expand capacity                       O11     1    1.72
10    Min. total labor costs                                    O11     1    1.72
Operation related measures:
3     Min. total amount (number) of intercell movements         O13    12   20.7
4     Max. total machine utilization                            O15     7   12.07
7     Min. total amount (number) of intracell movements         O14     4    6.9
7     Min. number of exceptional elements                       O12     4    6.9
8     Min. total setup times                                    O16     3    5.17
9     Min. total machining hours                                O16     2    3.45
9     Min. total within cell load variation                     O16     2    3.45
10    Max. total amount of parts produced                       O16     1    1.72
10    Match each operator's skill                               O16     1    1.72
10    Max. average cell utilization                             O16     1    1.72
10    Min. intracell load imbalance                             O16     1    1.72
10    Min. intercell load imbalance                             O16     1    1.72

+ Corresponding to Appendix B.
* Based upon 58 models. (One model [60] did not provide a detailed objective.)
Table 3 summarizes the constraints actually considered in the models. The ten most used constraints are spread evenly over the first three categories, and none appear in the fourth group. This is because most modeling constraints can be expressed directly in the objective function. For instance, the required number of intercell movements can be expressed in the objective function instead of as a constraint set. Examples 2, 3, 4, and 5 (of section 4) provide a contrast: examples 2 and 3 include a constraint to determine the number of exceptional elements, whereas models 4 and 5 have the relations built into the objective function. According to Table 3, some commonly used constraints are unique part assignment (52%), maximum number of machines allowed in each cell (46%), machine capacity (35%), unique machine assignment (26%), and minimum number of machines needed in each cell (24%). Clearly, all these constraints are quite realistic in terms of managing and operating manufacturing cells.
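As a concrete illustration of moving such quantities into the objective rather than the constraint set, the sketch below counts exceptional elements (intercell visits) for a given assignment of parts and machines to cells; the routing matrix and assignments are hypothetical.

```python
def evaluate_cells(a, part_cell, machine_cell):
    """Count exceptional elements for a given cell configuration.

    a[i][j] = 1 if part i needs machine j (binary routing matrix);
    part_cell[i] and machine_cell[j] give assigned cell indices.
    An exceptional element is a (part, machine) operation whose part
    and machine sit in different cells, i.e. an intercell visit.
    """
    exceptional = [(i, j)
                   for i, row in enumerate(a)
                   for j, need in enumerate(row)
                   if need and part_cell[i] != machine_cell[j]]
    return len(exceptional), exceptional
```

For example, with `a = [[1,1,0,0], [0,1,1,0], [0,0,1,1]]`, parts in cells `[0, 0, 1]` and machines in cells `[0, 0, 1, 1]`, the only exceptional element is part 1's visit to machine 2. A model can minimize this count directly in the objective instead of carrying explicit counting constraints.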
3.4. Solution procedures One major bottleneck that prevents mathematical programming models from wide use in the real world is that, once problem size grows, it is inefficient and often practically impossible to find an optimal solution; such problems are NP-complete or NP-hard. To overcome this deficiency, two strategies often have been used in cell formation studies: one is to develop efficient heuristic procedures, and the other is to decompose the model into submodels, or to model the problem in multiple phases, and solve each through an optimal or heuristic procedure. According to Table 4, about 29% of previous studies used the decomposition or multi-phase approach to find solutions and about 27% used heuristic procedures (including general search, simulated annealing, genetic algorithms, tabu search, and fuzzy-c clustering). At the same time, 44% of the studies relied on optimal procedures to solve single-phase models, and 10% used optimal procedures to solve multi-phase models. So, if we consider that almost all heuristic procedures were developed or extended from an optimal procedure, we conclude that optimal procedures play an important role in cell formation research. Furthermore, interest in using search algorithms [10,64], especially simulated annealing [6,17,66,69], genetic algorithms [28,68], and tabu search [66], to tackle the problem has been increasing. 3.5. Formation logics The cell formation literature can be divided into four categories, according to the formation logic used [74]: (1) grouping part families only; (2) forming part families and then machine cells, or vice versa; (3) forming part families and machine cells simultaneously; and (4) grouping machine cells only. According to Table 5, the literature is spread very evenly over the first three formation logics, with somewhat less effort devoted to the last.
Several observations can be extracted from the details provided in the appendices: (1) Most coefficient based models can be used only to form part families or machine
Table 3
Summary of constraints

Rank  Constraint                                                                      Code+  Frequency used  Percentage used (%)*
Logical constraints:
1     Unique part assignment - each part can be assigned to only one part family      C1     28   51.85
4     Unique machine assignment - each machine can be assigned to only one
      machine cell                                                                    C2     14   25.93
9     Part family formation logic - a part family must be formed before parts
      can be assigned to that family                                                  C3      7   12.96
9     Linkage between machines and parts - to ensure that all machines needed
      by a part be assigned to the same cell                                          C4      7   12.96
9     Unique operations assignment - each operation can be assigned to only
      one machine                                                                     C19     7   12.96
12    Linkage between operations, parts, and machines                                 C19     3    5.56
13    Unique routing selection - only one routing may be selected                     C19     2    3.7
13    Linkage between routings and machines                                           C19     2    3.7
14    Machine cell formation logic - a machine cell must be formed before
      machines can be assigned to that cell                                           C19     1    1.85
14    All part types assigned to part families                                        C19     1    1.85
14    Unique operator assignment - each operator can be assigned to only one
      machine cell                                                                    C19     1    1.85
14    Layout constraints - to ensure that the machines in each cell do not overlap    C19     1    1.85
14    No. of cells must be less than no. of available operators                       C19     1    1.85
14    Minimum level of machine similarity for grouping                                C19     1    1.85
14    Minimum level of tool similarity for grouping                                   C19     1    1.85
14    Minimum level of intercell movement for moving                                  C19     1    1.85
Cell size constraints:
2     Maximum no. of machines allowed in each machine cell                            C5     25   46.3
5     Minimum no. of machines to be qualified as a cell                               C6     13   24.07
6     Maximum no. of parts allowed in each part family                                C7     11   20.37
10    Minimum no. of parts to be qualified as a part family                           C8      6   11.11
14    Exact no. of parts required for each part family                                C19     1    1.85
14    Maximum no. of operators allowed in each machine cell                           C19     1    1.85

+ Corresponding to Appendix C.
* Based on 54 models. (Five models have constraints embedded in the structure.)
Table 3 (continued)
Summary of constraints

Rank  Constraint                                                                      Code+  Frequency used  Percentage used (%)*
Physical constraints:
3     Machine capacity constraints - to ensure that the total operation times
      assigned to a cell won't exceed capacity                                        C10    19   35.19
7     A constraint to specify the number of required cells                            C9      9   16.67
8     Constraints to consider the no. of machine types available                      C11     8   14.81
9     Budget constraints - to ensure that the total cost of buying machines,
      tools, and overhead won't exceed the budget                                     C13     7   12.96
11    Production requirements constraints                                             C12     4    7.41
12    Space constraints - to ensure that the total space of machines assigned
      to a cell can be accommodated                                                   C14     3    5.56
14    Constraints to restrict the maximum no. of procurable machines                  C19     1    1.85
14    Constraints to restrict the maximum no. of cells allowed                        C19     1    1.85
14    Constraints to restrict a part from subcontracting                              C19     1    1.85
Modeling constraints:
11    Constraints to compute the needed no. of machine types                          C15     4    7.41
12    Constraints to compute the total intercell movements                            C18     3    5.56
12    Constraints to consider the sequence-dependent setup                            C19     3    5.56
13    Constraints to identify the bottleneck parts                                    C16     2    3.7
13    Constraints to identify the exceptional elements                                C17     2    3.7
13    Constraints to compute the total skipping operations (intracell movements)      C19     2    3.7
13    Constraints to meet sequence requirements                                       C19     2    3.7
14    Constraints to compute the completion times                                     C19     1    1.85
14    Constraints to meet due date requirements                                       C19     1    1.85
14    Modeling constraints                                                            C19     1    1.85
14    Constraints to estimate the amount of capacity change                           C19     1    1.85
14    Constraints to restrict the undirected flows                                    C19     1    1.85
14    Constraints to link the stages of process                                       C19     1    1.85
      Total:                                                                                212

+ Corresponding to Appendix C.
* Based on 54 models. (Five models have constraints embedded in the structure.)
Table 4
Summary of solution procedures

Rank  Solution procedure              Frequency used  Percentage used (%)
Single-phase:
1     Optimal (O) procedure               21   33.33
2     Heuristic (H) procedure              6    9.52
4     Simulated annealing (H)              4    6.35
5     General search algorithm (H)         3    4.76
5     Branch and bound (O)                 3    4.76
5     Network algorithm (O)                3    4.76
6     Genetic algorithm (H)                2    3.17
7     Tabu search (H)                      1    1.59
7     Assignment algorithm (O)             1    1.59
7     Fuzzy-C clustering (H)               1    1.59
      Subtotal:                           45   71.43
Multi-phase:
2     Optimal then optimal                 6    9.52
2     Heuristic then heuristic*            6    9.52
3     Optimal then heuristic               5    7.94
7     Heuristic then optimal               1    1.59
      Subtotal:                           18   28.57

* One model [4] uses simulated annealing and then another heuristic.
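As a concrete sketch of the heuristic strategy tallied in Table 4, the toy simulated annealing routine below searches for machine cells on a binary routing matrix. It is illustrative only: the cost function (intercell visits after a majority-vote part assignment, with empty cells forbidden), the cooling schedule, and all parameters are assumptions, not taken from any of the surveyed models.

```python
import math
import random

def anneal_cells(a, n_cells, seed=1, t0=2.0, cooling=0.995, steps=4000):
    """Toy simulated annealing search for machine cells.

    a[i][j] = 1 if part i visits machine j (each part visits >= 1 machine).
    A candidate solution assigns each machine to a cell; each part then
    joins the cell holding most of its machines, and the cost is the
    number of intercell visits that remain.
    """
    rng = random.Random(seed)
    n_mach = len(a[0])

    def cost(mcell):
        if len(set(mcell)) < n_cells:
            return 10 ** 6  # forbid empty cells (one big cell is trivial)
        total = 0
        for row in a:
            used = [mcell[j] for j in range(n_mach) if row[j]]
            major = max(set(used), key=used.count)  # part's majority cell
            total += sum(1 for c in used if c != major)
        return total

    current = [rng.randrange(n_cells) for _ in range(n_mach)]
    cur_cost = cost(current)
    best, best_cost, t = current[:], cur_cost, t0
    for _ in range(steps):
        cand = current[:]
        cand[rng.randrange(n_mach)] = rng.randrange(n_cells)  # move one machine
        c = cost(cand)
        # accept improvements always, uphill moves with Boltzmann probability
        if c <= cur_cost or rng.random() < math.exp((cur_cost - c) / t):
            current, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        t *= cooling
    return best, best_cost
```

On a block-diagonal routing matrix such as `[[1,1,0,0], [1,1,0,0], [0,0,1,1], [0,0,1,1]]` with two cells, the search recovers the natural decomposition with zero intercell visits.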
Table 5
Summary of formation logic used

Rank  Formation logic                                             Frequency used  Percentage used (%)
1     Group part families only                                        17   29.82
2     Form both part families and machine cells sequentially          16   28.07
2     Form both part families and machine cells simultaneously        16   28.07
3     Group machine cells only                                         8   14.04
      Total:                                                          57
cells because these procedures consider either machine similarity or part similarity, not both, in the formation stage. (2) Most models using an operation index in the formulation, i.e., considering processing sequences, can form part families and machine cells simultaneously. (3) Because of model complexity, it is virtually impossible to use optimal procedures to group part families and machine cells sequentially or simultaneously; in fact, most models rely on a heuristic procedure or use the decomposition strategy. (4) Some studies focus attention on grouping machine cells only; these studies often assume that part families have already been formed, despite the unreality of that assumption.
3.6. Required input data Many different data are needed for cell formation. The minimum requirements are routing or design feature data. Because many new models have been developed to capture manufacturing reality more faithfully, additional cost and operational data also are needed (see Table 6). Most of these data are readily available from the shop floor, although acquiring certain information, such as the number of cells, machine cell size, and cell overhead and budget, requires additional effort. Binary routing data and the number of cells are the most frequently required inputs (both 80%). Thus, (1) most mathematical programming approaches rely on routing information instead of design features to form manufacturing cells, and (2) users normally (80%) need to specify the required number of cells a priori if they wish to develop mathematical models. This requirement may be controversial because in practice it is very difficult for managers to know beforehand exactly how many cells are needed.
3.7. Special features considered Numerous practical issues concerning cell formation have been raised by early researchers [14,23,61,65,74]: (1) Many of the techniques developed to date fail to capture many of the realities of cell formation. Specifically, most consider only binary routing information in forming the cells and totally or partly neglect other important cost and operational information such as production demand, processing time, machine capacity, processing sequence, machine investment cost, materials handling, and cell overhead. (2) Very few studies have considered the field's stochastic nature, sequence-dependent setups, machine tool process capability, alternative process plans, or layout. (3) Efficient approaches considering a number of objectives are needed. Table 7 demonstrates that the number of studies including such special features as alternative routings (process plans) or multifunctional machines, processing sequences, tooling requirements, and sequence-dependent setups has increased significantly. A few studies even have attempted to consider layout planning [4], scheduling [1], and labor allocation [46] during cell formation. One study [32] also considers both design features and processing information in cell formation.
Table 6
Summary of required input data

Rank  Required input data                                       Frequency used  Percentage used (%)*
Coefficients:
10    Similarity coefficients between parts                         6   10.17
12    Similarity coefficients between machines                      4    6.78
13    Dissimilarity coefficients between parts                      3    5.08
13    Compatibility between machines and parts                      3    5.08
14    Distance between parts                                        2    3.39
15    Dissimilarity coefficients between machines                   1    1.69
15    Tool similarity                                               1    1.69
15    Similarity coefficients between machines and operators        1    1.69
15    Similarity coefficients between classification codes          1    1.69
Costs:
5     Machine investment (M)                                       19   32.2
6     Intercell material handling cost (H)                         14   23.73
8     Cell overhead (O) - to set up and operate cell                9   15.25
9     Budget (B)                                                    8   13.56
11    Intracell material handling cost (H)                          5    8.47
12    Setup cost (S)                                                4    6.78
13    Tooling and fixture cost                                      3    5.08
13    Inventory (WIP) holding cost                                  3    5.08
14    Machine idle cost                                             2    3.39
15    Inspection cost (I)                                           1    1.69
15    Subcontracting cost                                           1    1.69
15    Wage rate                                                     1    1.69
15    Cost for capacity expansion                                   1    1.69
15    Penalty for early or late finish                              1    1.69

* Based upon 59 models.
Table 6 (continued)
Summary of required input data

Rank  Required input data                                       Frequency used  Percentage used (%)*
Operation factors:
1     Routing (binary)                                             47   79.66
1     Number of cells                                              47   79.66
2     Demand (production volume)                                   31   52.54
2     Processing time (P)                                          31   52.54
3     Size of machine cells (M)                                    30   50.85
4     Machine capacity                                             26   44.07
7     Routing (sequence)                                           11   18.64
7     Size of part families (P)                                    11   18.64
8     Number of machines available for each machine type            9   15.25
10    Alternative routings                                          6   10.17
11    Tooling requirements                                          5    8.47
12    Setup time (S)                                                4    6.78
13    Total available space                                         3    5.08
14    Batch size                                                    2    3.39
14    Maximum machine utilization                                   2    3.39
14    Design features                                               2    3.39
15    Inspection time (I)                                           1    1.69
15    Maximum no. of operators allowed in each cell (O)             1    1.69
15    Space requirements                                            1    1.69
15    Distance                                                      1    1.69
15    Due date                                                      1    1.69
15    Arrival time                                                  1    1.69
15    Required no. of parts for each part family                    1    1.69
15    Maximum no. of parts each operator can handle                 1    1.69
15    Skill matching factor                                         1    1.69
15    Minimum level of machine similarity for grouping              1    1.69
15    Minimum level of tool similarity for grouping                 1    1.69
15    Minimum level of movement for moving                          1    1.69

* Based upon 59 models.
Table 7
Summary of special features considered in the model

Rank  Special feature                                  Frequency used  Percentage used (%)*
1     Consider alternative routings                        12   20.34
2     Can find an appropriate number of cells              11   18.64
3     Use operations index in the formulation               8   13.56
4     Consider processing sequences                         6   10.17
4     Consider tooling requirements                         6   10.17
5     Consider sequence-dependent setups                    5    8.47
6     Consider design features                              2    3.39
6     Deal with exceptional elements after CF               2    3.39
7     Consider layout planning with CF                      1    1.69
7     Consider undirected flows of parts                    1    1.69
7     Consider operator assignment in CF                    1    1.69
7     Integrate with scheduling                             1    1.69
* Based upon 59 models.

3.8. Computer systems and software used If undertaken manually, optimal solution of even a small mathematical programming model demands a prohibitive amount of computational effort; all practical applications of mathematical programming therefore require the use of a computer and related software. Cell formation applications are no exception. Despite this need, about one-third of the previous studies neglect to mention the type of computer and software used. Based upon the data released (Table 8), LINDO is the most popular software used by cell formation researchers. ZOOM is the next most popular, followed by SAS/OR. Surprisingly, MPSX, the once popular optimization software in the operations research field, has lost its shine because of an inefficient built-in 0-1 integer algorithm and high purchase and maintenance costs (MPSX is available only on IBM-compatible mainframe platforms). Among the programming languages, PASCAL takes the lead, followed closely by FORTRAN and C. In terms of computer systems, personal computers have replaced mainframes as the most popular systems. This trend may be due to the facts that (1) the processing capability of PCs and of corresponding software packages has been enhanced significantly over the past years and (2) the problems demonstrated by most studies are relatively small and uncomplicated.
Table 8
Summary of software and computers used

Rank  Software or computer system       Frequency used  Percentage used (%)
Software:
1     Not mentioned                         20   33.33
2     LINDO                                 11   18.33
3     ZOOM                                   8   13.33
4     SAS/OR                                 6   10
5     PASCAL                                 5    8.33
6     C                                      3    5
6     FORTRAN                                3    5
7     BASIC                                  2    3.33
8     MPSX                                   1    1.67
8     RELAXT III                             1    1.67
      Total:                                60
Computer systems:
1     Personal computer (PC and Mac)        18   30
2     Not mentioned                         18   30
3     Mainframe computer                    13   21.67
4     Mini computer                          7   11.67
5     Unix workstation                       2    3.33
6     Super computer                         2    3.33
      Total:                                60
4. EXAMPLES OF MODEL DEVELOPMENT As the foregoing survey and discussion indicate, mathematical programming has played a seminal role in cell formation history. For instance, most newly proposed heuristic procedures, such as simulated annealing, genetic algorithms, tabu search, fuzzy modeling, and neural networks, are based upon mathematical models. Two common problems may be encountered in the development of mathematical programming models. The first relates to the selection of appropriate objective functions and constraints able to capture manufacturing reality. Survey results from this study can be utilized to support and ease this selection decision. For instance, users can refer to Tables 2 and 3 and choose their own objectives and constraints from near the top of the lists without losing much generality. The second problem involves the actual formulation of the selected objectives and constraints in a precise and simple format. An easier way of
doing this is to use a building block approach, i.e., to find and then adopt or modify similar formulations from prior studies instead of developing new models from scratch. In this section, five models with different objectives, constraints, and structures are provided for illustration. Model 1 is a traditional p-median formulation based upon similarity coefficients. A unique feature of this model is that it represents the required number of cells in a constraint. Model 2 minimizes the total opportunity costs of producing bottleneck parts outside the cells. This model exemplifies how to use constraints to identify bottleneck parts. Model 3 provides an example of aggregating two compatible criteria into a single objective. It also depicts yet another way of identifying exceptional elements (both parts and machines) by means of constraints. Model 4 is a typical example of the multiobjective approach to cell formation. Other features of the model are that it represents exceptional elements through the objective function and considers within-cell workload between machines. Model 5 illustrates how processing sequences can be considered in cell formation. The model also shows users how to use constraints to determine the required number of machines for each type and how to solve a model with a nonlinear term in the objective function. Table 9 summarizes the characteristics of these models.
4.1. Notations used

Indices:
i = part index; i = 1, ..., N.
j = machine index; j = 1, ..., M.
k = cell index; k = 1, ..., C.
l = operation index of part i; l = 1, ..., J_i.

Decision variables:
B_i   = 1, if part i is a bottleneck part; 0, otherwise.
D_i   = 1, if part i needs to be produced outside, either due to machine capacity limits or due to B_i = 1; 0, otherwise.
L_ijk = 1, if u_ijk = 1 and part i will be processed at cell k; 0, otherwise.
M_ijk = 1, if u_ijk = 1 and machine j will be duplicated at cell k; 0, otherwise.
u_ijk = 1, if a_ij = 1 and X_ik = 1, but Y_jk = 0; 0, otherwise.
v_ijk = 1, if a_ij = 1 and Y_jk = 1, but X_ik = 0; 0, otherwise.
X_ik  = 1, if part i belongs to cell k; 0, otherwise.
X_ikl = 1, if operation l of part i is performed in cell k; 0, otherwise.
Y_jk  = 1, if machine j belongs to cell k; 0, otherwise.
Z_jk  = number of machines of type j required in cell k.

Parameters:
B_ikl = a dummy variable used in example 5 to eliminate a nonlinear term.
d_i   = demand (in batches) per period for part i.
D     = number of elements in R; i.e., D = |R|.
D_ikl = a dummy variable used in example 5 to eliminate a nonlinear term.
F_j   = the set of parts processed by machine j.
G     = an arbitrarily large number.
Table 9
Characteristics of examples

The objectives compared across the five models are: min. total costs of intercell movement; min. total costs of machine investment; min. total intercell movement; max. total similarity between parts; min. total machine duplication costs; min. number of exceptional elements; min. total within cell load variation; and min. total opportunity cost.

The constraints compared are: unique part assignment; max. no. of machines allowed in each cell; unique machine assignment; min. no. of machines needed for each cell; a constraint to specify the required no. of cells; part formation logic; linkage between machines and parts; unique operations assignment; constraints to compute the required no. of machines for each machine type; constraints to identify the bottleneck parts; constraints to identify the exceptional elements; and machine capacity constraints.

Problem/model characteristics:

Model 1 [36,37]: 0-1 integer programming; N^2 decision variables; N^2+N+1 constraints; forms part families only; modest complexity; optimal solution procedure.
Model 2 [71]: 0-1 integer programming; (M+N)C+2N decision variables; 2M+C+(2C+3)N constraints; forms part families and machine cells simultaneously; very complicated; optimal solution procedure.
Model 3 [17]: 0-1 integer programming; C(M+N+3D) decision variables; N+M+C+DN constraints; forms part families and machine cells simultaneously; complicated; simulated annealing.
Model 4 [68]: 0-1 integer programming (bi-criteria); (N+M)C decision variables; M+C constraints; forms part families and machine cells simultaneously; modest complexity; genetic algorithm.
Model 5 [66]: mixed integer programming; C(NL+M) decision variables; L(N+M+NC)+C constraints; forms part families and machine cells simultaneously; very complicated; simulated annealing and tabu search.
H_i   = cost to transport one batch of part i between cells (bottleneck cost).
K_j   = procurement cost per period of one machine of type j.
m_ij  = average cell load for machine j induced by part i; where m_ij = w_ij / Σ_{j=1}^{M} w_ij.
m_k   = maximum number of machines allowed in cell k.
P_i   = the set of machines needed to produce part i.
Q_i   = total number of machines in P_i.
R     = set of pairs (i, j) such that a_ij = 1.
S_ik  = similarity between part i and part k.
t_ij  = processing time (setup plus run time) of part i on machine j.
t_ijl = processing time (setup plus run time) required to process one batch of part i through operation l on machine type j.
T_j   = available productive time (capacity) for each machine of type j per period.
U_j   = maximum acceptable utilization per machine of type j.
w_ij  = workload on machine j induced by part i; where w_ij = (t_ij × d_i) / T_j.
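The workload and load-profile definitions are directly computable. A small sketch (the numeric values in the usage note are hypothetical, and the load profile follows the normalized form of m_ij given above):

```python
def workload(t_ij, d_i, T_j):
    """w_ij = (t_ij * d_i) / T_j: the fraction of machine j's capacity that
    part i's demand consumes (time per batch x batches per period, over
    available productive time per period)."""
    return (t_ij * d_i) / T_j

def load_profile(t_row, d_i, T):
    """m_ij = w_ij / sum_j w_ij: part i's relative load across machines,
    given its processing times t_row and the machine capacities T."""
    w = [workload(t, d_i, T_j) for t, T_j in zip(t_row, T)]
    total = sum(w)
    return [x / total for x in w]
```

For instance, a part needing 0.5 hours per batch with a demand of 40 batches per period on a machine with 160 productive hours per period induces a workload of w = 0.125, i.e. one eighth of that machine's capacity.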
4.2. Model 1 [36,37,38]
This model, called the P-median model, is one of the popular formulations used in early cell formation research. Its objective is to maximize total similarity between parts, where the similarity between two parts can be defined in several different ways [12,58,59], of which Jaccard's similarity coefficient is used most frequently:

Max  Σ_{i=1}^{N} Σ_{k=1}^{N} S_ik X_ik                                  (1)

Subject to:

Σ_{k=1}^{N} X_ik = 1,            i = 1, ..., N                          (2)

Σ_{k=1}^{N} X_kk = C                                                    (3)

X_ik ≤ X_kk,                     i, k = 1, ..., N                       (4)

Constraint set (2) ensures that each part belongs to exactly one part family. Constraint (3) specifies the required number of part families. Constraint set (4) ensures that part i belongs to cell k only if part family k has been formed. The P-median model is simple and easy to understand. The task of updating the formulation is straightforward, but the size of the formulation is rather large: if N parts are to be produced, the model requires N^2 binary decision variables and N^2+N+1 constraints. Moreover, users must specify the number of part families by means of informal judgment, trial and error, or iteration [37]. Deciding which process to use is not an easy task and depends highly on experience, preference, and judgment. The model, however, can be expanded easily to consider alternative routings or process plans [37,38]. According to [42], constraint (3) of the P-median model can be removed to avoid the difficulty of determining the required number of cells; the model then uses embedded logic to find an appropriate number of part families that maximizes total similarity. Constraint set (4) also can be aggregated into constraint set (5) to reduce the number of constraints and thus improve computational efficiency [13,42]:

Σ_{i=1, i≠k}^{N} X_ik ≤ (N-1) X_kk,        k = 1, ..., N                (5)
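Either variant of the model is small enough on toy instances to check by exhaustive search. The sketch below brute-forces the optimum of formulation (1)-(4): it enumerates every choice of C "median" parts (X_kk = 1, constraint (3)) and attaches each part to its most similar median (constraints (2) and (4)). This is illustrative only; it uses the Jaccard coefficient on hypothetical binary routings and scales combinatorially, so it is no substitute for the optimal procedures cited above.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two binary routing vectors."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return both / either if either else 0.0

def p_median(routes, C):
    """Brute-force optimum of the P-median model (1)-(4): choose C parts
    as family medians and assign every part to its most similar median,
    maximizing total similarity."""
    n = len(routes)
    S = [[jaccard(routes[i], routes[k]) for k in range(n)] for i in range(n)]
    best_val, best_assign = -1.0, None
    for medians in combinations(range(n), C):        # constraint (3)
        total, assign = 0.0, {}
        for i in range(n):                           # constraints (2) and (4)
            k = max(medians, key=lambda k_: S[i][k_])
            assign[i] = k
            total += S[i][k]
        if total > best_val:
            best_val, best_assign = total, assign
    return best_val, best_assign
```

With `routes = [[1,1,0,0], [1,1,0,0], [0,0,1,1], [0,1,1,1]]` and C = 2, the search groups the first two parts into one family and the last two into another, as the routing structure suggests.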
The reduced model is more compact, consisting of only 2N constraints. This model has been adapted and used successfully in [46] for grouping part families, machine cells, and operators simultaneously. An efficient heuristic also has been developed in [42] to improve computational efficiency further.

4.3. Model 2 [71]
Two major weaknesses are associated with Model 1: (1) it can be used only to find part families, and (2) it considers only binary routing information in the formation stage while neglecting many other important factors. Model 2 is an improvement, not only taking intercell movement cost into consideration but also forming part families and machine cells simultaneously. The purpose of the model is to minimize the total cost of intercell movement, or what the authors call bottleneck costs:
(6)
Min ]~ Hi Di i-1
Subject to:

    Σ_{k=1}^{C} Y_jk = 1,    j = 1, …, M    (7)

    Σ_{j=1}^{M} Y_jk ≤ m_k,    k = 1, …, C    (8)

    X_ik ≤ Σ_{j∈P_i} Y_jk ≤ Q_i X_ik,    k = 1, …, C; i = 1, …, N    (9)

    1 + B_i ≤ Σ_{k=1}^{C} X_ik ≤ 1 + G B_i,    i = 1, …, N    (10)

    Σ_{i∈F_j} t_ij d_i (1 − D_i) ≤ T_j,    j = 1, …, M    (11)

    B_i ≤ D_i,    i = 1, …, N    (12)
Constraint set (7) confines each machine to exactly one cell. Constraint set (8) allows at most m_k machine types in cell k. Constraint set (9) ensures that all associated machines for each part are assigned to the same cell. Constraint set (10) identifies possible bottleneck parts. Constraint set (11) ensures that total machine hours assigned to a machine do not exceed its capacity. Constraint set (12) ensures that D_i is set to 1 when either machine capacity is exceeded or part i is an exceptional element. If there are N parts that must use M machines and if the number of desired cells is C, then the formulation requires (M+N)C+2N binary decision variables and 2M+C+(2C+3)N constraints. In other words, the greater the number of machines and part families, the more decision variables and constraints. Though the formulation is compact, the model has an intricate structure and thus is computationally intensive [13]. For example, updating and extending the formulation are very difficult: if the desired number of cells must be changed, most inputs of constraints, 2M+C+2(C+1)N of them, must be replaced. Furthermore, determinations of the number of desired cells, the maximum number of machines in each cell, and the unit cost of bottlenecking are limited by experience, knowledge, and preference. According to [13], if a large m_k is selected, all parts might be assigned into one family; on the other hand, if too small an m_k is chosen, clustering results would be unsatisfactory and computational efficiency inferior. Still, because the model can group parts into part families and form machine cells simultaneously, much time can be saved.
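The bottleneck-cost objective (6) is easy to evaluate for a candidate solution. The following sketch uses hypothetical data (the machine sets P_i and unit costs H_i are made up for illustration): a part is flagged as a bottleneck (D_i = 1) when the machines it needs are spread over more than one cell, and the objective totals H_i over the flagged parts.

```python
def bottleneck_cost(part_machines, machine_cell, H):
    """For a fixed machine-to-cell assignment, flag each part whose
    required machines span more than one cell (a bottleneck part,
    D_i = 1) and total the intercell movement cost sum(H_i * D_i)."""
    cost = 0
    flags = []
    for i, machines in enumerate(part_machines):
        cells = {machine_cell[j] for j in machines}
        d_i = 1 if len(cells) > 1 else 0
        flags.append(d_i)
        cost += H[i] * d_i
    return cost, flags

# Hypothetical data: 4 machines in 2 cells, 3 parts.
machine_cell = {0: 0, 1: 0, 2: 1, 3: 1}
part_machines = [{0, 1}, {1, 2}, {2, 3}]   # machines each part visits (P_i)
H = [10, 25, 10]                           # unit bottleneck cost per part
cost, flags = bottleneck_cost(part_machines, machine_cell, H)
print(cost, flags)  # part 1 crosses cells: 25 [0, 1, 0]
```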
4.4. Model 3 [17]

Taking one step further, this model considers machine procurement (investment) cost in addition to intercell movement cost. The model's objective is to minimize the number of exceptional elements by considering a trade-off between the total costs of intercell movement and machine duplication:

    Min Σ_{k=1}^{C} Σ_{(i,j)∈R} I_j M_ijk + Σ_{k=1}^{C} Σ_{(i,j)∈R} H_i d_i L_ijk    (13)

Subject to:

    Σ_{k=1}^{C} X_ik = 1,    i = 1, …, N    (14)

    Σ_{k=1}^{C} Y_jk = 1,    j = 1, …, M    (15)

    Σ_{j=1}^{M} Y_jk ≤ m_k,    k = 1, …, C    (16)

    Y_jk − X_ik + U_ijk − V_ijk = 0,    ∀(i,j) ∈ R; k = 1, …, C    (17)

    L_ijk + M_ijk = U_ijk,    ∀(i,j) ∈ R; k = 1, …, C    (18)
Constraint set (14) ensures that each part belongs to only one part family. Constraint set (15) specifies that each machine can be assigned to only one cell. Constraint set (16) prevents each cell from being assigned more than m_k machine types. Constraint set (17) identifies exceptional elements. Constraint set (18) ensures that each exceptional element is either an exceptional machine (a machine needing to be duplicated) or an exceptional part (certain operations of the part needing to be transferred and processed at another cell). Constraint sets (17) and (18) can be combined into one constraint set; the decision variable U_ijk thus can be eliminated to decrease problem size. This model is fairly large in that it contains C(M+N+3D) binary decision variables and (N+M+C+DN) constraints. Though it can be solved using a popular mixed integer programming package such as LINDO, ZOOM, or MPSX, obtaining an optimal solution requires more intensive computation as problem size grows. A simulated annealing heuristic has been proposed in [17] to retain the model's practical value.
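A simulated annealing heuristic of the general kind proposed in [17] can be sketched in a few lines. This is not the published algorithm, only a minimal illustration under assumed data: the state is a machine-to-cell assignment, the cost counts exceptional elements (machine visits outside a part's home cell), and worsening moves are accepted with a temperature-controlled probability.

```python
import math
import random

def exceptional_elements(machine_cell, part_machines):
    """Count machine visits that fall outside a part's 'home' cell
    (the cell holding most of the machines the part needs)."""
    count = 0
    for machines in part_machines:
        cells = [machine_cell[j] for j in machines]
        home = max(set(cells), key=cells.count)
        count += sum(1 for c in cells if c != home)
    return count

def anneal(part_machines, n_machines, n_cells, steps=3000, seed=1):
    """Minimal simulated annealing over machine-to-cell assignments."""
    rng = random.Random(seed)
    state = [rng.randrange(n_cells) for _ in range(n_machines)]
    cost = exceptional_elements(state, part_machines)
    best_state, best_cost = list(state), cost
    temp = 2.0
    for _ in range(steps):
        j = rng.randrange(n_machines)
        old = state[j]
        state[j] = rng.randrange(n_cells)
        new_cost = exceptional_elements(state, part_machines)
        # Accept improving moves always, worsening moves with Boltzmann probability.
        if new_cost <= cost or rng.random() < math.exp(-(new_cost - cost) / temp):
            cost = new_cost
            if cost < best_cost:
                best_state, best_cost = list(state), cost
        else:
            state[j] = old  # reject the move: restore previous assignment
        temp = max(0.01, temp * 0.995)  # geometric cooling with a floor
    return best_state, best_cost

# Block-diagonal toy data: parts 0-1 need machines 0-1, parts 2-3 need machines 2-3.
part_machines = [{0, 1}, {0, 1}, {2, 3}, {2, 3}]
cells, ee = anneal(part_machines, n_machines=4, n_cells=2)
print(ee)  # a clean two-cell split yields 0 exceptional elements
```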
4.5. Model 4 [68]

Although the last model illustrated that more than one criterion can be considered jointly, in an aggregated format, for cell formation, that approach is applicable only when the criteria under consideration are combinable (that is, not in conflict with each other). If the criteria are quite distinct in nature (for instance, cost vs. time), then utility theory or goal programming is a more appropriate approach. In this example, a model with two distinct criteria is introduced. The first objective is to minimize total intercell movement; the second is to minimize total within-cell load variation between machines:

    Min F1 = Σ_{i=1}^{N} d_i [Σ_{k=1}^{C} X_ik − 1]    (19)

    Min F2 = Σ_{j=1}^{M} Σ_{k=1}^{C} Y_jk Σ_{i=1}^{N} (w_ij − m_ij)²    (20)

Subject to:

    Σ_{k=1}^{C} Y_jk = 1,    j = 1, …, M    (21)

    Σ_{j=1}^{M} Y_jk ≥ 2,    k = 1, …, C    (22)
Constraint set (21) ensures that each machine is assigned to only one cell. Constraint set (22) ensures that each cell contains at least two machines. The model not only can form part families and machine cells simultaneously but also takes processing time, machine capacity, and part demand into consideration. Though the formulation is very compact, consisting of (N+M)C binary decision variables and (M+C) constraints, solving a 0-1 integer bi-criteria programming model is much more difficult than solving a single-objective 0-1 integer programming model, because as yet there is no commercial optimization software available for such a purpose. The authors have proposed a genetic algorithm heuristic to demonstrate the model's applicability.
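A genetic algorithm along the general lines of the authors' heuristic can be sketched by scalarizing the two objectives with a weight w. The operators, parameters, surrogate objective functions, and toy data below are assumptions for illustration, not the published method.

```python
import random

def intercell_parts(cells, part_machines):
    """F1 surrogate: parts whose required machines span more than one cell."""
    return sum(1 for m in part_machines if len({cells[j] for j in m}) > 1)

def load_variation(cells, workload, n_cells):
    """F2 surrogate: squared deviation of per-cell load from the mean load."""
    loads = [0.0] * n_cells
    for j, c in enumerate(cells):
        loads[c] += workload[j]
    mean = sum(loads) / n_cells
    return sum((x - mean) ** 2 for x in loads)

def genetic_search(part_machines, workload, n_cells, w=0.5,
                   pop_size=30, generations=60, seed=7):
    """Toy genetic algorithm on a weighted sum of the two objectives."""
    rng = random.Random(seed)
    n = len(workload)

    def fitness(c):
        return (w * intercell_parts(c, part_machines)
                + (1 - w) * load_variation(c, workload, n_cells))

    population = [[rng.randrange(n_cells) for _ in range(n)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[:pop_size // 2]   # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # point mutation
                child[rng.randrange(n)] = rng.randrange(n_cells)
            children.append(child)
        population = parents + children
    return min(population, key=fitness)

# Toy instance: machines 0-1 serve parts 0-1, machines 2-3 serve parts 2-3.
part_machines = [{0, 1}, {0, 1}, {2, 3}, {2, 3}]
workload = [1.0, 1.0, 1.0, 1.0]
best = genetic_search(part_machines, workload, n_cells=2)
print(intercell_parts(best, part_machines),
      load_variation(best, workload, 2))  # a perfect split gives 0 and 0.0
```

Scalarization sidesteps the lack of bi-criteria solvers noted above, at the cost of having to choose the weight w.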
4.6. Model 5 [66]

Thus far, none of the above examples has considered processing sequences in the cell formation stage; neither has any directly determined the number of machines needed for each machine type. Model 5 illustrates how a 0-1 nonlinear programming model can be developed to meet such requirements. The objective function considers the trade-off between the total cost of intercell movement and total machine investment:

    Min Σ_{j=1}^{M} Σ_{k=1}^{C} I_j Z_jk + Σ_{i=1}^{N} H_i Σ_{l=1}^{J_i − 1} Σ_{k=1}^{C} |X_ik(l+1) − X_ikl|    (23)

Subject to:

    Σ_{k=1}^{C} X_ikl = 1,    i = 1, …, N; l = 1, …, J_i    (24)

    Σ_{i=1}^{N} Σ_{l=1}^{J_i} X_ikl t_ikl d_i / (T_j U_j) ≤ Z_jk,    j = 1, …, M; k = 1, …, C    (25)

    Σ_{j=1}^{M} Z_jk ≤ m_k,    k = 1, …, C    (26)

Constraint set (24) ensures that each operation of a part is carried out entirely in one cell; that is, operation splitting is not allowed. Constraint set (25) computes the number of machines required in each cell. Constraint set (26) restricts the maximum number of machines allowed in each cell. Notice that the nonlinear term of the objective function can be eliminated by the addition of dummy variables B_ikl and D_ikl and a constraint set (27):

    X_ik(l+1) − X_ikl = B_ikl − D_ikl,    i = 1, …, N; k = 1, …, C; l = 1, …, J_i − 1    (27)

As a result, the objective function can be restated as

    Min Σ_{j=1}^{M} Σ_{k=1}^{C} I_j Z_jk + Σ_{i=1}^{N} H_i Σ_{l=1}^{J_i − 1} Σ_{k=1}^{C} (B_ikl + D_ikl)    (28)
This model is the most complicated yet discussed. In terms of size, it consists of NCL binary and CM integer decision variables and L(N+M+NC)+C constraints, where L = Σ_{i=1}^{N} J_i. The authors, however, have developed both simulated annealing and tabu search heuristics to cope with the computational burden.
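The dummy-variable trick behind (27) and (28) can be checked numerically: writing x = B − D with B, D ≥ 0, a minimizing objective drives min(B, D) to zero, so B + D equals |x|. A small sketch (the sample values are arbitrary):

```python
def split_into_parts(x):
    """Represent x as B - D with B, D >= 0; in a minimization of B + D
    the optimizer sets min(B, D) = 0, so B + D equals |x|."""
    b = max(x, 0.0)
    d = max(-x, 0.0)
    return b, d

for x in (-3.0, 0.0, 2.5):
    b, d = split_into_parts(x)
    assert b - d == x        # the linear constraint, as in (27)
    assert b + d == abs(x)   # the linearized objective term, as in (28)
print("linearization check passed")
```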
5. SUMMARY AND CONCLUSIONS

Studying optimal solutions for cell formation has several advantages. First, unlike many other analytical methods (for instance, array-based methods), optimal formulations are insensitive to the existence of exceptional elements; one therefore does not need to expend effort in dealing with such elements before a final clustering can be made [13]. On the other hand, certain optimal models can be developed to find and to minimize the number of exceptional elements. Another benefit of using optimal algorithms is to have a benchmark against which to compare heuristics. The analytical approaches found in the literature are predominantly heuristic; without optimal solutions, it would be difficult to judge heuristics [72]. Still another benefit of examining optimal algorithms is to clarify the embedded logic so that efficient heuristics can be developed or an improved optimal algorithm created. Solving optimal cell formation problems, however, faces the "curse of the dimension" [72]: as problem size increases, computation intensifies. Furthermore, many optimal models require the cell number to be specified beforehand by means of informal judgment, iteration, or trial-and-error procedures, and in practice it is very difficult for managers to make such decisions.

This chapter has presented an extensive literature search and a survey of mathematical programming's use in cell formation. Several prototypical examples considering a variety of objectives, constraints, and structures were provided. Clearly, mathematical programming is one of the most popular analytical tools used by cell formation researchers. Several other observations can be made: (1) The mathematical programming approach is flexible enough to incorporate practical objectives and constraints into a model, even though the model may become too complicated to solve.
(2) Because the objectives and constraints that can be considered in cell formation are so numerous, selecting suitable objectives and constraints to meet individual needs becomes an important issue in model construction. (3) Most data needed for cell formation, such as routing, processing sequence, demand, machine capacity, and processing time information, can be obtained easily from the shop floor. Development of mathematical programming models for cell formation therefore does not face the same degree of difficulty in obtaining data as many other mathematical programming applications do. (4) Emerging technological breakthroughs bring genetic algorithms, simulated annealing, tabu search, expert systems, fuzzy modeling, and neural networks into the CM arena. These breakthroughs lessen the difficulty of solving large-scale optimization models. (5) Research on cell formation has experienced tremendous growth in recent years, and many studies have considered unique features such as alternative routings, processing sequences, sequence-dependent setups, and tooling requirements. Many research issues raised in early reviews remain largely unexplored. Throughout the course of this study, it has been observed that (1) the current literature on cell formation has used quite different terminologies, indices, and notations in modeling problems and (2) no suitable criterion yet can be used to
justify the relative performance of optimal procedures. Contributions can be sought to unify the field's terms, indices, and notations and, consequently, to guide the development of standardized building blocks of objective and constraint sets. In this chapter, we have laid the foundation for such an attempt. An extension of the project could simplify the modeling process and thereby help users focus attention on identifying and selecting appropriate objectives and constraints and on developing ever more efficient solution procedures.

REFERENCES
1. G.K. Adil, D. Rajamani and D. Strong, "A Mathematical Model for Cell Formation Considering Investment and Operational Costs," European Journal of Operational Research, 69, 3 (1993) 330-341.
2. M.U. Ahmed, N.U. Ahmed and U. Nandkeolyar, "A Volume and Material Handling Cost Based Heuristic for Designing Cellular Manufacturing Cells," Journal of Operations Management, 10, 4 (1991) 488-511.
3. I. Al-Qattani, "Designing Flexible Manufacturing Cells Using a Branch and Bound Method," International Journal of Production Research, 28, 2 (1990) 325-336.
4. A.S. Alfa, M. Chen and S.S. Heragu, "Integrating the Grouping and Layout Problems in Cellular Manufacturing Systems," Computers & Industrial Engineering, 23, 1-4 (1992) 55-58.
5. R.G. Askin and K.S. Chiu, "A Graph Partitioning Procedure for Machine Assignment and Cell Formation in Group Technology," International Journal of Production Research, 28, 8 (1990) 1555-1572.
6. F.F. Boctor, "A Linear Formulation of the Machine Cell Formation Problem," International Journal of Production Research, 29, 2 (1991) 343-356.
7. J.L. Burbidge, "Production Flow Analysis," The Production Engineer, 42 (1963) 742-752.
8. H.M. Chan and D.A. Milner, "Direct Clustering Algorithm for Group Formation in Cellular Manufacture," Journal of Manufacturing Systems, 1, 1 (1982) 65-75.
9. M.P. Chandrasekharan and R. Rajagopalan, "An Ideal Seed Non-Hierarchical Clustering Algorithm for Cellular Manufacturing," International Journal of Production Research, 24 (1986) 451-464.
10. C. Cheng, "A Tree Search Algorithm for Designing a Cellular Manufacturing System," OMEGA, 21, 4 (1993) 489-496.
11. F. Choobineh, "A Framework for the Design of Cellular Manufacturing Systems," International Journal of Production Research, 26, 7 (1988) 1161-1172.
12. C.H. Chu and P. Pan, "A Comparison of Hierarchical Clustering Techniques for Manufacturing Cellular Formation," Proceedings of the Third International Conference on CAD/CAM, Robotics, and Factory of the Future, Vol. II (1988) 151-154.
13. C.H. Chu and W. Lee, "A Comparison of 0-1 Integer Programming Models for Cellular Manufacturing," Proceedings of the National Decision Sciences Institute Conference, (1989) 181-184.
14. C.H. Chu, "Clustering Analysis in Manufacturing Cellular Formation," OMEGA, 17 (1989) 289-295.
15. C.H. Chu and M. Tsai, "A Comparison of Three Array-Based Clustering Techniques for Manufacturing Cellular Formation," International Journal of Production Research, 28 (1990) 1417-1433.
16. C.H. Chu and J.C. Hayya, "Fuzzy Clustering Approach to Manufacturing Cell Formation," International Journal of Production Research, 29 (1991) 1475-1478.
17. C.H. Chu and W.L. Shih, "Simulated Annealing for Manufacturing Cell Formation," Proceedings of the National Decision Sciences Institute Conference, (1992) 1520-1522.
18. C.H. Chu and C.C. Tsai, "Manufacturing Cell Formation in a Fuzzy Environment," Proceedings of the National Decision Sciences Institute Conference, (1993) 1432-1434.
19. C.H. Chu, "A Neural Network Model for Manufacturing Cell Formation," Working paper, Department of Management, Iowa State University, January 1994.
20. H.C. Co and A. Araar, "Configuring Cellular Manufacturing Systems," International Journal of Production Research, 26, 9 (1988) 1511-1522.
21. N.E. Dahel, "An Operation Sequence Based Model for Cell Formation Decisions," Proceedings of the National Decision Sciences Institute Conference, (1993) 1519-1521.
22. J.F. Ferreira Ribeiro and B. Pradin, "A Methodology for Cellular Manufacturing Design," International Journal of Production Research, 31, 1 (1993) 235-250.
23. G.V. Frazier, N. Gaither and D. Olson, "A Procedure for Dealing with Multiple Objectives in Cell Formation Decisions," Journal of Operations Management, 9, 4 (1990) 465-480.
24. T.A. Gongaware and I. Ham, "Cluster Analysis Applications for Group Technology Manufacturing Systems," Manufacturing Engineering Transactions, (1981) 503-508.
25. K.R. Gunasingh and R.S. Lashkari, "The Cell Formation Problem in Cellular Manufacturing Systems - A Sequential Modeling Approach," Computers & Industrial Engineering, 16, 4 (1989) 469-476.
26. K.R. Gunasingh and R.S. Lashkari, "Machine Grouping Problem in Cellular Manufacturing Systems - An Integer Programming Approach," International Journal of Production Research, 27, 9 (1989) 1465-1473.
27. K.R. Gunasingh and R.S. Lashkari, "Simultaneous Grouping of Parts and Machines in Cellular Manufacturing Systems - An Integer Programming Approach," Computers & Industrial Engineering, 20, 1 (1991) 111-117.
28. Y.P. Gupta, M.C. Gupta, C. Sundram and A. Kumar, "Minimizing Total Intercell and Intracell Moves in Cellular Manufacturing: A Genetic Algorithm Approach," Proceedings of the Decision Sciences Institute Conference, (1992) 1523-1525.
29. C. Han and I. Ham, "Multiobjective Cluster Analysis for Part Family Formations," Journal of Manufacturing Systems, 5, 4 (1986) 223-230.
30. G. Harhalakis, R. Nagi and J.M. Proth, "An Efficient Heuristic in Manufacturing Cell Formation for Group Technology Applications," International Journal of Production Research, 28, 1 (1990) 185-198.
31. N.L. Hyer and U. Wemmerlöv, "Group Technology Oriented Coding Systems: Structures, Applications and Implementation," Production and Inventory Management, 26 (1985) 55-78.
32. A.K. Kamrani and H.R. Parsaei, "A Group Technology Based Methodology for Machine Cell Formation in a Computer Integrated Manufacturing Environment," Computers & Industrial Engineering, 24, 3 (1993) 431-447.
33. J.R. King, "Machine-Component Group Formation in Group Technology," OMEGA, 8 (1980) 193-199.
34. J.R. King, "Machine-Component Grouping in Production Flow Analysis: An Approach Using a Rank Order Clustering Algorithm," International Journal of Production Research, 18 (1980) 213-232.
35. J.R. King and V. Nakornchai, "Machine-Component Group Formation in Group Technology: Review and Extension," International Journal of Production Research, 20 (1982) 117-133.
36. A. Kusiak, "The Part Families Problem in Flexible Manufacturing Systems," Annals of Operations Research, 3 (1985) 279-300.
37. A. Kusiak, "The Generalized Group Technology Concept," International Journal of Production Research, 25, 4 (1987) 561-569.
38. A. Kusiak and S.S. Heragu, "Group Technology," Computers in Industry, 9, 2 (1987) 83-91.
39. A. Kusiak and C. Cheng, "A Branch-and-Bound Algorithm for Solving the Group Technology Problem," Annals of Operations Research, 26 (1990) 415-431.
40. R.S. Lashkari, S.P. Dutta and G. Nadoli, "Part Family Formation in Flexible Manufacturing Systems - An Integer Programming Approach," in A. Kusiak (ed.), Modern Production Management Systems, Elsevier Science Publishers B.V., North-Holland, 1987, 627-635.
41. H. Lee and A. Garcia-Diaz, "A Network Flow Approach to Solve Clustering Problem in Group Technology," International Journal of Production Research, 31, 3 (1993) 603-612.
42. W. Lee and C.H. Chu, "An Efficient Heuristic for Grouping Part Families," Midwest DSI Proceedings, (1990) 62-64.
43. R. Logendran, "Methodology for Converting a Functional Manufacturing System into a Cellular Manufacturing System," International Journal of Production Economics, 29, 1 (1993) 27-41.
44. R. Logendran, "A Binary Integer Programming Approach for Simultaneous Machine-Part Grouping in Cellular Manufacturing Systems," Computers & Industrial Engineering, 24, 3 (1993) 329-336.
45. J. McAuley, "Machine Grouping for Efficient Production," The Production Engineer, 51 (1972) 53-57.
46. H. Min and D. Shin, "Simultaneous Formation of Machine and Human Cells in Group Technology: A Multiple Objective Approach," International Journal of Production Research, 31, 10 (1993) 2307-2318.
47. R. Nagi, G. Harhalakis and J.M. Proth, "Multiple Routings and Capacity Considerations in Group Technology Applications," International Journal of Production Research, 28, 12 (1990) 2243-2257.
48. G.F.K. Purcheck, "A Mathematical Classification as a Basis for the Design of Group-Technology Production Cells," The Production Engineer, (January 1975) 35-48.
49. G.F.K. Purcheck, "A Linear-Programming Method for the Combinatorial Grouping of an Incomplete Power Set," Journal of Cybernetics, 5, 4 (1975) 51-76.
50. D. Rajamani, N. Singh and Y.P. Aneja, "Integrated Design of Cellular Manufacturing Systems in the Presence of Alternative Process Plans," International Journal of Production Research, 28, 8 (1990) 1541-1554.
51. D. Rajamani, N. Singh and Y.P. Aneja, "A Model for Cell Formation in Manufacturing Systems with Sequence Dependence," International Journal of Production Research, 30, 6 (1992) 1227-1236.
52. D. Rajamani, N. Singh and Y.P. Aneja, "Selection of Parts and Machines for Cellularization: A Mathematical Programming Approach," European Journal of Operational Research, 62, 1 (1992) 47-54.
53. S. Sankaran, "Multiple Objective Decision Making Approach to Cell Formation: A Goal Programming Model," Mathematical Computational Modeling, 13, 9 (1990) 71-82.
54. S. Sankaran and R.G. Kasilingam, "On Cell Size and Machine Requirements Planning in Group Technology Systems," European Journal of Operational Research, 69, 3 (1993) 373-383.
55. R.P. Selvam and K.N. Balasubramanian, "Algorithmic Grouping of Operation Sequences," Engineering Costs and Production Economics, 9 (1985) 125-134.
56. S. Shafer and D.F. Rogers, "A Goal Programming Approach to the Cell Formation Problem," Journal of Operations Management, 10, 1 (1991) 28-43.
57. S. Shafer, G.M. Kern and J.C. Wei, "A Mathematical Programming Approach for Dealing with Exceptional Elements in Cellular Manufacturing," International Journal of Production Research, 30, 5 (1992) 1029-1036.
58. S.M. Shafer and D.F. Rogers, "Similarity and Distance Measures for Cellular Manufacturing: Part I - A Survey," International Journal of Production Research, 31, 5 (1993) 1133-1142.
59. S.M. Shafer and D.F. Rogers, "Similarity and Distance Measures for Cellular Manufacturing: Part II - An Extension and Comparison," International Journal of Production Research, 31, 6 (1993) 1315-1326.
60. A. Shtub, "Modeling Group Technology Cell Formation as a Generalized Assignment Problem," International Journal of Production Research, 27, 5 (1989) 775-782.
61. N. Singh, "Design of Cellular Manufacturing Systems: An Invited Review," European Journal of Operational Research, 69, 3 (1993) 284-291.
62. S. Song and K. Hitomi, "GT Cell Formation for Minimizing the Intercell Parts Flow," International Journal of Production Research, 30, 12 (1992) 2737-2753.
63. G. Srinivasan, T.T. Narendran and B. Mahadevan, "An Assignment Model for the Part-Families Problem in Group Technology," International Journal of Production Research, 28, 1 (1990) 145-152.
64. H.J. Steudel and A. Ballakur, "A Dynamic Programming Based Heuristic for Machine Grouping in Manufacturing Cell Formation," Computers & Industrial Engineering, 12, 3 (1987) 215-222.
65. A.J. Vakharia, "Methods of Cell Formation in Group Technology: A Framework for Evaluation," Journal of Operations Management, 6, 3/4 (May/Aug 1986) 257-271.
66. A.J. Vakharia, Y.L. Chang and H.M. Selim, "Cell Formation in Group Technology: A Combinatorial Search Approach," Proceedings of the National Decision Sciences Institute Conference, (1992) 1310-1312.
67. A.J. Vakharia and B.K. Kaku, "Redesigning a Cellular Manufacturing System to Handle Long-Term Demand Changes: A Methodology and Investigation," Decision Sciences, 24, 5 (1993) 909-930.
68. V. Venugopal and T.T. Narendran, "A Genetic Algorithm Approach to the Machine-Component Grouping Problem with Multiple Objectives," Computers & Industrial Engineering, 22, 4 (1992) 469-480.
69. V. Venugopal and T.T. Narendran, "Cell Formation in Manufacturing Systems Through Simulated Annealing: An Experimental Evaluation," European Journal of Operational Research, 63, 3 (1992) 409-422.
70. T. Vohra, D.S. Chen, J.C. Chang and H.C. Chen, "A Network Approach to Cell Formation in Cellular Manufacturing," International Journal of Production Research, 28, 11 (1990) 2075-2084.
71. J.C. Wei and N. Gaither, "An Optimal Model for Cell Formation Decisions," Decision Sciences, 21, 2 (1990) 416-433.
72. J.C. Wei and N. Gaither, "A Capacity Constrained Multiobjective Cell Formation Method," Journal of Manufacturing Systems, 9, 3 (1990) 222-232.
73. U. Wemmerlöv and N.L. Hyer, "Procedures for the Part Family/Machine Group Identification Problem in Cellular Manufacturing," Journal of Operations Management, 6 (1986) 125-147.
74. U. Wemmerlöv and N.L. Hyer, "Research Issues in Cellular Manufacturing," International Journal of Production Research, 25 (1987) 413-431.
Appendix A: Summary of model types and solution procedures

(The original appendix is a five-page table with columns for reference, model type, solution procedure, formulation logic (machine cells only, part families only, sequentially, or simultaneously), software/language, computer system, and comments. The formulation-logic check marks and the column totals could not be realigned with their rows; rows are listed below as reference, model type, solution procedure, software and computer where given, and comments where given.)

[1] Mixed integer. Divide the model into three submodels and then solve each via an optimal procedure. (Hyper LINDO; 486 PC.)
[2] Mixed nonlinear. The model was linearized and then solved by both optimal and iterative heuristics. (LINDO.)
[3] Network. Branch and bound.
[4] 0-1 nonlinear. Group machine cells via simulated annealing and then perform layout planning via a modified penalty heuristic. Comments: performs machine grouping and then layout planning; a quadratic assignment model is briefly introduced.
[5] 0-1 integer. Divide the model into two submodels and then solve each with a graph-theoretic heuristic. Comments: the first submodel assigns parts to machines; the second submodel groups machines into cells.
[6] 0-1 integer. Simulated annealing.
[10] 0-1 integer. Search algorithm.
[11] 0-1 integer. Part families were formed via single linkage clustering; an optimal procedure then was used to form machine cells.
[13] 0-1 integer. Optimal procedure.
[16] 0-1 integer. Fuzzy c-means clustering.
[17] 0-1 integer. Simulated annealing.
[20] 0-1 integer. An optimal procedure was used to assign jobs to machines; an extended rank order clustering procedure then was used to form machine cells, and a direct search method was used to group part families.

(Software and comments appearing on this page whose row assignments could not be recovered: LINDO, BASIC, Turbo PASCAL, Fortran; 286 PC, 486 PC, PC, VAX mini; perform cell formation and scheduling together; consider sequence-dependent setup; can find an approximate or appropriate number of cells; no intercell movement is allowed; allow only one machine for each type; need to specify an initial solution; can generate alternative solutions; assume that part families have been formed; consider processing sequence in similarity coefficients; need a set of formulations for each machine.)

[21] 0-1 integer. Optimal procedure. Comments: considers operation index; considers unidirected flow of parts.
[22] 0-1 integer. Two phases: select machines and assign parts to machines via the Knapsack algorithm, then form part families via the dynamic cloud heuristic. (Turbo PASCAL; PC.)
[25] 0-1 integer. Two phases: form machine cells, then assign parts to machines; both phases solved by optimal procedures. (SAS/OR; IBM 4381.) Comments: considers tooling requirements in part family formation.
[26a] 0-1 integer (Model 1). Optimal procedure. (SAS/OR; IBM 4381.) Comments: considers alternative routings and tooling requirements; assumes that part families are known.
[26b] 0-1 nonlinear (Model 2). The model was linearized and then solved by an optimal procedure. (SAS/OR; IBM 4381.) Comments: considers alternative routings and tooling requirements; assumes that part families are known.
[27a] 0-1 nonlinear (Model 1). The model was linearized and then solved by an optimal procedure. (SAS/OR; IBM 4381.) Comments: considers tooling requirements.
[27b] 0-1 nonlinear (Model 2). The model was linearized and then solved by an optimal procedure. (SAS/OR; IBM 4381.) Comments: considers tooling requirements.
[28] 0-1 integer. Genetic algorithm. (C.) Comments: considers alternate routings.
[29] 0-1 integer. Optimal procedure. (PC/XT.) Comments: based upon design features; uses a classification and coding scheme.
[30] Linear. The problem was modeled in two phases, with heuristic procedures used to solve both. (SUN, UNIX.) Comments: considers processing sequences.
[32] 0-1 integer. Two phases: form part families via a classification and coding scheme, then form machine cells; both phases solved via optimal procedures. (PASCAL, LINDO; PC.) Comments: considers design features; uses a classification and coding scheme; intercell movement is not allowed.
[36] 0-1 integer. Optimal procedure.
[37] 0-1 integer. Optimal procedure. Comments: considers alternative routings.
[38] 0-1 quadratic. Use the P-median model to obtain an approximate solution. Comments: considers alternative routings.
[39] Matrix. Branch and bound. (PASCAL; Prime mini.) Comments: can find an appropriate number of cells.
[40] 0-1 nonlinear fractional. The model was linearized and then a search algorithm used with embedded optimal procedures. (SAS/OR; IBM 4381.) Comments: considers tooling requirements.
[41] Network. Network flow algorithm. (RELAXT III; PC/AT.) Comments: can find an appropriate number of cells.
[42] 0-1 integer. Both optimal and heuristic procedures. (BASIC; 386 PC.) Comments: can find an appropriate number of cells.
[43] 0-1 quadratic. Three phases: group machine cells, assign parts to machine cells, then evaluate cell utilization; all solved by heuristics. (Quick C; 386 PC.) Comments: considers operation index; considers processing sequences.
[44] 0-1 quadratic. The model was linearized and then solved by an optimal procedure. (MPSX 370; IBM 4381.) Comments: considers operation index; considers processing sequences.
[46] Goal (mixed integer). Two submodels: form machine cells, then allocate operators to cells; both solved by heuristics. (ZOOM/XMP; VAX mini.) Comments: considers operator assignments; can find an appropriate number of cells.
[47] Mixed integer. Two submodels: routing selection via linear programming, then cell formation via a heuristic. (C; SUN, UNIX.) Comments: considers alternative routings; needs an initial solution to be specified.
[48] Linear. Simplex (matrix) method.
[49] Linear. Set partition algorithm.
[50a] 0-1 integer (Model 2). Optimal procedure. (LINDO.) Comments: considers alternative routings; assumes that part families are known.
[50b] 0-1 nonlinear (Model 3). The model was linearized and then solved by an optimal procedure. (LINDO.) Comments: considers alternative routings.
[51] Mixed integer. Optimal procedure. (LINDO.) Comments: considers sequence-dependent setup.
[52a] Linear. Revised simplex (heuristic) procedure. (Fortran; IBM 4381.) Comments: considers operation index; considers alternative routings.
[52b] Mixed integer. Branch and bound. (Fortran; IBM 4381.) Comments: considers operation index; considers alternative routings.
[53] Goal (mixed integer). Optimal procedure. (LINDO; IBM 4381.)
[54] 0-1 quadratic. The model was linearized and then divided into two models; the first solved via both optimal and heuristic procedures, the second via a heuristic. (LINDO; IBM 4381.)
[55] Linear. Optimal procedure. Comments: considers processing sequences in similarity coefficients.
[56a] Goal, Model 1 (mixed integer). Divide the model into two submodels and then solve each via an optimal procedure. (ZOOM; VAX mini.) Comments: considers sequence-dependent setup; purchase all new equipment.
[56b] Goal, Model 2 (mixed integer). Divide the model into two submodels and then solve each via an optimal procedure. (ZOOM; VAX mini.) Comments: considers sequence-dependent setup; use only existing equipment.
[56c] Goal, Model 3 (mixed integer). Divide the model into two submodels and then solve each via an optimal procedure. (ZOOM; VAX mini.) Comments: considers sequence-dependent setup; can add some new equipment to existing equipment.
[57] Integer. Optimal procedure. (LINDO; PC.) Comments: deals with exceptional elements after cell formation.
[60] General assignment. Subgradient procedure. Comments: considers alternative routings.
[62] Quadratic assignment. Lagrangean relaxation; branch and bound. Comments: can find an appropriate number of cells.
[63] Assignment. Use an assignment procedure to form machine cells and then search for part families. Comments: can find an appropriate number of cells.
[64] Dynamic. Search (heuristic) procedure. (Mac.) Comments: considers operation index.
[66] 0-1 nonlinear. The model was linearized and then solved by both simulated annealing and tabu search.
[67] Mixed nonlinear. The model was linearized and then solved via a heuristic procedure. (VAX mini.) Comments: deals with exceptional elements after cell formation; considers operation index; can be extended to consider processing sequences.
[68] 0-1 bi-criteria integer. Genetic algorithm. (PC.)
[69] 0-1 integer. Simulated annealing. (Mainframe.)
[70] Network. Network algorithm. Comments: can find an appropriate number of cells.
[71] 0-1 integer. Optimal procedure. (ZOOM; supercomputer.) Comments: no inter- or intra-cell movement is allowed.
[72] Goal (0-1 integer). Divide the model into two submodels; the first solved via an optimal procedure, the second via a heuristic. (ZOOM; mainframe and supercomputer.)
38
Appendix B: Summary of objective functions

[Tabular data not reproduced. For each reference the table marks which coefficient-based measures, cost-based measures (O5 through O11, including miscellaneous costs such as WIP, tooling, fixtures, labor, machine duplication, subcontracting, space utilization, and opportunity costs), and remaining measures (O12 through O16, including minimizing total distances, total setup times, total machining times, total machine hours, within-cell load variation, and intracell/intercell load imbalance, and maximizing total similarity, average cell utilization, and total amount of parts produced) appear in the objective function, and whether the objective format is single, aggregated, or multiple. Column totals are given in the source.]
Appendix C: Summary of constraints

[Tabular data not reproduced. For each reference the table marks the logical constraints (C1 to C4), cell size constraints (C5 to C8), physical constraints (C9 to C12), and modeling constraints (C13 to C18) used, together with miscellaneous constraints such as unique operation, routing, sequence, and operator assignments; constraints linking operations, parts, and machines or linking routings with machines; sequence-dependent constraints; limits on the number of cells, machines procured, operators per cell, and parts subcontracted; minimum levels of machine similarity, tool similarity, and movements; layout constraints; and constraints embedded in network, assignment, or dynamic programming models. Column totals are given in the source.]
Appendix D: Summary of required input data

[Tabular data not reproduced. For each reference the table lists the required input data: routing information (binary or sequence); product demand; times (processing, setup, inspection); machine capacity; cell size limits (machines, parts, operators); number of machine types; costs (budget, handling — intercell and/or intracell — inspection, machine, cell overhead, setup, tooling, plus others such as machine idle cost, WIP cost, subcontracting cost, and opportunity cost); similarity or dissimilarity coefficients (for machines, parts, or others such as tools and operators); and other data such as the number of cells, batch sizes, distances, available space, tooling requirements, design features, alternative routings, wage rates, skill matching factors, and maximum machine utilization.]
Planning, Design, and Analysis of Cellular Manufacturing Systems A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors) © 1995 Elsevier Science B.V. All rights reserved.
An Industrial Application of Network-Flow Models in Cellular Manufacturing Planning

Alberto Garcia-Diaz^a and Hongchul Lee^b

^a Texas A&M University, College Station, TX 77843, USA
^b Korea University, Seoul, Korea

ABSTRACT

A network flow methodology is developed for grouping machines into cells and forming part families in cellular manufacturing. A three-phase methodology allows the study of important considerations related to number of cells, number of machines in each cell, and part family size. The first phase uses a functional relationship between machines based on the operation sequences of the parts to be manufactured to provide the parameters needed for a network representation of the problem. The second phase performs a network partitioning procedure to group machines into cells. The third phase identifies the part families through a network approach that allows the identification of a feasible assignment of parts to machine cells satisfying a restriction on part family size. An industrial application of the proposed methodology is presented.

1. INTRODUCTION

The process of planning a cellular manufacturing system is essentially based on the recognition and quantification of the production flow relation between machines. This relationship is created by the operational requirements, expressed in terms of machine sequences, of the parts that need to be processed. A comprehensive approach to modelling a cellular manufacturing system should include all significant variations of cell formation, particularly those related to number of cells, cell size, and cell utilization. The purpose of this article is two-fold. First, a network-flow approach to cell formation and part family identification is developed. Second, an industrial application of the network-flow approach is presented and the corresponding results are summarized.
The measure of effectiveness used in this approach has been chosen as the minimization of the overall intercellular flow of parts. This minimization process is conducted by means of a state-of-the-art optimization procedure known as a "relaxation method" (Bertsekas and Tseng 1988), which belongs to a special class of iterative ascent methods. Once the machine cells are designed, the parts to be manufactured are integrated into part families based on the machine requirements, and in such a way that no part family size exceeds a specified upper bound. The following three important scenarios of the cell formation process will be considered:
Case 1. Unrestricted number of cells and unrestricted cell size
Case 2. Restricted number of cells and unrestricted cell size
Case 3. Restricted number of cells and restricted cell size
In Case 1 no desired number of machine cells is stipulated in advance, and no limits are imposed on cell size. In Case 2 and Case 3 the number of machine cells is specified as a result of considering factors such as space limitations on the shop floor, number of required tools of each part, and scheduling considerations. Additionally, Case 3 allows the consideration of an upper bound on the number of machines in any cell. This article is organized as follows. Existing algorithms and solution procedures for solving a special case of the problem to be investigated, as well as other related problems, are briefly reviewed in Section 2. Section 3 presents the overall approach to the machine-part grouping problems along with a short discussion of each proposed network flow methodology. An industrial application of the proposed methodology is presented in Section 4. Section 5 has a few final recommendations.

2. LITERATURE REVIEW

The cell formation problem can be conceptually viewed as being directly related to the problem of partitioning a network into subnetworks. Although the current technical literature in the areas of operations research and manufacturing systems boasts a large number of heuristics and optimization procedures for group technology, most procedures do not guarantee globally optimal solutions and have limitations on the size of problems that can be solved in a reasonable (polynomial) computer time. In this section, various approaches to the network partitioning problem are briefly discussed. Graph-theoretic techniques for Case 1 have been developed by Rajagopalan and Batra (1975) and Lee and Hwang (1991). Rajagopalan and Batra used the idea of cliques in a graph as a basis to decompose the graph into subgraphs.
Once all cliques in the graph are found by using an algorithm due to Bron and Kerbosch, they are merged to form subgraphs, using a modified version of an algorithm due to Kernighan and Lin. The major limitation of this approach comes from the result that the clique identification problem is NP-complete. Lee and Hwang developed a heuristic algorithm based on the construction of a maximum spanning tree. The tree is then partitioned into subtrees by a hierarchical divisive algorithm. A drawback of this method is the non-uniqueness of a maximum spanning tree. As a result, the method may generate more than one solution, so that both qualitative and quantitative evaluations are needed after generating the subsets. Mulvey and Crowder (1979) proposed an optimization algorithm to solve the p-median model for Case 2 based on Lagrangian relaxation, employing a subgradient method for determining lower bounds and a simple search procedure for determining upper bounds. The algorithm by Mulvey and Crowder computes the lower bounds based on a heuristic algorithm developed by Ward (1963). The number of iterations required by the subgradient method was reduced by Kusiak et al. (1986) by using a simpler procedure due to Arthanari and Dodge (1973)
for computing lower bounds. Another solution procedure for solving the p-median model is a dynamic programming (DP) algorithm developed by Jensen (1969). Although an effective approach to reducing the computational time, the DP approach requires more computer memory than is needed under a total enumeration strategy. Khumawala (1972) and Christofides and Beasley (1982) have developed tree search procedures based on the branch-and-bound concept to determine lower bounds. The identification and verification of a converged optimal solution is more tedious than with other heuristic procedures, and considerably large problems may not be solvable by this approach. Kernighan and Lin (1970) have developed a heuristic method to partition a given number of nodes into p subnetworks of size n based on a pair-wise subset exchange routine with computational complexity of the order O(n^2). The technique assumes that the subnetwork size is known in advance. Kumar et al. (1986) have developed a 0-1 quadratic programming (QP) model for Case 3. Their formulation maximizes the sum of "costs" (cumulative quantities to be minimized) within all p subnetworks in such a way that each node belongs to exactly one subnetwork, and each subnetwork does not exceed the limit on the number of nodes. The eigenvector approach due to Barnes (1982) was modified to approximate the global solution of the 0-1 QP.

3. NETWORK FLOW APPROACH
The methodology consists of three phases: (a) preprocessing for computing the material flow between machines for a network representation of the problem; (b) network modeling and solution procedure for creating machine cells; and (c) part family formation subject to a constraint on family size. The conceptual approach is shown in Figure 1. To proceed in a more systematic fashion, the network model to be formulated in the preprocessing phase of Figure 1 will be represented as G = (N, A), where N is the set of nodes (or machines) and A is the set of arcs. In Case 1 there is a directed arc e_ij connecting node i to node j if any number of parts are processed by machine j immediately after being processed by machine i. The weight of arc e_ij is denoted by p_ij and it is computed as the total number of parts moved from machine i to machine j. Alternatively, in Cases 2 and 3 two nodes are joined by an undirected arc if there is any part movement between the machines represented by the nodes. In these cases, the weight of the arc e_ij between machines i and j is also denoted by p_ij but it is computed as the sum of parts moved (in either direction) between the two machines. After representing the problem as a network model in the first phase, a network partitioning procedure is required to form machine cells in the second phase. The objective of the cell formation model proposed in this article is to minimize intercellular movements of material (which will reduce the setup times and material handling costs). In order to meet this objective, machines having strong relationships (high number of part movements) should be grouped together, and machines having weak relationships (low number of part movements) should be separated. This can be accomplished systematically using the concept of network partitioning.
[Figure 1. Overall conceptual approach: a preprocessing phase builds the network representation (nodes: machines; arc weights: material flows between machines) from the operational sequences; a cell formation phase applies a network model, a 0-1 LP model (when the number of cells is specified), or a 0-1 QP model (when the number of machines in each cell is specified), each with a network-flow-based solution procedure; a part family formation phase applies a 0-1 LP model with a network-flow-based solution procedure, considering part family size.]

A partitioning of the nodes of network G into two subsets N1 and N2 satisfies the two conditions N = N1 ∪ N2 and N1 ∩ N2 = ∅. The set of all arcs which have one node in N1 and another one in N2 is known as a cut-set and is denoted by (N1, N2). The total number of part movements associated with this partition is defined by
C(N1, N2) = Σ_{i ∈ N1} Σ_{j ∈ N2} p_ij
A minimal cut-set or partition is defined as the one associated with a minimal value of C(N1, N2). This minimal-value partitioning corresponds to the rearrangement of machines into two cells with minimal interaction (part moves) between them. As an illustration, Figure 2 shows the partitioning of a network into two subnetworks. Here N = {1, 2, 3, 4, 5, 6, 7} and A = {e12, e13, ..., e67}, N1 = {1, 2, 5}, N2 = {3, 4, 6, 7}, (N1, N2) = {e13, e14, e24, e45, e57}, and C(N1, N2) = 21. This minimum total number of part movements corresponds to five arcs.
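As a quick sketch, the cut-set value C(N1, N2) can be computed directly from an arc-weight table. The weights below are hypothetical illustration data, not the Figure 2 values:

```python
def cut_weight(weights, n1, n2):
    """Total part movements C(N1, N2): sum of arc weights p_ij over all arcs
    with one endpoint in n1 and the other in n2."""
    return sum(w for (i, j), w in weights.items()
               if (i in n1 and j in n2) or (i in n2 and j in n1))

# Hypothetical arc weights (number of parts moved), not the Figure 2 data.
weights = {(1, 2): 7, (1, 3): 2, (2, 5): 6, (3, 4): 5, (2, 4): 3, (4, 5): 1}
n1, n2 = {1, 2, 5}, {3, 4}
c = cut_weight(weights, n1, n2)  # arcs (1,3), (2,4), (4,5) cross -> 2 + 3 + 1 = 6
```

A minimal partition is then the choice of (n1, n2) that minimizes this value.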
[Figure 2. Illustration of the network partitioning concept]

The partitioning concept illustrated in Figure 2 can be generalized to any number of subnetworks p. In this case, we are interested in determining p mutually exclusive sets N1, N2, ..., Np such that the total sum C(N1, N2, ..., Np) of parts moved along the p cut-sets is minimized. In this article we propose a specialized network flow methodology to identify the partitioning associated with the minimal value. That is, a procedure will be developed to arrange the machines into cells in such a way that the total inter-cell material flow is minimized. We will consider procedures for both the case where p is known and the case where it is unknown. In the third and final phase the parts to be manufactured are assigned to the machine cells, already identified in the second phase, on the basis of the machine requirements, considering a specified maximal size for any part family. A network programming model is formulated to represent the part family formation problem. The solution of this model specifies a feasible assignment of parts to the machine cells resulting in the maximization of the total sum of suitability values defined for all possible part-cell combinations.
3.1 Preprocessing Phase
The objective of this subsection is to describe a relationship for computing the material flow between machines i and j based on the operational machine sequences and production volumes of the parts to be integrated into families. In order to formally express the arc weight p_ij associated with arc e_ij between machines i and j, let us introduce the notation η_ijk = 1 if the kth part is processed at machine j immediately after machine i; and η_ijk = 0, otherwise. Moreover, let v_k be the production volume of the kth part, k = 1, ..., K. Thus, we can now write

p_ij = Σ_{k=1}^{K} η_ijk v_k
where i,j = 1, ..., M.
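The arc weights p_ij can be accumulated in one pass over the routing data. A minimal sketch, with hypothetical routings and volumes (not the chapter's industrial data):

```python
from collections import defaultdict

def flow_matrix(routings):
    """Accumulate p_ij = sum over parts k of eta_ijk * v_k, where each routing
    is (machine_sequence, production_volume). Data below is hypothetical."""
    p = defaultdict(int)
    for sequence, volume in routings:
        for i, j in zip(sequence, sequence[1:]):  # consecutive machines i -> j
            p[(i, j)] += volume                   # eta_ijk = 1 for this pair
    return dict(p)

routings = [([1, 4, 6], 100), ([1, 4, 2], 50), ([4, 6], 30)]
p = flow_matrix(routings)  # p[(1, 4)] == 150, p[(4, 6)] == 130, p[(4, 2)] == 50
```

For the undirected networks of Cases 2 and 3, p_ij and p_ji would simply be summed into a single weight.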
3.2 Cell Formation Phase
CASE 1: The fundamental network modeling concept for solving the problem being considered in Case 1 is to generate directed circuits (cycles or loops) that connect the nodes of a network. In this network nodes represent machines, arcs represent relationships between machines (generated by part operational sequences), and arc weights are defined as the number of parts moved between machines. A circuit is a finite sequence of connected nodes such that the first and last nodes are identical. The optimal solution to this problem is the set of circuits such that the total number of parts moved within the circuits is a maximum. Each circuit corresponds to one cell consisting of those machines represented by the nodes in the circuit. The network approach can be divided into the following steps:
Step 1. Represent node i in the original network as arc (i1, i2) in a new network.
Step 2. For each arc (i, j) in the original network create an arc (i2, j1) in the new network. Set both the lower and upper bounds on arc flow equal to 1, to ensure that one unit of flow will go along this arc in any feasible solution. Note that the consequence of this is that one unit of flow will move through each node in the original network.
Step 3. For each arc in the new network define its "cost" as the negative of the number of parts moved between the corresponding machines in the original network.
Step 4. Solve the minimal-cost problem on the new network using the RELAXT-III code (Bertsekas and Tseng 1990) to identify the optimal solution. Each connected sequence of arcs having flows equal to one in the optimal solution corresponds to a machine cell in the original problem.
It is noted that in Step 4 the network problem is solved as a "cost minimization" model. For this reason the original arc parameters are multiplied by -1 in Step 3. More details on the above procedure along with computational results can be found in Lee and Garcia-Diaz (1993).
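Steps 1–3 amount to building an arc list with lower/upper bounds and negative costs for a min-cost flow solver. The sketch below reflects one reading of the transformation — it is an assumption, not the authors' code — in which the [1,1] bounds sit on the node-splitting arcs so that exactly one unit of flow passes through every machine node, while inter-machine arcs get bounds [0,1] and cost -p_ij; the solver itself (RELAXT-III in the chapter) is not reproduced:

```python
def build_case1_network(machines, flows):
    """Build the transformed network of Steps 1-3 as a list of arcs
    (tail, head, lower, upper, cost). Assumption: unit bounds are placed on
    the node-splitting arcs, forcing one unit of flow through each machine
    node; inter-machine arcs get bounds [0,1] and cost -p_ij."""
    arcs = []
    for i in machines:
        arcs.append(((i, 1), (i, 2), 1, 1, 0))    # Step 1: node i -> arc (i1, i2)
    for (i, j), p in flows.items():
        arcs.append(((i, 2), (j, 1), 0, 1, -p))   # Steps 2-3: cost = -p_ij
    return arcs

arcs = build_case1_network([1, 2, 3], {(1, 2): 5, (2, 3): 4, (3, 1): 2})
```

Any min-cost circulation code accepting lower bounds could then recover the circuits, each of which corresponds to one machine cell.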
CASE 2: The objective of the network partitioning problem for this case is to minimize the total weight of the arcs in the cut-sets separating the subnetworks. This is equivalent to maximizing the sum of material flows within cells subject to a specified number of machine cells. This problem can be solved using the p-median model (Kusiak et al. 1986), which is a 0-1 integer linear programming formulation. The objective of the p-median model is to select p cells from a set containing a known number of machines in such a way that the sum of parts moved within cells is maximized. In this article we propose an alternative, and computationally more efficient, network procedure. A network is said to be a bipartite network if all its nodes belong to either of two subsets and all its arcs connect nodes between these subsets. The basic network flow procedure to solve the problem considered in this case can be presented as follows, after transforming the original network (an example of which is shown in Figure 2) into an equivalent directed network. This transformation is easily done by replacing each undirected arc by two directed arcs with opposite orientations and having the same arc parameter.
Step 1. For node i (i = 1, 2, ..., M) in the original network create two nodes i_a and i_b in a new network. For each arc (i, j) in the original network create a directed arc (i_a, j_b) in the new network. As a result of this, the new network is a bipartite network. Note that all nodes are either source nodes or terminal nodes.
Step 2. Transform the bipartite network into an equivalent circulation network by performing the following activities: (a) introduce a super-source node and connect it to all source nodes; (b) create a super-terminal node and connect to it all the terminal nodes; (c) connect the super-terminal node to the super-source node with a return arc.
Step 3.
For each arc (i_a, j_b) of the bipartite network define a lower bound equal to 0 and an upper bound equal to 1 on the arc flow. Additionally, define its "cost" as the negative of the number of parts moved from machine i to machine j in the original problem. For all arcs directed from the super-source and all arcs directed to the super-terminal, the triplets containing a lower bound, upper bound, and per-unit cost are defined as [1,1,0] and [0,∞,0], respectively. Furthermore, the triplet for the return arc is defined as [M,M,0].
Step 4. Use the RELAXT-III code (Bertsekas and Tseng 1990) to solve the minimal cost flow problem on the circulation network. There is one cell associated with each terminal node receiving positive flow. The additional machines in each cell correspond to the source nodes from which flow was sent to each terminal node. If the number of terminal nodes receiving flow is exactly equal to p the problem has been solved. Otherwise, the parameters of the bipartite network are further adjusted to force the desired number of cells, as indicated by Lee and Garcia-Diaz (Working Paper 1993), and this step is repeated. A summary of the parameter adjusting procedure mentioned in Step 4 follows: If there are more than p nodes receiving flow, the corresponding flow cost is computed for each node
and the p nodes associated with the p lowest costs are chosen. However, if the number of nodes receiving flow is less than p, then the upper bounds on flow along those arcs connected to the terminal are reset from ∞ to the integer value immediately larger than or equal to M/p. After this the min-cost problem is solved again.
CASE 3: The p-median model does not consider the cell size (number of machines in each cell), but it can be modified to become a 0-1 quadratic programming model which imposes a constraint on cell size (Kumar et al. 1986). The more efficient network approach proposed in this article is actually a modified version of the procedure for solving the problem considered in Case 2. The fundamental difference between the two cases is that in the last cost minimization problem solved in Step 4 above, the lower bound, upper bound, and cost parameters for each arc (j_b, T) connecting the terminal node j_b to the super-terminal node T are [0,∞,0] for Case 2 and [L_bT,U_bT,0] for Case 3, where L_bT and U_bT are the lower and upper bounds (expressed as number of machines) for the cell corresponding to terminal node j_b. The solution to the problem considered in either Case 2 or Case 3 can be identified from the network optimization results as follows. One cell corresponds to each terminal node receiving flow. In addition to the machine represented by this node, all machines represented by source nodes connected to the terminal node by means of arcs with positive flow are also included in the cell.
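The cell-recovery step just described (one cell per terminal node receiving flow, plus the source machines that sent flow to it) can be sketched as follows; the flow values are hypothetical, standing in for a RELAXT-III solution:

```python
def recover_cells(flow):
    """Given optimal arc flows on the bipartite part of the circulation network
    (keys are (source_machine, terminal_machine) pairs, values are 0/1 flows),
    return the machine cells: one cell per terminal node receiving flow,
    containing machine j plus every machine i that sent flow to j."""
    cells = {}
    for (i, j), f in flow.items():
        if f == 1:
            cells.setdefault(j, {j}).add(i)
    return sorted(cells.values(), key=min)

# Hypothetical optimal flows: machines 1 and 2 send flow to terminal 2;
# machines 3 and 4 send flow to terminal 4.
flow = {(1, 2): 1, (2, 2): 1, (3, 4): 1, (4, 4): 1, (1, 4): 0, (3, 2): 0}
cells = recover_cells(flow)  # -> [{1, 2}, {3, 4}]
```

The same recovery logic applies in Case 3, since only the bounds on the terminal arcs change.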
3.3 Part Family Formation Phase
Once machine cells are identified, the corresponding part families can be formed by assigning parts to cells based on the machine requirements of each part. One of the principal disadvantages of cellular manufacturing is an uneven distribution of parts between cells. As a result, the final formation may not be feasible or desirable due to the overloading of some cells. The proposed network approach assigns parts to families without exceeding a specified upper bound on the family size. Let S_kc be an index which measures the degree of suitability associated with part k (k = 1, 2, ..., K) and cell c (c = 1, 2, ..., p). In this article, the suitability index will be defined as the Hamming metric (Anderberg 1973). This measure is a symmetric function which takes on a low value when a part and a cell are not very suitable, and increasing values as the degree of suitability increases. For example, if part k has the machine sequence 1-4-6-9 and cell c consists of machines 1, 2, 4, 6 and 12 in a system with 15 machines, then we can generate the following two vector representations for part k and cell c, respectively:
[1 0 0 1 0 1 0 0 1 0 0 0 0 0 0 ] [1 1 0 1 0 1 0 0 0 0 0 1 0 0 0 ] Using the Hamming distance concept, we scan the two vectors and compare the jth entry in one vector to the jth entry in the other vector, considering j = 1, 2, ..., 15. The suitability index is in this case defined as the number of times the two entries are equal. Thus, Skc = 12.
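The suitability computation is easy to reproduce. A minimal sketch encoding the worked example above (part routing 1-4-6-9, cell machines {1, 2, 4, 6, 12}, 15 machines):

```python
def suitability(part_machines, cell_machines, n_machines):
    """Hamming-style suitability index S_kc: the number of positions at which
    the part's 0-1 machine-usage vector and the cell's 0-1 machine-membership
    vector agree."""
    part_vec = [1 if m in part_machines else 0 for m in range(1, n_machines + 1)]
    cell_vec = [1 if m in cell_machines else 0 for m in range(1, n_machines + 1)]
    return sum(a == b for a, b in zip(part_vec, cell_vec))

# Worked example from the text: routing 1-4-6-9, cell {1, 2, 4, 6, 12}.
s = suitability({1, 4, 6, 9}, {1, 2, 4, 6, 12}, 15)  # -> 12
```

The two vectors disagree only at positions 2, 9, and 12, so the index is 15 - 3 = 12, matching the text.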
The detailed network methodology for the part family formation process can be found in Lee and Garcia-Diaz (Working Paper 1993). A summary of the solution procedure follows:
Step 1. Construct a bipartite network with source nodes representing parts and terminal nodes representing cells.
Step 2. For each arc (k, c) connecting part k to cell c in the bipartite network, define the lower bound, upper bound, and cost parameters as [0,1,-S_kc].
Step 3. Create a super-source node S and a super-terminal node T. For those arcs (S, k) connecting the super-source to source nodes define the parameters [1,1,0]. Additionally, for arcs (c, T) connecting terminal nodes to the super-terminal node define the parameters [L_cT,U_cT,0], where L_cT and U_cT are the lowest and highest number of parts allowed in the part family processed by cell c.
Step 4. Use the RELAXT-III code to solve the minimal-cost flow problem associated with sending exactly K units of flow from the super-source node.
The part families are identified from the network optimization results by grouping together all parts represented by source nodes linked to the same terminal node by means of arcs having positive flows. Each part family is assigned to the cell represented by the corresponding terminal node.

4. AN INDUSTRIAL APPLICATION OF THE NETWORK APPROACH

The purpose of this section is to present and discuss an actual application of the proposed network methodology. Realistic input data from an industrial company located in a north-eastern state was collected and processed before using the network-based methodology. The confidentiality of the company will be kept, but enough information will be provided to make this illustration cohesive and meaningful. The company that provided the input data to be analyzed in this section has two production plants, here referred to as Plant A and Plant B, located in two different cities of the state. Plant A has 44 machines, and Plant B has 99 machines.
The production facilities of the company include a total of approximately 150 machines, such as lathes, drills, presses, and other machines used to manufacture hydraulic cylinders and pistons, power transmission gears, links, and other mechanical parts. A total of 5,500 unique operational sequences result from the manufacturing processes of about 40,000 parts involved in the production of a lift and hoist. The complete input data provided by the company consists of the following two computer files:
• Machine Resource List
• Operation Routing List

The first file contains information on machines and their locations; the second contains information on the machine sequences needed to process the parts, as well as the corresponding production quantities. Table 1 shows a sample of the machine resource list. The first column contains the 5-digit code of each machine or operation; the second column has the name of each piece of equipment or a description of the corresponding operation; the third column indicates the location of the machine (a code for the plant location and a code for the corresponding department in the plant).
Table 1. Sample Machine Resource List

  Code    Equipment/Operation               Location
  10514   RADIAL DRILL                      MCB1010
  10516   BENCH DRILL                       MCB1010
  21535   TIME SAVER SLAG GRINDER           MCB1921
  22000   MAJOR WELD                        MCB0322
  22030   FRAME ASSEMBLE TACK AND WELD      MCB0322
  22500   CLIMAX PORTABLE MILLING MACHINE   MCB0322
  22800   COUNTERWEIGHT/TOW BAR INST.       MCB0322
  23000   BOOM WELD                         MCB0323
  23514   SPEEDMASTER RADIAL DRILL          MCB0323
  27557   PANGBORN ROTO-BLAST               MCB0927
  27588   HAND CLEAN                        MCB0957
  27562   HYDRAULIC PRESS                   MCB0923
  28000   SUB-ASSEMBLY WELD                 MCB0328
  28300   CONCRETE COUNTERWEIGHT            MCB0745
  45671   PRIMER                            BED0147
  47557   ROTOBLAST                         BED0147
  47558   CLEANER                           BED0147
  50MCB   STOCK                             MCB1065
  70000   INSPECT AND TEST                  MCB0070
Table 2 shows a sample of the operational routing list, which includes the following information: (a) operation sequence code; (b) number of parts using this sequence; (c) machine sequence in terms of machine codes, followed by the plant location code. The collected data were initially examined to eliminate purchased and single-operation parts, as well as non-machining operations such as dispatching, assembly, packing, and storage, from the part routing list. After this initial screening, the preprocessing phase of the network approach was executed to compute the total number of inter-machine part moves. This number was calculated as 6,487 for Plant A and 54,674 for Plant B.
Table 2. Sample Part Operation Routing List

  Sequence  Parts  Machine Sequence
  3450      1      50003 22000 47557 47558 47671 50FTL MCB
  3451      1      50003 22000 47557 47558 47671 MCB BED
  3452      1      50003 22000 MCB
  3453      3      50003 22000 70000 27557 27558 45671 MCB
  3454      1      50003 22000 70000 27557 27558 45671 99000 MCB BED
  3455      1      50003 22000 70000 27558 45671 MCB
  3456      1      50003 22000 70000 99000 70000 MCB
  3457      1      50003 22000 99000 MCB
  3458      1      50003 22030 27557 27558 45671 MCB
  3459      3      50003 23000 10514 23000 27557 45671 MCB
  3460      1      50003 23000 10514 23000 MCB
  3461      2      50003 23000 10514 27557 27558 45671 MCB
  3462      2      50003 23000 10514 27557 45671 50MCB
  3463      3      50003 23000 10514 27558 27557 45671 MCB
  3464      1      50003 23000 10514 MCB
  3465      1      50003 23000 10514 70000 27558 27557 45671 MCB
  3466      2      50003 23000 27557 27558 45671 MCB
  3467      1      50003 23000 23514 23000 23514 23000 27557 45671 MCB
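The preprocessing computation of inter-machine part moves described above can be sketched as follows; the routing entries are invented, not the company's data:

```python
from collections import Counter

# Invented routing entries: (number of parts using the sequence, machine
# sequence). Each consecutive machine pair in a sequence is one move leg,
# weighted by the number of parts that follow the sequence.
routings = [
    (3, ['50003', '22000', '70000', '27557', '27558', '45671']),
    (2, ['50003', '23000', '10514', '27557', '45671']),
    (1, ['50003', '22000']),
]

moves = Counter()
for quantity, sequence in routings:
    for a, b in zip(sequence, sequence[1:]):
        moves[(a, b)] += quantity  # directed inter-machine move count

total_moves = sum(moves.values())
print(total_moves)
```

The pairwise move counts in `moves` are exactly the data the network approach needs to weight machine-to-machine arcs, and `total_moves` is the figure reported per plant above.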
Table 3 summarizes some important characteristics of the industrial data to be investigated. This table shows the average number of operations, as well as the minimum and maximum number of operations for each part. Since Plant A has 44 machines and the average number of operations for parts manufactured in this plant is 7, the number of cells for this plant can be set equal to about 6 in both of the scenarios considered in Case 2 and Case 3. Additionally, the minimum and maximum numbers of operations on the parts processed in each plant were determined to be 2 and 10 for Plant A, and 3 and 12 for Plant B. These values can be used as lower and upper bounds on the cell size for Case 3.
Table 3. Important Characteristics of Industrial Data

  Characteristic                    Plant A    Plant B
  Number of Machines                     44         99
  Number of Parts                      3803      25303
  Number of Operation Sequences        1001       4236
  Number of Parts Moved                6487      54674
  Average Number of Operations            7          8
  Minimum Number of Operations            2          3
  Maximum Number of Operations           10         12
Table 4. Summary of Results for the Industrial Application

                                          Case 1            Case 2             Case 3
  Evaluation Factors                  Plant A  Plant B   Plant A  Plant B   Plant A  Plant B
  Number of Cells                         18       37        6       12         6       12
  Number of Parts Moved Within Cells    1589    14598     5571    43520      5137    39529
  Cell Efficiency (%)                   24.5     26.7     82.8     76.9      79.2     72.3
  Lower and Upper Bounds on                               L = 0    L = 0     L = 100  L = 200
    Family Size                                           U = ∞    U = ∞     U = 200  U = 500
  Computing Time (in Seconds)                             34.2     128.3     34.5     129.5
Due to the large size of the problem, it is difficult to present the entire list of machine cells and part families. The final results for the machine cell and part family formation procedures are shown in Table 4. This table shows the number of cells, the number of part moves within cells, the cell efficiency (the ratio of the total number of part moves within cells to the total number of part moves in the system) used to evaluate the solutions obtained, the part family size bounds, and the total computing time required for cell and part family formation. The cell efficiency for Case 2 is slightly higher than that of Case 3 for the same number of desired cells, due to the relaxation of the cell size restrictions. It is noted that cell efficiency decreases as the number of cells increases. Once cells were identified from the output of the network optimization approach, parts were assigned to those cells to form part families under a part family size restriction. Table 4 shows the upper and lower bounds on part family size used in the computer runs of the network methodology. As shown in this table, the total computing times for Plant A and Plant B were approximately 35 and 130 seconds, respectively. These figures support the claim made in our article concerning the computational efficiency of the network flow procedures.

5. CONCLUSIONS AND RECOMMENDATIONS
It is emphasized that the network procedures are simply decision-support methodologies. In each particular situation where these procedures are used, company management makes the final decision on the basis of factors such as technical and economic variables, changes in the existing layout, physical shop floor restrictions, available human and machine resources, etc. Additional computational results indicated that the proposed approach is appropriate for solving, in a microcomputer environment, large-scale industrial problems including up to several hundred machines and several thousand parts. The computer running time of about 200 seconds for cell and part family formation in an industrial problem with 150 machines and 5,500 operation sequences seems attractive. Based on this preliminary work we feel confident that our approach can be recommended for the study of large-scale applications. The machine grouping method addressed in this article assumes that each operation of a part is restricted to one machine and that there is one machine of each type. When there are several identical machines of the same type, the following simple procedure can be carried out before using the network approach: (a) From the operation sequences determine the total processing time Ti required from all machines of type i, i = 1, 2, ..., t. Let ni be the number of identical machines of type i. The time for each machine of type i can then be calculated as ti = Ti/ni. (b) Consider machine type i, and assume that all parts can be considered in a given order k = 1, 2, ..., K. Scan the parts in this order and accumulate the processing time of the operations processed by machines of type i. When the cumulative processing time reaches the value ti, or a value close to it, the first machine of type i is labeled i1. Similarly, when the cumulative value reaches approximately 2ti, the second machine of type i is labeled i2, and so on.
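Steps (a) and (b) of this splitting procedure might be sketched as follows; the machine type, part order, and processing times are hypothetical:

```python
# Sketch of the pre-processing step for duplicated machines: split the
# workload of a machine type across its identical copies by scanning parts
# in a fixed order and cutting at multiples of ti = Ti / ni.
def split_identical_machines(machine_type, part_times, n_copies):
    """part_times: list of (part id, processing time on this machine type),
    scanned in the given order. Returns {relabeled machine: [part ids]}."""
    total = sum(t for _, t in part_times)       # Ti
    per_machine = total / n_copies              # ti = Ti / ni
    groups = {f"{machine_type}-{j + 1}": [] for j in range(n_copies)}
    cumulative = 0.0
    for part, t in part_times:
        # Once cumulative time passes j * ti, assign to copy j + 1.
        j = min(int(cumulative / per_machine), n_copies - 1)
        groups[f"{machine_type}-{j + 1}"].append(part)
        cumulative += t
    return groups

# Hypothetical workload: 6 parts on machine type 'M7' with 2 identical copies.
groups = split_identical_machines('M7', [('p1', 4), ('p2', 3), ('p3', 5),
                                         ('p4', 2), ('p5', 4), ('p6', 6)], 2)
print(groups)
```

The relabeled machines (here 'M7-1' and 'M7-2') then enter the network approach as if they were distinct machine types.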
This procedure can be repeated for all types of machines. Further research is needed to investigate the use of identical machines. Additionally, the aggregation of multiple operations on the same machine needs to be studied for the case in which multiple functions are performed by the same machine.

In order to examine the computational efficiency of the network procedures, a set of 15 hypothetical problems was generated and then solved on an IBM PC/AT microcomputer. The number of machines in these problems ranged from 10 to 500. The number of parts was also varied, from 100 to 10,000. Six of these problems included from 10 to 30 machines, and the remaining problems included between 35 and 500 machines. The first group of problems was solved using LINDO for the p-median model and RELAXT for the network methodology for Case 1. The computer time required by the p-median (integer model) method ranged from 3.49 to 97.69 seconds, while the network approach required only from 0.88 to 1.64 seconds. Due to the size of the problems, the microcomputer memory capacity was exceeded for problems in the second group; for this reason, these problems were solved by the network methodology only. In this case, the computer time ranged from 1.92 to 55.36 seconds, with noticeable increases starting at about 200 machines. A similar analysis was conducted for Case 3. First, a set of 30 problems including 10 to 30 machines was solved both optimally (using LINDO) and by means of the network methodology (using RELAXT). The computer time requirements in seconds ranged from 2.50 to 117.32 for LINDO and from 1.97 to 12.36 for the network model. The solutions from the network model were within 5% of the optimal value. In order to demonstrate the applicability of the network approach to large-scale applications, 30 additional problems were generated, varying the number of machines from 100 to 1000. The computer time in this case varied from 24.67 to 351.25 seconds.

REFERENCES
Anderberg, M.R., 1973, Cluster Analysis for Applications, Academic Press, New York, N.Y.

Arthanari, T.S., and Dodge, Y., 1981, Mathematical Programming in Statistics, John Wiley, New York.

Barnes, E.R., 1982, "An algorithm for partitioning the nodes of a graph," SIAM Journal of Algebraic and Discrete Methods, 3, 541-550.

Bertsekas, D., and Tseng, P., 1988, "Relaxation methods for minimum cost ordinary and generalized network flow problems," Operations Research, 36, 93-114.

Bertsekas, D., and Tseng, P., 1990, "RELAXT-III: A new and improved version of the RELAXT code," Laboratory for Information and Decision Systems Report LIDS-P-1990, MIT, Cambridge, MA.

Christofides, N., and Beasley, J.E., 1982, "A tree search algorithm for the p-median problem," European Journal of Operational Research, 10, 196-204.

Jensen, R.E., 1969, "A dynamic programming algorithm for cluster analysis," Operations Research, 12, 1034-1057.

Kernighan, B.W., and Lin, S., 1970, "An efficient heuristic procedure for partitioning graphs," Bell System Technical Journal, 49, 291-307.

Khumawala, B.M., 1972, "An efficient branch and bound algorithm for the warehouse location problem," Management Science, 18, 718-731.

Kusiak, A., Vannelli, A., and Kumar, K.R., 1986, "Cluster analysis: models and algorithms," Control and Cybernetics, 139-154.

Kumar, K.R., Kusiak, A., and Vannelli, A., 1986, "Grouping of parts and components in flexible manufacturing systems," European Journal of Operational Research, 24, 387-397.

Lee, C.S., and Hwang, H., 1991, "A hierarchical divisive clustering method for machine-component grouping problems," Engineering Optimization, 17, 65-78.

Lee, H., and Garcia-Diaz, A., 1993, "A network flow approach to solve clustering problems in group technology," International Journal of Production Research, 31, 603-612.

Lee, H., and Garcia-Diaz, A., 1993, "Network flow partitioning procedures for the analysis of cellular manufacturing systems," Working Paper INEN-MS-WP, Department of Industrial Engineering, Texas A&M University, College Station, TX 77843.

Mulvey, J.M., and Crowder, H.P., 1979, "Cluster analysis: an application of Lagrangian relaxation," Management Science, 25, 329-340.

Rajagopalan, R., and Batra, L., 1975, "Design of cellular production systems: A graph-theoretic approach," International Journal of Production Research, 13, 567-579.

Ward, J.H., 1963, "Hierarchical grouping to optimize an objective function," Journal of the American Statistical Association, 58, 236-244.
Planning, Design, and Analysis of Cellular Manufacturing Systems
A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors)
© 1995 Elsevier Science B.V. All rights reserved.
Design Quality: The Untapped Potential of Group Technology
C.T. Mosier and F. Mahmoodi

Department of Management, Clarkson University, Box 5790, Potsdam, New York 13699-5790, USA

This paper expands the application domain of Group Technology oriented coding and retrieval systems to address the problem of design quality. The basis of this application is the notion of fully utilizing the information contained within the firm's various databases to enable design evaluation and control design retrieval in a manner consistent with design quality objectives. We discuss the operational form of such an application, as well as some of the benefits and technical obstacles.

1. INTRODUCTION
The opportunities offered by Group Technology (GT) for improving manufacturing competitiveness are just being explored by American industries. A recent paper (Gaither, Frazier, and Wei, 1990) discussed these potentials with respect to cellular manufacturing. For a more general overview of GT, see the review by Mosier and Taube (1985). Shunk (1985) provides a good working definition of GT:
"...a disciplined approach to identify things such as parts, processes, equipment, tools, people or customer needs by their attributes; analyzing those attributes looking for similarities between and among the things; grouping the things into families according to similarities; and finally increasing the efficiency and effectiveness of managing the things by taking advantage of the similarities." Currently, the primary impact of GT on domestic manufacturing firms has been essentially limited to two of its basic applications. The most publicized of these applications is the process of restructuring the shop floor to a cellular layout. Identification of part families and machine cells dedicated to their manufacture has demonstrated great potential for productivity improvement (Allison and Vapor 1979, Droy 1984). The second application of GT, which is commonplace and often integral to cellular layout restructuring, is the classification and coding of parts. The most common usage of classification and coding systems is for the retrieval of designs (from the design database) to be used as the basis for new designs and for determining part families and cells. In this paper we consider an expansion of the domain of the retrieval-oriented application of these classification and coding systems. Design retrieval using classification and coding systems has been shown to result in significant productivity improvements in the engineering and design functions (Mosier and Taube, 1985). Using such systems, designers are able to initiate a new design task using a similar "old" design as a starting point. They are able to search on the basis of a "target" GT code specification, seeking designs which, to the degree feasible, match new design
requirements. In a computer aided design (CAD) environment this involves using the system to:
1. develop the "target" code number,
2. generate a list of similar designs existing in the CAD database,
3. preview the designs in the list, and
4. select the design which will require the minimum modification to produce a new design satisfying the new design requirements.

The increased efficiency induced by the GT code retrieval capability, which is directly associated with the creation of the design geometry, is only the tip of the iceberg. A large array of planning activities must be accomplished before any manufacturing takes place, including tasks associated with the iterative process of engineering design, such as:
• a myriad of engineering analyses
• design documentation
• test documentation
• prototyping

and tasks associated with planning for manufacture, such as:
• cost estimation
• process planning
• NC part programming
• vendor consideration and selection
• production and inventory planning.

To some degree, all of these activities can share in the productivity improvements induced by the design retrieval capability. Since all of these activities are required for the retrieved design, linking the appropriate databases can result in significant savings by simply eliminating the redundancy of effort associated with the new design. Standardization of design features has well established benefits (Ettlie and Reifeis, 1987). Furthermore, as evidenced by industry practice, the standardization of more complex features, parts, or assemblies leads to economic benefits. The GT code-based design retrieval capability induces a rather painless form of design standardization. If the part database is well populated, and if the retrieval system interface is user-friendly, design engineers are more inclined to use the system as frequently as possible.
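The four-step retrieval process can be sketched as follows. The GT codes, design names, and digit-agreement similarity measure below are illustrative assumptions, not any particular commercial coding system:

```python
# Hypothetical GT-code-based design retrieval: rank stored designs by
# digit-wise agreement with a "target" code for the new design task.
def code_similarity(code_a, code_b):
    """Number of matching digit positions between two GT codes."""
    return sum(a == b for a, b in zip(code_a, code_b))

# Invented design database: design id -> GT code.
design_db = {
    'bracket-017': '113204',
    'bracket-022': '113214',
    'shaft-003':   '421100',
}
target = '113205'  # code developed for the new design requirements

ranked = sorted(design_db, key=lambda d: code_similarity(design_db[d], target),
                reverse=True)
print(ranked[0])  # the closest existing design, the best starting point
```

The designer would then preview the top-ranked designs and select the one needing the least modification, as in steps 3 and 4 above.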
Therefore, the basis for new designs will deviate from old designs only where functionality, manufacturing costs, or the availability of new materials dictate that such deviations are warranted. Note that the retrieval approach to design standardization should have significantly different results than standardization dictated through heavy-handed managerial mandate, or through design "evaluation" software which engineers must run for each new design. We feel that design retrieval systems enable a passive, but effective, effort to standardize products. When using these retrieval systems design creativity is not stifled, but rather, it is channeled; design engineers are immediately informed of the concurrence or discordance of their design decisions with past practice. Part designs that deviate significantly in shape, form, and manufacturing requirements from the past practice of the firm are by their nature costly. Spurious creativity that induces such deviations should be discouraged. Alternatively, creativity based upon sound
engineering practice and current production or process economics should be encouraged. In other words, design creativity should not be eliminated, but rather should be rationally and consistently factored into the economics of the design "formula." Design standardization induced by the use of the retrieval system will reflect considerations associated directly with the infrastructure of the firm in question. If decisions concerning a particular facet of a part design are generally allowed by engineering standards, hidden costs (for example, costs associated with acquiring a particular processing capability in-house, costs of out-sourcing, etc.) of those decisions will become immediately obvious to the designer if that information is appropriately linked to the design database. To illustrate, consider the example of a manufacturing firm that produces a large number of "user maintenance components" (UMCs). Similar versions of one particular UMC have been produced by the firm for years (almost all of their product models use them). Over the years the efficient manufacture of these UMCs has been a challenge that has been successfully addressed, resulting in a great deal of profit. In recent years, to take advantage of their similarity, the firm designed and implemented a highly efficient, robotic assembly line to assemble them. Recently, a young, enterprising design engineer, working on a new product design team, decided that the new UMC to be used in his project should be fastened together using ultrasonic welding rather than the traditional assembly methodology (four screws). This design decision did not consider any of the implications concerning the use of the existing robotic assembly line; rather it considered only the intrinsic economics and functionality of the assembly methodology. Ultimately, the new UMC design required manual assembly, virtually destroying the past level of profitability.
In this case, the young engineer's motivation and engineering training were the best possible; he simply did not have all the information necessary for making a rational design decision concerning the appropriate fastening technology.

2. MANAGEMENT OF THE DESIGN FUNCTION AND THE DESIGN DATABASE

Engineering and design functions associated with new product development are organized in a variety of ways. In some firms, new products are developed by dedicated multi-disciplinary teams, including personnel with expertise in a diversity of scientific and engineering disciplines (Imai, Nonaka, Takeuchi, 1984). Alternatively, in other firms new products that require large-scale design efforts are rare (i.e., engineering and design tasks are parceled out to the engineering staff as the need arises). In either extreme, the evaluation of the performance of the engineering designer historically has involved two main criteria:
• timeliness (ti), i.e., meeting the design schedule, and
• functionality (fi), i.e., the ability of the design to accomplish the intended purpose.

The intensification of foreign competition has forced the consideration of a third criterion:
• estimated manufacturing costs (ci).

The rank order of importance of each of these criteria varies from industry to industry and from product to product, but it is reasonable to consider each a zero-one measure, where the success of the design project is determined by the product of all criteria. Without timely delivery of designs, product market opportunities are lost. If the design does not function as required, the product is a failure. If the manufacturing costs are too high, the product is not competitive. That is, if we rather simple-mindedly assess our criteria as:
    ti, fi, or ci = 1 (success) or 0 (failure),                  (1)

and let qi be our assessment of the design quality of part i; we have:

    qi = ti × fi × ci.                                           (2)
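The zero-one assessment of equations (1) and (2) amounts to a one-line product:

```python
# Design quality per equation (2): a design "succeeds" only if it is on
# schedule, functional, AND cost-competitive; any single failure yields 0.
def design_quality(timely, functional, cost_ok):
    t, f, c = int(timely), int(functional), int(cost_ok)
    return t * f * c  # qi = ti * fi * ci

print(design_quality(True, True, False))  # 0: one failed criterion sinks the design
```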
Of these three criteria, only the schedule attainment criterion is immediately measurable (i.e., it is immediately obvious when design project schedules are not met). Accurate evaluation of the operational functionality of the design is often difficult to accomplish until the product hits the market. More often, the functionality evaluation of a design is limited to an assessment by the head engineer, the engineering manager, or committees made up of a mixture of engineering managers and staff. Only recently has quality engineering, which involves extensive experimentation on prototypes at the design stage, been advocated (e.g., Kackar 1985, Taguchi 1988). Quality engineering (also referred to as off-line quality control) assures the operational functionality of the design as well as the robustness of the design to various noise factors. Ultimately, the functionality of every part designed and manufactured is determined by the marketplace. However, the time span from product design to the final delivery of the product makes meaningful feedback to the responsible design engineer or design group infeasible. The cost of manufacturing criterion is by far the most difficult to assess. In fact, the blanket application of this criterion may potentially be detrimental to the long-term competitive posture of the firm, often motivating design engineers, design project managers, and production engineers to plan for production offshore. One wonders if this was the original strategy of the designers of video cassette recorders (VCRs). Two main factors impact the cost of manufacturing criterion: (a) the cost of materials, and (b) "other" direct and indirect costs associated with production, which we will refer to as transformation costs. In most firms, material costs are well controlled within the context of the engineering design process, and are similarly straightforward to evaluate. But the transformation costs are often very rough estimates at best.
We propose the expansion of GT-oriented classification and coding systems to better manage the design function through analysis and management of the engineering design and manufacturing database(s). Here we note our deviation from what we consider to be an anomaly in the traditional verbiage associated with the management of databases. Traditionally, the term management in the database context refers to the management of the information system environment in which the database resides, and to managing the interface between the user and the database system, i.e., managing the hardware configuration, managing the software, and the myriad of details associated with the user interface. Rarely, if ever, does the term refer to interacting directly with the contents of the database, as we propose in this paper.

3. MANAGING DESIGN QUALITY THROUGH MANAGEMENT OF THE ENGINEERING DESIGN DATABASE
We propose the extension of the GT coding and retrieval logic to two facets of the management of the engineering design function. First, we propose actively managing the engineering databases to control the population of designs that may be retrieved by design engineers, thus inducing the standardization that naturally occurs with the use of retrieval
systems. If a part has historically proven to be less than satisfactory, in terms of either functionality or manufacturability, it should not be retrievable by the design engineer. Furthermore, if new product or process technologies make a particular design obsolete, it should not be retrievable by the design engineer. Alternatively, if new process technologies enable the economic production of a previously uneconomical design (in terms of production costs), it should be placed back in the design pool for retrieval. The idea here is to actively examine the designs in the engineering database with reference to their historical performance records associated with functionality and costs of manufacturing. Clearly, the databases that contain the required performance data must be appropriately linked to the engineering database (or whichever database in which our retrieval system resides). The designs within the engineering database that are not desirable in terms of functionality or costs of manufacturing should be made inaccessible to the design retrieval system. Note that the evaluative logic of this sort applied to engineering design databases is really only a formalization and extension of current practice. For example, Toyota recently issued a recall to selected Land Cruiser owners to have their fuel tanks replaced. This is because of a design error which results in the eventual danger of leakage. Most assuredly, in the future, this design of the Land Cruiser fuel tank will not be repeated. This form of design quality control has been repeated countless times in the life span of a vast and diverse variety of products, significantly reflecting on their design "quality" (e.g., deteriorating motor mounts on full-sized Chevrolets made in the mid-1970s, excessive rust in the front fenders of Vegas, toxic shock syndrome, the gas tank design of the Ford Pintos, etc.).
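A minimal sketch of this pool-management policy, with invented record fields standing in for the linked performance databases:

```python
# Hide, rather than delete, designs with poor functionality or
# manufacturability history, or designs obsoleted by new technology; only
# the remaining pool is visible to the design-retrieval system.
designs = [
    {'id': 'D-101', 'functional_ok': True,  'cost_ok': True,  'obsolete': False},
    {'id': 'D-102', 'functional_ok': False, 'cost_ok': True,  'obsolete': False},
    {'id': 'D-103', 'functional_ok': True,  'cost_ok': True,  'obsolete': True},
]

def retrievable(d):
    return d['functional_ok'] and d['cost_ok'] and not d['obsolete']

pool = [d['id'] for d in designs if retrievable(d)]
print(pool)  # only D-101 remains visible to the retrieval system
```

Re-running the filter when process technologies change would restore previously hidden designs to the pool, as described above.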
The database management approach is clearly less reactive than the current practice, as well as being much more sensitive to functionality measures and manufacturing cost measures. Thus, for the analysis and management of the engineering and manufacturing database(s) to be accurate and useful, in the context of improving design quality, performance statistics on all parts, assemblies, or end items must be gathered and linked to a central database location.

4. MANAGING MANUFACTURING COSTS THROUGH REFERENCE TO THE ENGINEERING DESIGN DATABASE

The second proposed extension of the process of actively managing the design databases is the evaluation of new designs according to manufacturing costs. If design engineering management can determine the frequency distribution of each of these criteria for the family of part designs that the new design fits into best, such as that illustrated in Figure 1, objective judgments can be made concerning the true "quality" of the design. The family of parts whose frequency distribution is illustrated in Figure 1 has 24 designs, with recorded unit manufacturing costs ranging from $1 to $7. In this family of designs we would "expect" a unit manufacturing cost of approximately $5. As is the practice in the corporate accounting function, manufacturing cost estimates that differ significantly (taking into account the variation of the family cost distribution) from this expected value should warrant consultation. An engineering designer submitting a design with a unit manufacturing cost estimate in the "C" region in Figure 1 should be required to justify the need for the materials, design features, or other specifications that are driving the costs of the design up. In fact, it may be that the cost estimation procedures need examination. A new design with an estimated unit manufacturing cost in the "A" region should be examined as well, since it is at variance with the norm.
Given that the cost estimation procedure is correct and has been correctly applied, and the required functionality has been attained, the new design should be scrutinized
to determine the differences which make it less expensive to manufacture. It may be that this design reflects significant innovation which should become the norm for future designs of parts in this family, and hence warrant some judicious editing of the design database. A cost estimate in the region denoted "B" would indicate that the manufacturing costs are about average, with less need for detailed scrutiny. Later, actual manufacturing costs of "new" designs should be entered into the database. This will provide a true evaluation of the cost estimation procedure that is currently in place, pointing out the direction for making any necessary changes.

[Figure: histogram of frequency versus unit manufacturing cost, $1 to $7]

Figure 1. Unit manufacturing cost distribution of a part family.
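The screening rule suggested by Figure 1 might be sketched as follows; the one-standard-deviation thresholds and the sample cost data are assumptions, not the chapter's:

```python
import statistics

# Classify a new design's unit-cost estimate against its family's cost
# distribution: region A (unusually cheap), B (about average), C (unusually
# costly). Family costs below are illustrative, not the 24 designs in the text.
family_costs = [1, 3, 4, 4, 5, 5, 5, 5, 6, 6, 7]

def cost_region(estimate, costs, k=1.0):
    mean = statistics.mean(costs)
    sd = statistics.stdev(costs)       # sample standard deviation
    if estimate < mean - k * sd:
        return 'A'  # scrutinize for transferable innovation
    if estimate > mean + k * sd:
        return 'C'  # designer must justify the cost drivers
    return 'B'      # about average; little detailed scrutiny needed

print(cost_region(7, family_costs))
```

Designs landing in regions A or C would trigger the consultation and scrutiny steps described above, while actual costs recorded later refine the family distribution itself.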
The basic presumption for implementing this form of analysis is that a highly accurate approach for determining the set of designs that make up the design "families" is being used. In a limited sense, this is already done. The true value of human engineering design experience is to provide a basis for suggesting alternatives to design problems (similar to the retrieval system described previously) and to recognize when proposed solutions deviate significantly from the expected levels of the performance criterion (similar to the evaluation mechanism described above). Human intelligence has been proven insightful in the design process. Although humans are extremely good at recognizing patterns and determining the overall impact of design decisions, they have very limited memory capacity. On the other hand, computers have prodigious memories, but are still very weak in recognizing patterns and analytically synthesizing the effect of design decisions. The determination of manageable families of entities within any large population requires both capabilities. Since there is little hope in expanding the human memory capability, we must teach our computers to search for and recognize patterns in our design databases. A broad variety of what are essentially pattern-recognition techniques have been developed to algorithmically determine families of designs using the computer (e.g., Chang and Fu, 1984). Many of these algorithms are very powerful, and reflect a great deal of creativity on the part of the developers, but this line of research is far from complete. The ultimate form of the process of computerized "family" determination might be similar to that illustrated in the "Wolf in the Fold" (Bloch, 1967) episode of the Star Trek series, where Scotty stands accused of murder.
During the "trial," Captain Kirk begins to suspect that forces outside the normal perception domain are responsible for the crime, and he eventually queries (verbally) the main computer of the USS Enterprise for an analysis of historical patterns of a particularly virulent form of violence, i.e.,
Captain Kirk:
"Computer...criminological files...cases of unsolved mass murders of women since Jack the Ripper."
COMPUTER:
WORKING...
1932.. SHANGHAI.. CHINA.. EARTH.. 7 WOMEN KNIFED TO DEATH
1974.. KIEV.. USSR.. EARTH.. 5 WOMEN KNIFED TO DEATH
2105.. MARTIAN COLONY.. 8 WOMEN KNIFED TO DEATH
2156.. ELIOPOLIS.. ALPHA ARIDONTIS.. 10 WOMEN KNIFED TO DEATH
THERE ARE ADDITIONAL EXAMPLES...
In this context, Kirk is querying to determine the events that make up the "family" of events roughly described by "cases of unsolved mass murders of women." The extension of this logic to the design evaluation context is to present the computer with the design (i.e., via electronic media - the CAD design), resulting in an automatic judgment of its design "family" after it is placed in the database. In the Star Trek story, given the information gathered from the computer, Kirk and Spock "logically" deduce the true nature of the villain of this particular drama and, after a number of dramatic near-misses and harrowing escapes, finally take rather creative measures to dispatch the perpetrator (via the "transporter"... "widest possible dispersion").
In the case of design evaluation, our "villain" is much more insidious, and evaluation software might follow this sequence:
1. a candidate design would be electronically "viewed" by the analysis program,
2. a series of estimates of the functionality and costs of the design would be generated,
3. a family of similar designs would be identified, and
4. an evaluation of the design would be generated, with a specific set of candidate features, facets, or characteristics which may be the cause of the deviation from the expected level of each criterion.
Clearly, we are years away from the Star Trek level of capabilities, but we are at a state where:
- GT classification and coding can be utilized to direct managers of the design function to similar "old" designs to be used for the evaluation of "new" designs
- GT classification and coding can be utilized to direct design engineers toward "old" designs which have been successful, to be used as the basis of new designs
- the current algorithms can be applied to explore the usefulness of their application.
The least well developed step of this process is that of determining design families. Contrary to recent discussions (Meredith, 1990), there is a dire need for primary research into
computerized methods for determining and assessing family structures within design (and other) populations.

5. CONCLUSIONS AND EXTENSIONS
In this paper we have proposed expanding the application domain of GT-oriented coding and retrieval systems beyond simple design retrieval. We propose the use of these systems to manage the engineering design process, including active management of the population of engineering designs accessible by the retrieval system. Furthermore, we suggest using the information within the engineering design database to make rational and objective evaluations of the costs associated with a new design. The major operational changes required to implement the proposed expansion are to (a) link (or integrate) the databases that contain the information required to manage the design database, and (b) analyze all of the designs within the design database according to the various cost criteria. Very likely the most significant change required is the organizational willingness to adopt such a management practice. The major technical roadblock to the design evaluation process is the lack of a proven set of approaches for quickly determining a reasonable and useful set of design "families" within the engineering design database. Any form of pattern recognition will require significant research effort before the necessary level of discrimination and analysis capability can be developed.

REFERENCES
1. Allison, J.W. and J.C. Vapor (1979), "GT Approach Proves Out," American Machinist, 123, 197-200.
2. Bloch, R. (Writer) (1967), STAR TREK Episode: "Wolf in the Fold," J. Pevney (Director), G.L. Coon (Producer), and G. Roddenberry (Creator), Paramount Pictures.
3. Chang, Y.T. and K.S. Fu (1984), "Parallel Parsing Algorithms and VLSI Implementation for Syntactic Pattern Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-6, 302-313.
4. Droy, J.B. (1984), "It's Time to 'CELL' Your Factory," Production Engineering, 34, 50-52.
5. Ettlie, J.E. and S.A. Reifeis (1987), "Integrated Design and Manufacturing to Deploy Advanced Manufacturing Technology," Interfaces, 17, 63-74.
6. Gaither, N., G.V. Frazier, and J.C. Wei (1990), "From Job Shops to Manufacturing Cells," Production and Inventory Management, 31, 33-37.
7. Imai, K., I. Nonaka, and H. Takeuchi (1984), "Managing the New Product Development Process: How Japanese Companies Learn and Unlearn," Harvard Business School 75th Anniversary Colloquium on Productivity and Technology.
8. Kackar, R.N. (1985), "Off-line Quality Control, Parameter Design, and the Taguchi Method," Journal of Quality Technology, 17, 176-188.
9. Meredith, J.R. (Chair) (1990), Workshop: "The Realities of Cellular Manufacturing: Implications for Research," National DSI Meeting, San Diego.
10. Mosier, C.T. and L. Taube (1985), "The Facets of Group Technology and Their Impacts on Implementation: A State-of-the-Art Survey," OMEGA, 13, 381-391.
11. Shunk, D. (1985), "Group Technology Provides Organized Approaches to Realizing Benefits of CIMS," Industrial Engineering, 17, 74-81.
12. Taguchi, G. (1988), "The Development of Quality Engineering," The ASI Journal, 1, Dearborn, MI: American Supplier Institute.
Planning, Design, and Analysis of Cellular Manufacturing Systems
A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors)
© 1995 Elsevier Science B.V. All rights reserved.
Partitioning Techniques for Cellular Manufacturing
Soha Eid Moussa and Mohamed S. Kamel*
Pattern Analysis and Machine Intelligence Lab, Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1

This chapter addresses the partitioning problem in cellular manufacturing. Various traditional partitioning techniques, such as graph-based, matrix, and integer programming techniques, are reviewed. Similarity measures used in forming part families are discussed. More recent techniques, which, unlike the traditional ones, attempt to solve the problem at the machine assignment level, are introduced. The effectiveness of these techniques is demonstrated by some sample results.
1. INTRODUCTION
Group Technology (GT) is studied in an attempt to help modern manufacturing systems in the industrial environment achieve improved efficiency. It is an organizational technique which seeks to improve manufacturing productivity by finding part families and machine cells that form self-sufficient units of production with a certain amount of functional autonomy, resulting in easier control. It seeks the efficiency associated with flow-line production while maintaining the flexibility of a job-shop manufacturing system. The features associated with group technology are very important to companies wishing to remain competitive in today's manufacturing environment, where more specialized orders are demanded, production life-cycles are reduced, and competition is focused on factors such as delivery speed, quality, design flexibility and delivery reliability as well as price. Companies must now manufacture products having a larger product mix, smaller volumes, increased part complexity and shorter production periods. Group technology has become an integral part of computer integrated manufacturing systems (CIMS) in most industries. It is also rapidly being recognized as a stepping stone to flexible manufacturing systems (FMS). In an ideal group technology environment, each component (part, end item or subassembly) is processed entirely within the specific machine cell to which it is assigned. By achieving this goal, manufacturing costs may be reduced as a result of savings in setup time, labour, tooling, rework/scrap, machine tool expenditures and
*. This research was partially supported by research grants from the Natural Sciences and Engineering Research Council of Canada and from Manufacturing Research Corporation of Ontario, Canada (a Government of Ontario Center of Excellence).
work-in-process. Savings ranging from 15% to 75% can be achieved in these areas (Askin and Chiu, 1990). Some of the other benefits obtained through the use of a group technology environment are more difficult to quantify. Examples include lower product throughput time, an increase in on-time deliveries, better product quality, higher management efficiency and improved responsiveness to customers. Savings in these areas result from the use of standardized setups, reduced material handling, and simplified scheduling and control. One of the main benefits achieved from GT applications is part family formation for efficient work flow. Efficient work flow can result from grouping machines logically so that material handling and setup are minimized, and from grouping parts so that the amount of handling between machining operations is also minimized. Cellular Manufacturing (CM) is the application of group technology principles to production. In cellular manufacturing, a company's manufacturing system is organized into cells, with each cell including a number of dissimilar machines processing a family of parts. Exceptional parts as well as bottleneck machines are the sources of intercellular moves. For the maximum benefits of a cellular manufacturing system to be obtained, intercellular moves must be reduced to the minimum. One of cellular manufacturing's main problems is the determination of part families and machine cells. Much research has been devoted to the cell formation problem (Vakharia, 1986; Wemmerlov and Hyer, 1986). So far, all the models which have been developed have been based either on part attributes or on machine routings. The models based on part attributes use part characteristics such as geometry, material chemistry, tolerance, etc. to establish part families (Hyer and Wemmerlov, 1985; Kusiak, 1985; Kumar et al., 1987).
Process plans are generated for each of the parts in the part families, and machines are assigned to form cells. Traditional solution methodologies for the group technology problem require knowledge of the part-machine assignment; this requires solving the assignment problem. The objective of the assignment problem is to determine an optimal assignment of parts to machines which minimizes cost. The most common constraint considered is that each machine can process only one part at a time. The assignment problem needs solving because there are often several alternate machines which are all capable of processing a particular part; this is known as the existence of alternate process plans.
2. TRADITIONAL SOLUTION METHODOLOGIES
Clustering and partitioning techniques are commonly used to create group technology cells. In this section, the different methods used by researchers to solve the group technology problem are discussed. First, graph-based techniques are introduced, followed by matrix techniques and integer programming techniques. The section ends with a discussion of the types of constraints considered in solving the group technology problem.
2.1. Graph-Based Techniques
In graph representations, the machines are denoted by nodes, the connections between machines are denoted by arcs (edges), and the weights are the number of parts moving from one machine to another. The objective is to decompose the graph into sub-graphs such that there are minimal interconnections between sub-graphs and maximal connections within sub-graphs. The sub-graphs are created by deleting the edges having the smallest weights (number of parts flowing in a single direction). Kernighan and Lin (1970) studied the problem of partitioning the nodes of a graph with costs on its edges into subsets of given sizes so as to minimize the sum of the costs of all edges cut. They presented a heuristic method for partitioning arbitrary graphs which is effective in finding optimal partitions as well as being fast enough to be practical for large problems. Fiduccia and Mattheyses (1982) introduced an iterative minimum-cut heuristic for partitioning networks. Its worst-case computation time, per pass, increases linearly with the size of the network. Real-life applications require only a very small number of passes; as a result, a fast approximation algorithm for minimum-cut partitioning is obtained. Starting from an initial cell configuration, the algorithm progresses by moving one machine at a time between the cells of the partition, while maintaining a desired balance based on the dimensions of the cells rather than the number of machines per cell, in order to deal with machines of various sizes. In addition, efficient data structures are used to avoid unnecessary searching for the best machine to move and to minimize unnecessary updating of machines affected by each move.
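The basic edge-deletion decomposition described at the start of this subsection can be sketched as follows. This is a minimal illustration, not any of the published algorithms cited above; the machine names, part flows, and threshold are invented:

```python
# Sketch: drop inter-machine edges whose part-flow weight falls below a
# threshold, then read off connected components as candidate machine cells.
from collections import defaultdict

def decompose(edges, threshold):
    """edges: {(machine_a, machine_b): number of parts flowing between them}."""
    adj = defaultdict(set)
    nodes = set()
    for (a, b), w in edges.items():
        nodes.update((a, b))
        if w >= threshold:          # keep only heavily used connections
            adj[a].add(b)
            adj[b].add(a)
    cells, seen = [], set()
    for n in sorted(nodes):         # depth-first search per component
        if n in seen:
            continue
        stack, cell = [n], set()
        while stack:
            m = stack.pop()
            if m not in cell:
                cell.add(m)
                stack.extend(adj[m] - cell)
        seen |= cell
        cells.append(sorted(cell))
    return cells

flows = {("M1", "M2"): 12, ("M2", "M3"): 1, ("M3", "M4"): 9}
print(decompose(flows, threshold=5))   # [['M1', 'M2'], ['M3', 'M4']]
```

Choosing the threshold trades off cell independence against cell size; the heuristics cited above replace this crude cut with iterative improvement.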
Vannelli and Hadley (1990) discussed a graph-theoretic method with several important features: it approximates a netlist by an undirected graph with weighted edges, and the resulting undirected graph is used to find a cut tree from which the minimum cut separating any pair of modules can be determined. The method creates a hypergraph representation of the part-machine interconnections, which is approximated by a weighted undirected graph. Vannelli and Hadley (1990) tested the weighting by solving several linear programming models and determined the same optimal edge-weighting. The method's most important design feature is that it allows the designer to consider a variety of cells by looking at the cut tree connecting the modules, thereby permitting an estimate of how far the net cut is from optimality. In addition, cell size and balance can also be analyzed using this approach. Hadley, Mark and Vannelli (1992) discussed an eigenvector approach to find initial machine partitions for use in interchange heuristics. The objective is to minimize the number of machines cut by the partition. In order to apply the known partitioning results of Barnes (1982), which are based on eigenvectors, this heuristic relies on a method of approximating a hypergraph by a graph with weighted edges. The graph approximation of the hypergraph is easily obtained, and the most expensive part of the procedure is finding and sorting the eigenvalues and eigenvectors of the adjacency matrix of the approximating graph; as a result, the heuristic is very efficient.
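The eigenvector idea can be illustrated in its simplest form as a sign-split on the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the graph Laplacian). This is only a schematic stand-in for the hypergraph-approximation heuristics discussed above, assumes NumPy is available, and uses an invented weight matrix:

```python
# Sketch: bipartition machines by the sign of the Fiedler vector of the
# Laplacian of a weighted machine-adjacency matrix (data are made up).
import numpy as np

def fiedler_bipartition(W):
    """W: symmetric weighted adjacency matrix of the machine graph."""
    L = np.diag(W.sum(axis=1)) - W       # graph Laplacian
    vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    fiedler = vecs[:, 1]                 # second-smallest eigenvector
    return ([i for i in range(len(W)) if fiedler[i] < 0],
            [i for i in range(len(W)) if fiedler[i] >= 0])

# two clearly separated machine groups {0, 1} and {2, 3}, weakly linked
W = np.array([[0, 5, 0, 0],
              [5, 0, 1, 0],
              [0, 1, 0, 5],
              [0, 0, 5, 0]], dtype=float)
left, right = fiedler_bipartition(W)
print(sorted(left), sorted(right))
```

The split falls at the weak edge; in the cited work such a partition serves only as the starting point for an interchange heuristic.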
These are heuristic techniques which produce a good approximation of the optimal solution. The approximation is obtained by determining upper and lower bounds on the solution using interchange methods, or by using linear programming (LP) models to construct bounds. It should be noted that, in general, hypergraph partitioning problems are NP-hard, even in the simple case where the hypergraph is a graph. Also, these graph-based techniques look solely at partitioning either the parts into part families or the machines into machine cells. The next category, matrix techniques, looks at the part-machine incidence matrix and simultaneously divides the machines into machine cells and the parts into part families.
2.2. Matrix Techniques
One of the relevant applications of the matrix representation is in the area of group technology. In this application, rows represent the machines and columns represent the parts produced. The objective is to permute the rows and columns so as to obtain a block diagonal representation of the original matrix, with each block representing a cluster. Burbidge (1971) introduced the Production Flow Analysis concept, which is applied at three levels: factory flow analysis, group analysis and line analysis. The machine-part incidence matrix is generated during factory flow analysis, with an attempt to identify machine cells made at the group analysis phase. These machine cells are identified by rearranging the columns and rows of the incidence matrix. They are the clusters which are used during the line analysis phase to analyze the flow pattern on the shop floor, determine machine layout and identify bottleneck machines. McCormick, Schweitzer and White (1972) developed the Bond Energy Algorithm (BEA), an interchange clustering algorithm. The main objective of this technique is to achieve a block-diagonal form by maximizing the following measure of effectiveness:

ME = (1/2) \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} [ a_{i,j-1} + a_{i,j+1} + a_{i-1,j} + a_{i+1,j} ]

where a_{ij} = 1 if machine i processes part j and 0 otherwise, and entries outside the matrix are taken as zero.
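The measure of effectiveness can be transcribed directly; a block-diagonal matrix scores higher than a scattered one with the same entries. The matrices below are illustrative:

```python
# Direct transcription of the BEA measure of effectiveness, with zero
# padding outside the matrix. The example matrices are illustrative.
def bond_energy(a):
    m, n = len(a), len(a[0])
    get = lambda i, j: a[i][j] if 0 <= i < m and 0 <= j < n else 0
    return 0.5 * sum(
        a[i][j] * (get(i, j - 1) + get(i, j + 1) + get(i - 1, j) + get(i + 1, j))
        for i in range(m) for j in range(n))

block_diagonal = [[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]]
scattered = [[1, 0, 1, 0],   # same number of 1's, no adjacency
             [0, 1, 0, 1],
             [1, 0, 1, 0],
             [0, 1, 0, 1]]
print(bond_energy(block_diagonal), bond_energy(scattered))   # 8.0 0.0
```

BEA itself searches over row and column permutations for the arrangement that maximizes this score.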
The BEA has not been successfully applied in the machine and part grouping problem because visual identification in a solution matrix of natural machine groups and part families is very difficult and may be impossible for large problems (Boe and Cheng, 1991). King (1980) described a rank order clustering algorithm which is relevant to the problem of machine-component group formation. He also discusses a relaxation and regrouping procedure which permits the basic rank order clustering method to be extended to the case where there are bottleneck machines. The rank order clustering algorithm begins with the original machine-component matrix (also known as the incidence matrix) and then proceeds to rearrange rows and columns in an
iterative manner that ultimately produces a matrix in which both columns and rows are arranged in order of decreasing value when read as binary words. King and Nakomchai (1982) improved on King's (1980) rank order clustering algorithm by sorting several rows or columns at the same time instead of element by element. This reduces the overall complexity from O(mn(m+n)) to O(mn log(mn)), where m is the number of rows and n is the number of columns. Chan and Milner (1982) introduced a four-step method which sorts each row in increasing order of the total number of 1's and then sorts each column in decreasing order of the total number of 1's. These rows and columns are rearranged until satisfactory clusters are obtained. Kusiak and Chow (1987) developed the Cluster Identification Algorithm, which uses a binary machine-part incidence matrix to verify the existence of mutually separable clusters. Their algorithm has the disadvantage of only being able to identify perfect clusters, if they exist in a matrix. Its advantage is a relatively low computational time complexity of O(2mn), where m is the number of rows and n is the number of columns. Wei and Kern (1989) presented a linear clustering algorithm based on the calculation of a commonality score which indicates the similarity in the way two machines are used in the shop to manufacture the products or parts. This algorithm generates consistent machine groupings regardless of the initial order of the input data. Boe and Cheng (1991) discussed a "close neighbour algorithm" which consists of two stages. The first stage clusters the rows (which represent the machines), resulting in an intermediate matrix. In stage two, the columns (parts) of the matrix are arranged by linking the machines in the intermediate matrix using their "closeness" measures.
Except for the machine in the first row, each machine is the closest machine to the one above it. Therefore, this stage rearranges the parts (columns) of the matrix while maintaining the desirable arrangement of machines achieved in the first stage. Chow and Hawaleshka (1992) proposed an algorithm to solve the machine chaining problem in cellular manufacturing. This problem arises when machines are improperly assigned to machine cells in a cellular manufacturing environment, causing high intercellular movement of parts. The algorithm transforms the incidence matrix into an (m x m) matrix by means of the commonality scores method of Wei and Kern (1989). It then groups the first two machines Mi and Mj that contribute to the highest commonality score. These machines are treated as a new machine unit which replaces machines Mi and Mj in the incidence matrix. This procedure is repeated until all machines are grouped. Vannelli and Hall (1993) discussed the problem of finding part-machine families for cellular manufacturing through the use of efficient machine duplication and part subcontracting strategies. In order to find part-machine families, they develop an eigenvector approach which allows control over the number of cells, cell size and performance criteria by generating a variety of cellular manufacturing designs. An advantage of this technique is that it permits an estimate of how far the generated part-machine families are from the best or optimal part families or machine cells under machine duplication and part subcontracting strategies.
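King's (1980) rank order clustering procedure, described earlier in this subsection, can be sketched as follows. This is a simplified version that reorders whole rows and columns by their binary words until the matrix is stable; the incidence matrix is invented:

```python
# Sketch of rank order clustering: repeatedly reorder rows, then columns,
# by decreasing value of their binary words until no order changes.
def rank_order_clustering(a):
    rows = list(range(len(a)))       # current row order
    cols = list(range(len(a[0])))    # current column order
    while True:
        # comparing bit lists lexicographically == comparing binary words
        new_rows = sorted(rows, key=lambda r: [a[r][c] for c in cols], reverse=True)
        new_cols = sorted(cols, key=lambda c: [a[r][c] for r in new_rows], reverse=True)
        if new_rows == rows and new_cols == cols:
            return [[a[r][c] for c in cols] for r in rows]
        rows, cols = new_rows, new_cols

incidence = [[0, 1, 0, 1],   # machine x part incidence (illustrative)
             [1, 0, 1, 0],
             [0, 1, 0, 1],
             [1, 0, 1, 0]]
for row in rank_order_clustering(incidence):
    print(row)
```

On this input the procedure converges in two passes to a block-diagonal form, exposing two machine cells and two part families.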
Next we discuss integer programming techniques, which attempt to solve the group technology problem starting from the incidence matrix and optimizing an objective function subject to some constraints, with the restriction that all variables must take integer values.
2.3. Integer Programming Techniques
Kusiak, Vannelli and Kumar (1986) described three distinct integer programming formulations which are characterized by two constraints:
1. a fixed number of clusters,
2. a restriction on the number of elements within each cluster.
The first formulation does not incorporate either of these constraints; in other words, the algorithm is allowed to generate natural clusters. The second formulation restricts the number of clusters, and the third allows restrictions on both the number of clusters and the cluster size. These researchers showed that the first formulation is equivalent to two travelling salesman problems, which are solvable using efficient heuristic techniques. The second formulation is solved using a Lagrangian relaxation method. Finally, an eigenvector approach is used to solve the third formulation by approximating the original problem by a linear transportation problem. Kusiak (1987) presented an integer programming formulation of the group technology problem called the p-median model. This formulation seeks to maximize the sum of similarity coefficients for a fixed number of groups, with the constraint that a part be assigned to one family only. In this model, the number of groups is a parameter specified at the formulation stage, not an outcome of the solution procedure. The advantage of the p-median model is that it is a non-heuristic approach, since an integer programming problem is solved to attain the optimal solution. Srinivasen, Narendran and Mahadevan (1990) proposed an assignment model which makes use of a square matrix of similarity coefficients. The coefficient of similarity between two machines i and j is given by

s_{ij} = \sum_{k=1}^{n} d_k, for all i \neq j,

where d_k = 1 if a_{ik} = a_{jk} and d_k = 0 otherwise, a_{ik} = 1 if part k is assigned to machine i, and s_{jj} = 0 for all j. The matrix of similarity coefficients is the input to an assignment problem. As in the Travelling Salesman Problem, allocation to any diagonal element is to be avoided; since the objective is maximization, the diagonal elements are forced to zero. The method of Srinivasen, Narendran and Mahadevan (1990) is faster than the p-median method, especially when the problem size or the number of groups is large; in these cases the p-median model takes an enormous amount of time in comparison with the assignment model.
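The similarity coefficient above can be computed directly from the incidence matrix: s_ij counts the parts on which the rows (machines) i and j agree, with the diagonal forced to zero. The matrix here is illustrative:

```python
# Direct computation of the similarity coefficient matrix s_ij: the number
# of parts k on which machines i and j agree (a_ik == a_jk), s_jj = 0.
def similarity_matrix(a):
    m, n = len(a), len(a[0])
    s = [[0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            if i != j:   # diagonal entries are forced to zero
                s[i][j] = sum(1 for k in range(n) if a[i][k] == a[j][k])
    return s

incidence = [[1, 1, 0, 0],   # machine x part incidence (illustrative)
             [1, 0, 0, 0],
             [0, 0, 1, 1]]
for row in similarity_matrix(incidence):
    print(row)
```

The resulting symmetric matrix is the input to the assignment problem; note that agreement on a shared zero (neither machine processes part k) also counts toward s_ij under this definition.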
In the following we discuss the various constraints which researchers have included in their studies of the solution of group technology problems.
2.4. Constraints Considered in Group Technology
When studying the group technology problem, researchers have attempted to consider several constraints on the final solution. This section introduces some of these constraints. Askin and Chiu (1990) discussed a mathematical model and solution procedure for the group technology configuration problem. They incorporate several parameters into the mathematical programming formulation, including costs of inventory, machine depreciation, machine setup and material handling. The formulation is then divided into two subproblems: in the first, parts are assigned to specific machines based on similar operation sequences and the desire to minimize the number of groups, while in the second, machines are grouped into cells by minimizing total cost. Jain, Kasilingam and Bhole (1990) discussed the cell formation problem in flexible manufacturing systems under resource constraints. They develop a 0-1 integer programming model to form machine-part groups and to decide on the number of machines and the number of copies of tools needed to achieve minimum overall system cost. The model takes into account the processing time available on any machine, tool lives and the processing requirements of the parts. Logendran (1990, 1991, 1992) stated that intracell moves and workload imbalances on machines within a cell are two important factors in the investigation of a part family/machine cell formation problem. In addition to studying these points, he has also included the sequence of operations and the impact of the layout of cells in evaluating both inter- and intracell moves. Logendran then studies the duplication of bottleneck machines in the presence of budgetary limitations in cellular manufacturing. Logendran (1992) introduces a two-phase process for the duplication of bottleneck machines.
The objective of the first phase is to determine the savings in material handling costs achieved by duplicating the bottleneck machines associated with each of the bottleneck parts, the amortized cost of duplicating the machines, and the identity of those bottleneck machines, associated with bottleneck parts, that were originally assigned to the same cell, for the establishment of constraints during phase two. The second phase concentrates on using these cost estimates as coefficients in a binary integer programming model that determines which bottleneck machines must be duplicated in order to maximize the net savings in material handling costs subject to budgetary limitations. Sule (1991) discussed a procedure which determines the number of machines, their groupings and the amount of material transfer between the groups, so that all components can be processed within the plant with minimum total cost. Ghosh, Melnyk and Ragatz (1992) studied the impact of sequence dependency on tooling constraints and shop floor scheduling. Tooling has been found to be a major bottleneck in many manufacturing systems. The major focus of this study is sequence dependency, which is a result of commonality; this commonality exists in one of three areas: setups, processing, or inventory withdrawals. The authors use a four-way
analysis of variance to determine the commonality, and they use the SLAM II simulation model to perform their study. Heragu (1994) discussed several practical design constraints which must be considered during the solution of the group technology problem. Among these he lists machine capacity not being exceeded, safety and technological requirements being met, the number of machines in a cell and the number of cells not exceeding an upper bound, material handling costs being minimized, the machine utilization rate being as high as possible, costs of machine purchase and work-in-process inventory being minimized, and minimizing of operating costs. Among these constraints, the author indicates machine capacity as being the most important. Operation sequence is an important constraint needing consideration when creating machine cells, as it not only affects the creation of the cells themselves but also affects the machine capacity constraint.
3. SOLVING GROUP TECHNOLOGY AT THE MACHINE ASSIGNMENT LEVEL
To date, the majority of researchers have attempted to solve the group technology problem using a predetermined assignment of parts to machines. However, this approach does not necessarily yield the best possible group technology division, since several alternate assignments can exist for a given problem, with one assignment resulting in a better cellular division of part-machine combinations than another. This is because all information concerning the possible existence of alternate process plans, as well as all information pertaining to operation sequences, set-up times, operation types, manufacturing processing times and loads, has been lost. Furthermore, the assumption that there is only one process plan may cause an increase in capital cost and under-utilization, both major disadvantages in cellular manufacturing. Solving the group technology problem at the machine assignment level has therefore been considered. Previous research has concentrated on solving this problem using two different types of methods: indirect methods, which require consideration of alternate process plans, and direct methods, which solve the group technology and assignment problems directly and simultaneously.
3.1. Indirect Methods
Several assignments are possible, since for each operation required by a part, more than one machine may be capable of performing it. The choice of the machine(s) to be used to produce a part is therefore dependent on the operations required by the part. In addition, since several machines may have overlapping capabilities, one machine assignment may be better for the creation of independent cells than another. Kusiak (1987) discussed a generalized group technology concept which is based on the generation of a number of different process plans for one part. However, his method does not take into consideration the constraint of limited machine capacities. Rajamani, Singh and Aneja (1990) modelled and analyzed the influence of alternative process plans on resource utilization when part families and machine groups are formed. This
was done by developing three integer programming models: the first assigns machines to parts, the second assigns machines to known part families to form cells, and the third identifies part families and machine groups simultaneously. All the models take into consideration demand, time and resource constraints, with the objective of minimizing capital investment. The major disadvantage of the methods which consider alternate process plans is that they are highly impractical for large-scale manufacturing environments, since it would be necessary to list each and every possible process plan, a very cumbersome task. As a result, a need for a direct method of incorporating group technology at the machine assignment level has been identified.

3.2. Direct Methods
Kamel and Liu (1992) initiated the study of solving the group technology problem at the machine assignment level. They use a part-machine assignment algorithm developed by Ghenniwa (1990) to compare the results of applying group technology criteria within the part-machine assignment with those of applying group technology methods after the part-machine assignment has been completed. The algorithm developed by Ghenniwa (1990), POMJ (Partially-Overlapped systems with Multiple operations Jobs), finds an optimal or approximate solution satisfying the following goodness conditions:
1. Independency Condition: minimizing the number of machines assigned to each part
2. Load Balance Condition: minimizing the time-load difference between the machines
Although POMJ achieves an optimal or very good approximate solution for these two conditions, it does not guarantee an assignment which is separable for the purpose of group technology. Kamel and Liu (1992) therefore introduced the following group technology criteria into the part-machine assignment algorithm:
1. Bottleneck Condition: assign parts to the lowest machine level whenever possible to avoid overlapping.
2. Commonality Condition: assign parts to the most common machine combination at the same machine level, with minimum overlapping machines between machine cells.
For more details on the algorithm and results, see Liu (1992). Kamel, Ghenniwa and Liu (1994) refined the approach further by providing better grouping criteria. They introduced the following definitions:
Definition 1:
A multi-machine system is a system consisting of a set of m machines denoted by M = {Mi = (Ci), 1 ≤ i ≤ m}, where the capability set Ci of machine Mi is the set of operations that the machine can perform.
Definition 2:
A partially-overlapped system is a multi-machine system in which some or all of the machines' capability sets partially overlap.
Definition 3:
A set of n parts is denoted by P = {Pj | Pj = (Oj), 1 ≤ j ≤ n}, where the operations set Oj of Pj is the set of operations that are required by the part.
Definition 4:

A machine-combination (or a combination) is a subset of machines denoted by Mc = {Ml1, ..., Mls}, 1 ≤ s ≤ m, lr ∈ {1, ..., m}, r = 1, ..., s, Mlr ∈ M, with a capability set given by Cc = ∪i∈{l1,...,ls} Ci, such that for some j ∈ {1, ..., n} and Pj ∈ P, Oj ⊆ Cc. The cardinality of Mc is called the combination-level.

Definition 5:

A machine cell (or a cell) is a member of a set of k, 1 ≤ k ≤ m, machine-combinations, denoted by Mcκ, 1 ≤ κ ≤ k, such that

    ∩κ=1..k Mcκ = ∅   and   ∪κ=1..k Mcκ = M,

for which P can be partitioned into k subsets (part-families) denoted by PFκ = {Pη1, ..., Pηy}, 1 ≤ y ≤ n, ηx ∈ {1, ..., n}, x = 1, ..., y, Pηx ∈ P, with a set of operations given by OFκ = ∪j∈{η1,...,ηy} Oj, such that OFκ ⊆ Ccκ for all κ.
Kamel, Ghenniwa and Liu (1994) modified Liu's (1992) criteria to become:

A. Combination-level conditions
1. Combination-independence: minimize the number of machine-combinations that are partially overlapped.
2. Combination-load balance: minimize the time-load difference between machine-combinations.

B. Machine-level conditions
1. Machine-independence: minimize the number of machines assigned to process each part.
2. Machine-load balance: minimize the time-load difference between the machines (members) of each machine-combination.

Another fundamental difference between Kamel and Liu's (1992) algorithm and that of Kamel, Ghenniwa and Liu (1994) is the way in which the machine cells are formed. Kamel and Liu (1992) begin with the smallest number of machines required to make a part and insert them into a cell; they then add machines to the cell as needed. Kamel, Ghenniwa and Liu (1994) form the cells by choosing the largest number of machines (machine combination) needed to create a part and make this the lower bound for at least one of the cells; they then determine which machines are required for the remaining parts and attempt to assign the parts to machines without increasing the cell size where possible. Both methods solve the assignment problem in four steps. First, the assignment problem is represented graphically. This graph model is made up of
combination nodes, part nodes, and links between combination nodes as well as between part nodes and the appropriate combination nodes. The partially-overlapping relationship between the combinations is represented by the links between the combination nodes. Figure 1 shows the Group Technology Machine Assignment algorithm of Kamel, Ghenniwa and Liu (1994).
FIGURE 1. Group Technology Machine Assignment Algorithm (GTMA)

1. Read machine capabilities and part operations.
2. Find all machine combinations for parts with the minimum number of machines.
3. Construct subgraphs of the problem model and solve for all possible cells by linking all part nodes of the same level to their appropriate combination nodes at the same or lower level. Place all partially overlapped candidates in the same cell.
4. Assign machines to cells to minimize cell size.
5. Assign parts to cell candidates such that the combination-load balance is satisfied.
6. Assign parts to machine members in each cell such that the machine-load balance is satisfied.
7. End algorithm.
3.2.1. Incorporating Operation Sequence Constraints

To take operation sequence constraints into consideration, Kamel and Liu (1992) and Kamel, Ghenniwa and Liu (1994) use a similarity coefficient introduced by Tam (1990) to create part families in a separate algorithm. Tam's similarity coefficient considers the minimum number of transformations required to derive one operation sequence from another. He defines three transformations as follows:
Definition 6:

Let A1 and A2 be subsequences of an operation sequence, and let O1 and O2 be two operation symbols.

1. Substitution transformation: A1 O1 A2 →s A1 O2 A2
2. Deletion transformation: A1 O1 A2 →d A1 A2
3. Insertion transformation: A1 A2 →i A1 O1 A2
Tam (1990) defines a similarity coefficient d(x,y) between two operation sequences as Definition 7:
d(x,y) is the smallest number of transformations required to derive y from x.
He then assigns weights to the transformations in order to generalize the similarity measure, using non-negative weights ws, wd and wi for substitution, deletion and insertion respectively. The weighted similarity coefficient between two operation sequences x and y is defined by:

    dw(x,y) = min(ws·ns + wd·nd + wi·ni)

where ns, nd and ni are the numbers of substitution, deletion and insertion transformations, respectively. The coefficient obtained from this equation represents the minimum weighted number of transformations between two operation sequences. However, it does not incorporate the number of common operations between two parts. Tam (1990) overcomes this by defining a new similarity coefficient as follows:

    Sc[i,j] = f(dn[i,j], c[i,j])

where

    dn[i,j] = dw[i,j] / max{dw[y,z] | 1 ≤ y,z ≤ number of parts}

is the normalized similarity coefficient between any two parts,

    c[i,j] = |Pi ∩ Pj| / |Pi ∪ Pj|

is a coefficient representing the commonality of operations between parts Pi and Pj, and f is a function that maps these two parameters onto a linear range. Kamel and Liu (1992) and Kamel, Ghenniwa and Liu (1994) use the following function to calculate the similarity coefficient:

    Sc[i,j] = wn·dn[i,j] + wc·(1 − c[i,j])

where wn + wc = 1 and wn, wc > 0; wn weights the normalized number of transformations between the two parts, and wc weights their commonality.
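As a concrete illustration, the weighted transformation distance dw can be computed by dynamic programming (it has the form of a weighted edit distance), and Sc then combines the normalized distance with the commonality term. This is a minimal sketch, not the authors' implementation; the function names are ours, and the maximum pairwise distance dmax used for normalization is passed in as a parameter:

```python
def weighted_distance(x, y, ws=1.0, wd=1.0, wi=1.0):
    """Minimum weighted number of transformations needed to derive
    sequence y from sequence x (dynamic programming over prefixes)."""
    m, n = len(x), len(y)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * wd                      # delete the rest of x
    for j in range(1, n + 1):
        d[0][j] = j * wi                      # insert the rest of y
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (0.0 if x[i - 1] == y[j - 1] else ws)
            d[i][j] = min(sub, d[i - 1][j] + wd, d[i][j - 1] + wi)
    return d[m][n]

def sc(x, y, dmax, wn=0.5, wc=0.5):
    """Sc[i,j] = wn*dn + wc*(1 - c): dn is dw normalized by dmax, c is
    the fraction of common operations; low values mean similar parts."""
    dn = weighted_distance(x, y) / dmax
    c = len(set(x) & set(y)) / len(set(x) | set(y))
    return wn * dn + wc * (1 - c)
```

For example, deriving DKL from CDKL takes one deletion, so `weighted_distance("CDKL", "DKL")` is 1.0; with dmax = 4 the two parts score Sc = 0.25, i.e. quite similar under this dissimilarity-style measure.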
The families are formed using a clustering algorithm which utilizes the similarity coefficients. After the part families are formed, the assignment algorithm can be applied to assign part families to machine cells. Figure 2 shows the part-family creation algorithm of Kamel, Ghenniwa and Liu (1994).
FIGURE 2. Part-Families Algorithm

1. Read machine capabilities and part operations.
2. While there are part pairs and the number of part families has not been exceeded, find distinct part-pairs by choosing the pair with the smallest similarity coefficient.
3. Assign the part to be the seed of a part family.
4. End algorithm.
Considering operation sequences directly when solving the group technology problem at the machine assignment level is important because operation sequences affect set-up time, which in turn affects processing time. As a result, Moussa and Kamel (1994) introduced an algorithm which takes the effect of operation sequences directly into consideration in the group technology machine assignment solution process. Figure 3 shows the flow-chart of their algorithm.
FIGURE 3. Moussa and Kamel's (1994) Algorithm

1. Read machine capabilities and part operations.
2. Calculate the similarity coefficient for parts.
3. Find all machine combinations for parts with the minimum number of machines.
4. For parts with more than one machine combination, search for the largest similarity coefficient with another part.
5. Compare the machine combinations of both parts; eliminate those which do not accommodate both parts.
6. Construct subgraphs of the problem model and solve for all possible cells by linking all part nodes of the same level to their appropriate combination nodes.
7. Assign parts to cells to satisfy the combination-load balance.
8. Assign parts to machines to satisfy the machine-load balance.
9. End algorithm.
Moussa and Kamel (1994) favoured the similarity coefficient introduced by Choobineh (1988) over Tam's. Choobineh's similarity coefficient takes into consideration the operation sequences required to produce the various parts: it counts the number of operations having the same sequence in two parts. At the first level, the similarity coefficient calculates the number of single operations shared by two parts relative to the total number of operations of the part requiring the least processing. At the second level, the similarity coefficient counts the number of sequences of two consecutive operations shared by two parts, and so on for each level.

    Sik(1) = [ Σj=1..N qij·qkj ] / [ Σj=1..N (qij + qkj − qij·qkj) ]

where qij = 1 if operation j is needed for part i, and qij = 0 otherwise.

    Sik(L) = (1/L) · [ Sik(1) + Σl=2..L Cik(l) / (N − l + 1) ]

where

Cik(l) is the number of common sequences of length l between parts i and k;
L is the level at which the similarity coefficient is calculated (e.g., when L = 2, determine how many times a sequence of two operations is repeated);
Sik(L) is the average similarity coefficient of order L between parts i and k, 0 ≤ Sik(L) ≤ 1; and
N = mini(Ni), with Ni being the number of operations of part i.

The advantage of Choobineh's (1988) similarity coefficient over Tam's (1990) is that Tam's requires a subjective choice of the weights and of the function to be used, whereas Choobineh's does not. Another difference is that Choobineh's coefficient measures how similar the operation sequences required by two parts are to each other, while Tam's measures how different they are from each other. In addition, Choobineh's similarity coefficient measures the number of consecutive operations shared by two parts; this number depends on the level at which the similarity coefficient is calculated.
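A short sketch of how Choobineh's coefficient might be computed, assuming the reconstruction of the formulas above. The function names are ours, and counting each shared consecutive subsequence once (rather than with multiplicity) is an assumption of this sketch; the sketch also assumes L ≤ N so the denominators stay positive:

```python
def s1(x, y):
    """First-level coefficient Sik(1): operations common to both parts
    over all operations used by either part (the q_ij formula)."""
    return len(set(x) & set(y)) / len(set(x) | set(y))

def common_seqs(x, y, l):
    """Cik(l): distinct consecutive operation sequences of length l
    shared by the two operation sequences x and y."""
    subs = lambda s: {s[i:i + l] for i in range(len(s) - l + 1)}
    return len(subs(x) & subs(y))

def sik(x, y, L):
    """Average similarity coefficient Sik(L); assumes L <= min(len(x), len(y))."""
    N = min(len(x), len(y))
    total = s1(x, y)
    for l in range(2, L + 1):
        total += common_seqs(x, y, l) / (N - l + 1)
    return total / L

# Parts 4 (CDKL) and 12 (DKL) share operations D, K, L and the
# consecutive pairs DK and KL, giving Sik(2) = (0.75 + 2/2) / 2.
print(sik("CDKL", "DKL", 2))   # 0.875
```

Identical sequences score 1.0 at every level, and a high value here means similar parts, in contrast to Tam's dissimilarity-style measure.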
3.2.2. Sample Results To illustrate the performance of the algorithms, we consider a 6-machine, 20-part manufacturing system called SYS I, as shown in Tables 1 and 2. For comparison purposes, results obtained for the same system using Kamel, Ghenniwa and Liu's (1994) algorithm are shown.
TABLE 1. SYS I: 6 Machines and their Machine Capabilities (Liu, 1992)

Machine | Number of Operations | Machine Capability
1       | 4                    | ACKL
2       | 2                    | BD
3       | 4                    | EHIJ
4       | 5                    | JKLGF
5       | 5                    | ABCIS
6       | 7                    | JKLUXYZ
TABLE 2. SYS I: 20 Parts and their Operation Sequences

Part | Number of Operations | Operation Sequence
1    | 4                    | ABCD
2    | 2                    | XY
3    | 3                    | BCD
4    | 4                    | CDKL
5    | 3                    | EFG
6    | 2                    | EG
7    | 3                    | USA
8    | 2                    | FG
9    | 3                    | HIJ
10   | 2                    | HJ
11   | 3                    | IJK
12   | 3                    | DKL
13   | 2                    | IJ
14   | 3                    | JKL
15   | 2                    | US
16   | 2                    | KL
17   | 3                    | XYZ
18   | 2                    | JK
19   | 3                    | ABC
20   | 2                    | YZ
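The first GTMA step, finding all machine combinations of minimum size that can produce each part, can be sketched against this data. This is an illustrative reconstruction (the function name is ours, not from the chapter) using the SYS I machine capabilities of Table 1:

```python
from itertools import combinations

# Machine capabilities of SYS I (Table 1).
machines = {
    "M1": set("ACKL"), "M2": set("BD"), "M3": set("EHIJ"),
    "M4": set("JKLGF"), "M5": set("ABCIS"), "M6": set("JKLUXYZ"),
}

def min_combinations(part_ops, machines):
    """All machine combinations of minimum cardinality whose joint
    capability set covers the part's operations."""
    ops = set(part_ops)
    for size in range(1, len(machines) + 1):
        found = [c for c in combinations(sorted(machines), size)
                 if ops <= set().union(*(machines[m] for m in c))]
        if found:                  # stop at the smallest covering size
            return found
    return []                      # no combination can produce this part

print(min_combinations("ABCD", machines))   # Part 1: [('M1', 'M2'), ('M2', 'M5')]
print(min_combinations("KL", machines))     # Part 16: [('M1',), ('M4',), ('M6',)]
```

Part 1 (ABCD) thus needs two machines however it is assigned, while Part 16 (KL) can be completed on any one of three single machines, which is why the later assignment stages must choose among alternatives.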
Table 3 shows the result of using Kamel, Ghenniwa and Liu's (1994) GTMA algorithm without considering operation sequences.
TABLE 3. Result of Applying GTMA to SYS I

Cell | Machines | Parts
1    | M1, M2   | P1, P3, P4, P12, P16
2    | M3, M4   | P5, P6, P8, P9, P10, P11, P13
3    | M5, M6   | P2, P7, P14, P15, P17, P18, P19, P20
In the above table, we note that the machines can easily be subdivided into three distinct cells of two machines each. The parts contained in the first cell (M1, M2) are P1, P3, P4, P12 and P16. Those contained in the second cell (M3, M4) are P5, P6, P8, P9, P10, P11 and P13. Finally, the parts processed in the third cell (M5, M6) are P2, P7, P14, P15, P17, P18, P19 and P20. This result is based on an assignment which considers operation types but not operation sequences. Table 4 shows the clustering of the part data in Table 2 into part families using Kamel, Ghenniwa and Liu's (1994) part-families algorithm, which uses Tam's (1990) similarity coefficient to determine the part-families.

TABLE 4. Part Families for SYS I

Part Family | Parts                  | Operations
PF1         | P1, P3, P19            | ABCD
PF2         | P2, P17, P20           | XYZ
PF3         | P4, P12, P14, P16      | CDJKL
PF4         | P5, P6, P8             | EFG
PF5         | P7, P15                | ASU
PF6         | P9, P10, P11, P13, P18 | HIJK
Table 5 contains the result of using these part-families as input to the GTMA algorithm of Kamel, Ghenniwa and Liu (1994).
TABLE 5. Result of GTMA for Part-Families

    | M2 | M5 | M6 | M1 | M3 | M4
PF3 | 1  |    |    | 1  |    |
PF1 | 1  | 1  |    |    |    |
PF5 |    | 1  | 1  |    |    |
PF2 |    |    | 1  |    |    |
PF4 |    |    |    |    | 1  | 1
PF6 |    |    |    |    | 1  | 1
In Table 6, the part-families are replaced by their constituent parts for comparison with the results obtained by Moussa and Kamel (1994).
TABLE 6. Result With Part-Families Replaced by Parts

Part Family | Machines | Parts
PF3         | M1, M2   | P4, P12, P14, P16
PF1         | M2, M5   | P1, P3, P19
PF5         | M5, M6   | P7, P15
PF2         | M6       | P2, P17, P20
PF4         | M3, M4   | P5, P6, P8
PF6         | M3, M4   | P9, P10, P11, P13, P18
Table 7 contains the results obtained after running Moussa and Kamel's (1994) algorithm, which takes operation sequence similarity into account while performing the assignment, on the SYS I data. It should be noted that these results are obtained using Choobineh's (1988) similarity coefficient.
TABLE 7. Result of Modified GTMA Algorithm for SYS I

Cell | Machines | Parts
1    | M1, M2   | P1, P3, P4, P12
2    | M3, M4   | P5, P6, P8, P9, P10, P11, P13, P14, P16, P18
3    | M5, M6   | P2, P7, P15, P17, P19, P20
When the results obtained using Moussa and Kamel's (1994) modified GTMA algorithm (Table 7) are compared to those obtained using the GTMA algorithm alone (Table 3), as well as to those obtained using it with the operation sequence algorithm (Table 6), the following observations can be made:

1. Moussa and Kamel's (1994) algorithm gives three distinct cells, while the GTMA algorithm applied to the part-families created using the operation sequence algorithm is only capable of giving two distinct cells.
2. Moussa and Kamel's (1994) algorithm gives three distinct cells, as does the GTMA algorithm which does not consider operation sequencing. Table 8 contains a comparison of the parts contained in each cell for both cases.
TABLE 8. Comparison of Part-Cell Distribution Between the GTMA Algorithm and the Modified GTMA Algorithm

Cell | GTMA Parts                           | Modified GTMA Parts
1    | P1, P3, P4, P12, P16                 | P1, P3, P4, P12
2    | P5, P6, P8, P9, P10, P11, P13        | P5, P6, P8, P9, P10, P11, P13, P14, P16, P18
3    | P2, P7, P14, P15, P17, P18, P19, P20 | P2, P7, P15, P17, P19, P20
As can be seen in the above table, the results are not identical in the two cases, which is to be expected given the consideration of operation sequencing. Now, let's consider each of these part groupings as a part-family and study the operations they require.

TABLE 9. Comparison of Part-Family Operations

Part Family | GTMA Operations | Modified GTMA Operations
PF1         | ABCDKL          | ABCDKL
PF2         | FGHIJEK         | HIJKLEFG
PF3         | XYJKLZABCUS     | XYZABCUS
From this comparison, it can be observed that the modified GTMA algorithm gives a more even distribution of the operation sequences between the part-families. Although at first glance the modified GTMA algorithm appears to increase the amount of work to be done in cell 2 relative to the original GTMA, this is beneficial from the point of view of tooling costs, since operations J, K and L, which are required in part-family 3 of the original GTMA algorithm, are no longer performed for part-family 3 of the modified GTMA algorithm. In addition, moving the parts requiring operations J, K and L to part-family 2 of the modified GTMA algorithm only adds operation L to the already existing operations. It is reasonable to assume that the setup time of adding a single operation to a set of operations will be less than that of adding three operations to a different set.
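The tooling-cost argument can be verified with simple set arithmetic on the Table 9 operation sets; this is a quick check we add for illustration, not part of the original algorithms:

```python
# Operation sets of part-families 2 and 3 under each algorithm (Table 9).
gtma     = {"PF2": set("FGHIJEK"),  "PF3": set("XYJKLZABCUS")}
modified = {"PF2": set("HIJKLEFG"), "PF3": set("XYZABCUS")}

# Operations the modified algorithm removes from part-family 3...
print(sorted(gtma["PF3"] - modified["PF3"]))   # ['J', 'K', 'L']

# ...versus the single operation it adds to part-family 2 in exchange.
print(sorted(modified["PF2"] - gtma["PF2"]))   # ['L']
```

Three operations leave cell 3's tooling requirement while only one is added to cell 2's, which is the trade-off the text describes.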
4. Summary and Conclusions In this chapter, cellular manufacturing has been introduced with its concepts, uses and solution methodologies. A brief description of various traditional solution methodologies has been given. In addition, the reasons for the need to study solving the group technology problem simultaneously with the machine assignment problem have been given. It has been shown that the techniques which allow this simultaneous solution are more effective in delivering good group technology solutions. Furthermore, new solution techniques which take into consideration the effect of operation sequences while solving the group technology
machine assignment problem directly have been introduced along with some results obtained using these techniques. These results illustrate the effect of operation sequences on the machine assignment as well as demonstrating the improvement in the group technology assignment. Research is continuing on incorporating machine capacity constraints, time constraints, and cost factors as well as limiting cell size and the number of cells.
REFERENCES

1. Askin, R. G. and K. S. Chiu (1990). "A Graph Partitioning Procedure for Machine Assignment and Cell Formation in Group Technology", International Journal of Production Research, Vol. 28, No. 8, pp. 1555-1572.
2. Barnes, E. R. (1982). "An Algorithm for Partitioning the Nodes of a Graph", SIAM Journal of Algebraic and Discrete Methods, Vol. 3, No. 4, pp. 541-550.
3. Boe, W. J. and C. H. Cheng (1991). "A Close Neighbour Algorithm for Designing Cellular Manufacturing Systems", International Journal of Production Research, Vol. 29, No. 10, pp. 2097-2116.
4. Burbidge, J. L. (1971). "Production Flow Analysis", Production Engineer, April/May, pp. 139-152.
5. Chan, H. M. and D. A. Milner (1982). "Direct Clustering Algorithm for Group Formation in Cellular Manufacturing", Journal of Manufacturing Systems, Vol. 1, No. 1, pp. 65-74.
6. Choobineh, F. (1988). "A Framework for the Design of Cellular Manufacturing Systems", International Journal of Production Research, Vol. 26, No. 7, pp. 1161-1172.
7. Chow, W. S. and O. Hawaleshka (1992). "An Efficient Algorithm for Solving the Machine Chaining Problem in Cellular Manufacturing", Computers and Industrial Engineering, Vol. 22, No. 1, pp. 95-100.
8. Fiduccia, C. M. and R. M. Mattheyses (1982). "A Linear-Time Heuristic for Improving Network Partitions", Proceedings, IEEE 19th Design Automation Conference.
9. Ghenniwa, H. (1990). Solving the Assignment Problem of Multi-Agent Systems, M.A.Sc. Thesis, University of Waterloo.
10. Ghosh, S., S. Melnyk and G. L. Ragatz (1992). "Tooling Constraints and Shop Floor Scheduling: Evaluating the Impact of Sequence Dependency", International Journal of Production Research, Vol. 30, No. 6, pp. 1237-1253.
11. Hadley, S. W., B. L. Mark and A. Vannelli (1992). "An Efficient Eigenvector Approach for Finding Netlist Partitions", IEEE Transactions on Computer-Aided Design, Vol. 11, No. 7, pp. 885-892.
12. Heragu, S. S. (1994). "Group Technology and Cellular Manufacturing", IEEE Transactions on Systems, Man and Cybernetics, Vol. 24, No. 2, pp. 203-214.
13. Hyer, N. L. and U. Wemmerlov (1985). "Group Technology Oriented Coding Systems: Structures, Applications and Implementation", Production and Inventory Management, Vol. 26, pp. 55-78.
14. Jain, A. K., R. G. Kasilingam and S. D. Bhole (1990). "Cell Formation in Flexible Manufacturing Systems Under Resource Constraints", Computers and Industrial Engineering, Vol. 19, Nos. 1-4, pp. 437-441.
15. Kamel, M., H. Ghenniwa and T. Liu (1994). "Solving the Group Technology Problem at the Machine Assignment Level", Journal of Intelligent Manufacturing, Vol. 5, pp. 225-234.
16. Kamel, M. and T. Liu (1992). "Machine-Part Assignment for Group Technology", Proceedings, ISRAM 92, pp. 811-817.
17. Kernighan, B. W. and S. Lin (1970). "An Efficient Heuristic Procedure for Partitioning Graphs", Bell Systems Technical Journal, Vol. 49, No. 2, pp. 291-307.
18. King, J. R. (1980). "Machine-Component Grouping in Production Flow Analysis: An Approach Using a Rank Order Clustering Algorithm", International Journal of Production Research, Vol. 18, No. 2, pp. 213-232.
19. King, J. R. and V. Nakornchai (1982). "Machine-Component Group Formation in Group Technology: Review and Extension", International Journal of Production Research, Vol. 20, pp. 117-133.
20. Kumar, K. R. and A. Vannelli (1987). "Strategic Subcontracting for Efficient Disaggregated Manufacturing", International Journal of Production Research, Vol. 25, No. 12, pp. 1715-1728.
21. Kusiak, A. (1985). "The Part Families Problem in Flexible Manufacturing Systems", Annals of Operations Research, Vol. 3, pp. 279-300.
22. Kusiak, A. (1987). "The Production Equipment Requirements Problem", International Journal of Production Research, Vol. 25, No. 3, pp. 319-325.
23. Kusiak, A. and W. S. Chow (1987). "An Efficient Cluster Identification Algorithm", IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-17, No. 4, pp. 696-699.
24. Kusiak, A., A. Vannelli and K. R. Kumar (1986). "Clustering Analysis: Models and Algorithms", Control and Cybernetics, Vol. 15, No. 2, pp. 139-154.
25. Liu, T. A. (1992). Algorithms for Forming Part-Families and Their Assignment to Machines in Group Technology, M.A.Sc. Thesis, University of Waterloo.
26. Logendran, R. (1990). "Workload Based Model for Minimizing Total Intercell and Intracell Moves in Cellular Manufacturing", International Journal of Production Research, Vol. 28, No. 5, pp. 913-925.
27. Logendran, R. (1991). "Impact of Sequence Operations and Layout of Cells in Cellular Manufacturing", International Journal of Production Research, Vol. 29, No. 2, pp. 375-390.
28. Logendran, R. (1992). "A Model for Duplicating Bottleneck Machines in the Presence of Budgetary Limitations in Cellular Manufacturing", International Journal of Production Research, Vol. 30, No. 3, pp. 683-694.
29. McCormick, W. T., P. J. Schweitzer and T. W. White (1972). "Problem Decomposition and Data Reorganization by Cluster Technique", Operations Research, Vol. 20, No. 5, pp. 993-1009.
30. Moussa, S. E. and M. S. Kamel (1994). "An Algorithm for the Assignment of Parts to Machines in Cellular Manufacturing", Proceedings, ISRAM 1994, pp. 487-492.
31. Rajamani, D., N. Singh and Y. P. Aneja (1990). "Integrated Design of Cellular Manufacturing Systems in the Presence of Alternative Process Plans", International Journal of Production Research, Vol. 28, No. 8, pp. 1541-1554.
32. Srinivasan, G., T. T. Narendran and B. Mahadevan (1990). "An Assignment Model for the Part-Families Problem in Group Technology", International Journal of Production Research, Vol. 28, No. 1, pp. 145-152.
33. Sule, D. R. (1991). "Machine Capacity Planning in Group Technology", International Journal of Production Research, Vol. 29, No. 9, pp. 1909-1922.
34. Tam, K. Y. (1990). "An Operation Sequence Based Similarity Coefficient for Part Families Formation", Journal of Manufacturing Systems, Vol. 9, No. 1, pp. 55-68.
35. Vakharia, A. J. (1986). "Methods of Cell Formation in Group Technology: A Framework for Evaluation", Journal of Operations Management, Vol. 6, pp. 257-271.
36. Vannelli, A. and S. W. Hadley (1990). "A Gomory-Hu Cut Tree Representation of a Netlist Partitioning Problem", IEEE Transactions on Circuits and Systems, Vol. 37, No. 9, pp. 1133-1139.
37. Vannelli, A. and R. G. Hall (1993). "An Eigenvector Methodology for Finding Part-Machine Families", International Journal of Production Research, Vol. 31, No. 2, pp. 325-349.
38. Wei, J. C. and G. M. Kern (1989). "Commonality Analysis: A Linear Cell Clustering Algorithm for Group Technology", International Journal of Production Research, Vol. 27, No. 12, pp. 2053-2062.
39. Wemmerlov, U. and N. L. Hyer (1986). "The Part Family/Machine Group Identification Problem in Cellular Manufacturing", Journal of Operations Management, Vol. 6, pp. 125-147.
Planning, Design, and Analysis of Cellular Manufacturing Systems A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors) © 1995 Elsevier Science B.V. All rights reserved.
Manufacturing Cell Loading Rules and Algorithms for Connected Cells

Gürsel A. Süer(a), Miguel Saiz(a), Cihan Dagli(b) and William Gonzalez(c)

(a) Industrial Engineering Department, University of Puerto Rico-Mayagüez, Mayagüez, PR 00681-5000
(b) Engineering Management Department, University of Missouri-Rolla, Rolla, MO 65401
(c) Avon Lomalinda Incorporated, San Sebastián, PR 00755
1. INTRODUCTION

Many manufacturing companies are using Cellular Manufacturing (CM) techniques and converting their conventional manufacturing facilities to manufacturing cells. In a survey of 54 US manufacturing companies, Wemmerlöv and Hyer [22] found that 60% of the responding companies are implementing CM techniques. Half of them have only cells, whereas the remaining half have cells and dedicated machines. The average number of cells is 6, although the number of cells varies between 1 and 35. This substantiates the claim that CM is gaining importance. Cellular Manufacturing can be defined as the implementation of Group Technology (GT) principles in a manufacturing environment. The central theory is that various situations that require decisions can be grouped together based on pre-selected, commonly shared criteria, and that the decision that applies to one situation in the group will apply to all situations in that group. The application of Group Technology to manufacturing is achieved by identifying the items with either similar design or manufacturing characteristics and grouping them into families of like items. Ham [12] said that GT is the realization that many problems are similar, and that by grouping similar problems a single solution can be

* This study has been supported by the National Science Foundation under Grant No. DDM-9113901 and Avon Lomalinda Incorporated.
found to a set of problems, thus saving time and effort. Ranson [17] mentioned that GT is a logical arrangement and sequence of all facets of company operation to bring the benefits of mass production to a high-variety, mixed-quantity production environment. Gallagher and Knight [8] defined it as a manufacturing philosophy that identifies and exploits the underlying sameness of items and of the processes used for their manufacture. The benefits derived from Cellular Manufacturing include reduced work-in-process inventory and setup time, improved product quality, easier scheduling, better visibility of product schedule status and quicker feedback of manufacturing deficiencies.

2. CONTROL OF MANUFACTURING CELLS

There are two major stages in a Cellular Manufacturing implementation:

1. Designing manufacturing cells
2. Controlling manufacturing cells

Designing manufacturing cells consists of several tasks such as family formation, determination of machine requirements and cell formation, internal cell design and layout, forming the product versus feasible cell matrix, etc. Having focused the entire manufacturing system into smaller units (focused factories) and designed the manufacturing cells, there is a need to devise a system to determine the items to be manufactured, how many of them should be made, and in which order they should be processed at each workstation. MRP can answer these questions, but it does not address the issue of how to produce in the most efficient way. It follows that MRP and CM are not conflicting but complementary, as specified by Wemmerlöv [23]. There are two activities of special interest in Cellular Manufacturing in connection with how to control the focused units:

1. Cell Loading
2. Cell Scheduling
Cell loading involves determining to which cell(s), among the feasible cells of a focused unit, products of that family should be assigned, how many units should be produced and in what order. On the other hand, cell scheduling deals with the scheduling of operations in a cell through several workstations once a product has been assigned. This includes the determination of start times, completion times, lot sizes and transfer sizes. In some cases, finding the order of products in a cell might be included among the cell scheduling tasks. This paper focuses on cell loading aspects in CM.
The cell loading process in a focused unit requires three major tasks to be performed:

1. Select a product
2. Select a cell
3. Find the order of products

One of the decisions to be made is to select a product from the family in question. Another decision point is to select a cell from the focused unit. If a cell is selected first and then a feasible product is chosen, this type of search is called cell priority. If the search process is reversed, it is called product priority. Another task to be performed is to find the order of products in a cell once all the products to be produced in that cell have been determined. However, this task is usually inherent in the product selection process, and the product selection order usually implies the sequence of production as well. The overall objectives of a cell loading process are to minimize the work-in-process inventory, minimize tardiness, maximize the utilization of the cells and balance the load among the cells.

3. LITERATURE REVIEW

Cellular Manufacturing, as an application of Group Technology, has emerged after years of development and theoretical consideration in academic and application-oriented arenas. In recent years, the literature on GT and CM has shown an emerging interest in the areas of cell formation, layout, product standardization and coding, and the application of job shop scheduling rules to some cell scheduling problems. However, little attention has been paid to the cell loading problem in the literature so far. The objective of this study is to cover this gap in the literature and meet the need in industry. Some group scheduling methods, as discussed by Ham [12], are:

1. Flowline Group Production Planning by Petrov. This technique is effective for a large-scale flowshop scheduling problem.
2. Work Loading and Scheduling in Manufacturing Cells. This method is concerned with the scheduling of work in a manufacturing cell.
3.
Group Scheduling Technique (GST) by Hitomi and Ham. This technique provides the optimal solution for both group sequence and job sequence in each group for a single machine and the multi-stage manufacturing system.

Greene and Sadowski [9], and Greene and Cleary [10] mentioned scheduling issues, benefits, drawbacks and system variables in a cellular manufacturing environment. Chisman [2] suggested a novel approach for a single cell for estimating sequence-dependent changeover time data and used this data in a
traveling salesman algorithm to find an optimal product ordering for a special case. Fry, Wilson and Breen [7] described a successful implementation of Group Technology in an industry. Taylor and Ham [21] discussed scheduling algorithms for a single part family and a set of part families. Sunduram [20] provided two heuristic algorithms to find near-optimal sequences for GT shop scheduling problems to minimize makespan. Later, he compared the performance of the heuristics with integer programming solutions. Dale and Dewhurst [4] analyzed a GT cell under various conditions; they concluded that SPT minimizes the work-in-process inventory. Mosier, Elvers and Kelly [15] studied a GT shop using three job shop scheduling rules. Wirth, Mahmoodi and Mosier [24] mentioned that both labor scheduling and group scheduling heuristics need to be considered to control the cell effectively. In another study, Mahmoodi, Tierney and Mosier [13] compared the performance of traditional single-stage heuristics and two-stage group scheduling heuristics. Miles and Batra [14] simulated a manufacturing system with many similar cells; their primary concern was to minimize tardiness and labor costs. Guang-Xun and Li-Hang [11] modified the job scheduling rules and used them in a GT scheduling application. Süer and Gonzalez [18] discussed the use of a fixed time bucket approach to synchronize the flow of materials in a cell when interruptions exist in the process. Espino [6] explained four cell loading algorithms to deal with the cell loading problem. Süer and Saiz [19] provided a simple classification scheme for cell loading problems.

4. PROBLEM STATEMENT

The term connected cells refers to the fact that the output from a cell becomes an input to the next cell, i.e., it takes at least two cells to complete a product. There are several possible combinations for connected cell configurations. In general, they can be grouped in two categories:

1.
Pure serial systems (see Figure l a) 2. Serial-parallel mixed systems (see Figure lb) Assigning a product to a cell(s) requires careful consideration in a connected cells environment since the product will consume some of the available capacity in the succeeding cells as well. As a result, the assignment of a product cannot be made solely based on the initial cell(s) but it should consider the load on the affected cells simultaneously. In pure serial syStems, it is relatively straightforward to determine the impact in each cell because the product will go through each cell successively (in some cases, cell skipping is likely). When serial-parallel mixed systems are considered, cell loading task is further complicated since alternative routes emerge and some cells can serve as common cells to many others. This study concentrates on the common cell(s) case and
Figure 1. Connected Cells Configuration. a) Pure Serial Systems. b) Serial-Parallel Mixed Systems.

several different possible configurations are discussed in the following paragraphs. Each product family might be limited to one dedicated cell besides the common cell. In this case, cell loading is greatly simplified to the determination of the order of the products within a family. The results of the single machine scheduling literature can be used to perform the cell loading task. While this restriction may be unavoidable in some facilities, it does impose constraints on the system which will limit machine utilization and system flexibility. The product family versus cell matrix (Mij) describing this situation is presented in Figure 2a, where (i) designates product families and (j) designates cells (Mij = 1 if cell (j) is a feasible cell for product family (i); otherwise Mij = 0). The number of dedicated cells and the number of product families are both equal to (n). In other CM systems, a product family can be processed by any cell besides the common cell. In this situation, the cell loading task becomes more complicated than in the previous case. It requires the assignment of the product families to the cells along with the order of the product families in each cell and the order of the products in each family. The results of the parallel machine scheduling literature (identical, uniform or unrelated) can be used for the cell loading purposes. This case is shown in Figure 2b, where (n) represents the number of cells and (f) denotes the number of product families.
Figure 2. Possible Connected Cell Configurations. 2a) Dedicated Cells. 2b) Parallel Cells. 2c) Overlapping Cells.
This paper addresses the situation where the product family versus cell matrix takes any other form between the two extremes shown in Figures 2a and 2b. In other words, there is at least one product family that cannot be processed in all the cells, or at least one product family that can be processed in at least two cells besides the common cell. This implies an overlapping between the cells for processing different product families, which makes the cell loading task a very complex one. The increased complexity comes from the fact that there are limitations to be considered in assigning a product family to the cells. In other words, a product family can be assigned only to its feasible cells. Unfortunately, the parallel machine scheduling results are no longer applicable in this situation. A typical product family versus cell matrix representing this case is given in Figure 2c. In the previous paragraphs, various manufacturing cell configurations and their impact on cell loading were discussed. It is also important to mention that the capacity of the common cell(s) has to be taken into account in the cell loading task as well. The number of common cells and their processing capabilities will be important in selecting appropriate scheduling rules and/or algorithms. Furthermore, in some cases, the cell loading task might actually be centered on the common cell since it affects the production in many other cells and, most likely, the capacity is tighter in the common cell. In the literature, the cell loading task in CM where there are several overlapping cells connected through a common cell(s) has not been addressed before. There is a need to define some rules to handle this problem and then evaluate their performance. This study aims to meet this objective with an application in a real manufacturing setting.

5. THE RULES USED

The eleven rules included in this study are grouped in six categories as described in the following subsections. The example problem given in Figure 3 is used throughout this section to facilitate the explanation of the rules. Figure 3a shows the current load on five cells, Figure 3b denotes the feasible cells for each product and Figure 3c presents the demand figures for each period for the five products considered.

5.1. Search Priority: Cell Priority (CP), Product Priority (PP)

Search priority determines the way the search process is carried out. If cell priority is used, first a cell is chosen and then the search shifts to find a feasible product for that cell. Referring to the example problem, the first task would be to choose a cell among the five cells by considering one of the rules described in section 5.4. Later, the search would focus on selecting a feasible product to run on the selected cell among the five candidate products by considering the rules described in sections 5.2 and 5.3. The search process is reversed if product priority is used: first a product is selected and later a feasible cell is searched to run the selected product.
5.2. Primary Product Rule: Earliest Due Date (EDD)

The products are sorted by EDD first. When there is a tie, one of the secondary rules mentioned in section 5.3 is used to break the tie. In the example problem, products P1, P2 and P3 have the same due date (week 1). Therefore, they have to be processed before products P4 and P5, which are due in week 2.

5.3. Secondary Product Rules: Number of Feasible Cells (NFC), Number of Cells Required (NCR)
A secondary rule is used to select the appropriate product when there is a tie with respect to the EDD rule. In the example problem considered, there is a tie among products P1, P2 and P3 since they all have the same due date (week 1). The NFC rule checks the number of available cells among the feasible cells that each candidate product can go through at the time of a product selection. In the above example, P1, P2 and P3 have 3, 2 and 2 feasible cells, respectively. However, cell 2 and cell 5 are fully loaded in period 1. As a result, we have to consider the number of available feasible cells in selecting the product. They are 2, 1 and 1 for P1, P2 and P3, respectively. The selection will be made based on these results (minimum or maximum depending upon the rule combination used, as explained in sections 5.6 and 6). If the product with the maximum NFC value is preferred, then P1 would be chosen. If the user decides to choose the one with the minimum NFC value, then the decision would be to select P2 or P3. The NCR rule calculates the number of cells required to complete each candidate product by its due date and uses this information to choose the next product to be assigned. In this case, the number of cells required for P1, P2 and P3 are 0.60, 1 and 2 cells, respectively. If the product with the maximum NCR value is given higher priority, then P3 would be assigned first. If the decision is to favor the minimum NCR value, then P1 would be chosen. These rules are applied in a dynamic manner, i.e., the values of NFC or NCR are revised after each product assignment.

5.4. Primary Cell Rules: Cell Load (CL), Number of Feasible Products (NFP), Product Mix (PM)
The CL rule chooses the cells depending on the current load. In the example given above, the current loads on cells are 20, 40, 10, 25 and 40 hours for cells 1, 2, 3, 4 and 5, respectively. The selection in this case will be made based on these results (minimum or maximum depending upon the rule combination used). If minimum CL is the basis for selection then cell 3 is selected. However, if the user wants to select the cell with maximum load, the decision would be to select either cell 2 or cell 5.
[Figure 3a (current load on cells 1-5 for periods 1 and 2) and Figure 3b (product versus cell feasibility matrix) are not reproducible from the scan. The data of Figure 3c are:]

Product   Demand Week 1   Demand Week 2   Weekly Production Rate
P1        60              -               100
P2        40              -               40
P3        80              -               40
P4        -               60              90
P5        -               60              120
Figure 3. Example Problem. 3a) Current Load on Cells. 3b) Feasible Cells for Each Product. 3c) Demand Figures and Production Rates.
The NFP rule checks the number of feasible products that can be assigned to each cell at the time of a cell selection. Since there are only three products that can be assigned to cells currently, we determine the number of feasible products for each cell considering those three products. The number of feasible products for cells 1, 2, 3, 4 and 5 are 2, 2, 2, 0 and 1, respectively. If the user decides to choose the cell with the maximum NFP, then cells 1, 2 or 3 are candidates. If minimum NFP is the basis for selection, then cell 5 is chosen (NFP = 0 implies that there is no product to assign to cell 4, and therefore it is not considered at all). PM takes into account the current product mix in a cell. There are 2, 3, 1, 2 and 2 products already assigned to cells 1, 2, 3, 4 and 5, respectively. The cell selection process for this case is discussed in section 5.6. The values of these rules are updated dynamically as well.
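As an illustration, the product and cell selection rules above can be sketched in a few lines of Python. The loads and the P1 data follow the worked example; the exact feasibility sets for P2 and P3 below are assumptions (the text gives only their counts), so treat them as placeholders.

```python
# Sketch of the rules of Section 5 on the example data.  Cell loads and
# P1's feasible cells come from the text; the sets for P2 and P3 are
# assumed (only their sizes, 2 each, are given).
due_date = {"P1": 1, "P2": 1, "P3": 1, "P4": 2, "P5": 2}   # due week
feasible = {"P1": {1, 2, 3}, "P2": {2, 4}, "P3": {1, 5}}   # partly assumed
available = {1, 3, 4}               # cells 2 and 5 are fully loaded in period 1
cell_load = {1: 20, 2: 40, 3: 10, 4: 25, 5: 40}            # hours

def nfc(product):
    """NFC: number of still-available feasible cells for the product."""
    return len(feasible.get(product, set()) & available)

def edd_then_nfc(products, criterion=max):
    """Primary rule EDD; ties broken by NFC under the MIN or MAX criterion."""
    earliest = min(due_date[p] for p in products)
    tied = [p for p in products if due_date[p] == earliest]
    return criterion(tied, key=nfc)

def cl_rule(cells, criterion=min):
    """CL rule: pick the cell with minimum (or maximum) current load."""
    return criterion(cells, key=lambda c: cell_load[c])

print(edd_then_nfc(["P1", "P2", "P3", "P4", "P5"], max))   # P1 (NFC = 2)
print(cl_rule([1, 2, 3, 4, 5], min))                       # cell 3 (10 hours)
```

Note how the MIN/MAX selection criterion of section 5.6 maps directly onto passing `min` or `max` as the `criterion` argument.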
5.5. Common Cell Capacity: Minimum Load

The utilization of the common cell(s) is important in selecting the set of products to be considered. Whichever common cell has the minimum load, the set of products that needs to be processed on that common cell is chosen, and an appropriate product from that set is finally selected using the rules mentioned above, i.e., the common cell utilization directs the search toward obtaining the right set of products. Consider the additional information given in Figure 4 for the same problem. Figure 4a includes more products and common cell type information, whereas Figure 4b presents the current load on common cells. Demand figures for each product are presented in Figure 4c. Since the common cell CC1 has the lower load, the set of products that will go through CC1 is considered for product selection, namely P1, P2, P3, P4 and P5.

5.6. Selection Criterion: Maximum (MAX), Minimum (MIN)

In comparing the candidate products and cells, the decision might be to select the one with a maximum value or a minimum value. For example, if we choose PM as the cell rule to be used, there are two possibilities. We might choose cell 3 if we would like to level the number of products assigned to each cell, since it has the minimum number of products (MIN). On the other hand, the policy might be to reduce the product mix in most of the cells by sacrificing one or two cells. In this case, the cell with the maximum product mix is selected (MAX). In the same example, that would correspond to cell 2 with 3 products already assigned to it.

6. COMBINATIONS OF THE RULES

The rules described in the previous section are combined in different ways and 48 possible combinations are created. Twenty-four of the rule combinations are of the cell priority type and the remaining 24 are of the product priority type. Figures 5
[Figure 4a (products versus cells matrix with common cell types) and Figure 4b (current load on common cells CC1 and CC2 for periods 1 and 2) are not reproducible from the scan; from the example in Section 8, the common cell assignments are CC1 for P1-P5 and CC2 for P6-P8. The data of Figure 4c are:]

Product   Demand Week 1   Demand Week 2   Weekly Production Rate
P1        60              -               100
P2        40              -               40
P3        80              -               40
P4        -               60              90
P5        -               60              120
P6        50              -               100
P7        40              -               80
P8        -               80              120
Figure 4. Extended Example Problem. 4a) Products versus Cells Matrix. 4b) Current Load on Common Cells. 4c) Demand Figures.
and 6 show the structuring of combinations of rules for the cell priority and the product priority cases, respectively. For example, the combination CP/CL/Min/EDD/NFC/Min means that this application is a cell priority case where first a cell is selected based on minimum cell load. Later, a product with EDD is searched for. If there are several products with the same due date, then the product with the minimum number of feasible cells is given the highest priority. The complete description of the rule combinations is included in Appendix A.

7. THE ASSUMPTIONS MADE

The most important assumption made in this study is the presence of cells connected through a few common cells. Each product will visit only one common cell depending upon its processing requirements. This implies that inter-cell material transfers exist. Components and raw materials arrive at one end; the parts are manufactured and the subassemblies are prepared in the cell and later sent outside the cell for further processing. Finally, they return to the cell for the final operations to be completed. Another assumption is that the production rate of a product on any of its feasible cells is the same. In addition, production can start early. In other words, if there is available capacity in a cell in the current period, a feasible product with a future demand can be assigned now to utilize the limited resources better. A product can be assigned to more than one feasible cell when it is necessary to complete it by its due date, i.e., lot splitting is allowed. Naturally, the number of cells to which a product can be assigned is limited by the maximum number of feasible cells it has. The purpose of this policy is to prevent a product from becoming tardy as long as there are feasible and available cells to assign it.
Even though the objective is to minimize the number of tardy products, a product with a very large processing time might adversely affect the performance of the entire system by delaying the starting times of other products. A product is selected after two decision levels. If there is still a tie among the products, then the first product found with the best value is selected. In the case of a cell selection, if there is a tie among the cells with respect to the cell selection rule used, then the first cell that achieves the best result is chosen as the cell to be loaded. The order of the product selection also determines the order of processing the products in a cell. Moreover, the setup times are assumed to be sequence-independent and negligible. Therefore, they are not considered in this study.
Figure 5. Structuring of Rule Combinations for Cell Priority Case
Figure 6. Structuring of Rule Combinations for Product Priority Case
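The structuring shown in Figures 5 and 6 can also be enumerated programmatically. The 2 x 3 x 2 x 2 x 2 factorization below (search priority x cell rule x cell criterion x secondary product rule x product criterion, with EDD fixed as the primary product rule) is inferred from the rule counts given in the text; it reproduces the 48 combinations, 24 per search priority.

```python
from itertools import product

# Enumerate the rule combinations of Section 6: 2 search priorities x
# 3 cell rules x 2 cell criteria x (EDD fixed) x 2 secondary product
# rules x 2 product criteria = 48 combinations, 24 per search priority.
search = ["CP", "PP"]
cell = ["CL", "NFP", "PM"]
crit = ["Min", "Max"]
second = ["NFC", "NCR"]

combos = ["/".join((s, c, cc, "EDD", sr, pc))
          for s, c, cc, sr, pc in product(search, cell, crit, second, crit)]

print(len(combos))                               # 48
print(sum(c.startswith("CP") for c in combos))   # 24
print(combos[0])                                 # CP/CL/Min/EDD/NFC/Min
```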
8. ALGORITHMS

In this section, the algorithms for both the product and the cell priority searches are discussed. The flowchart of the algorithm for product priority search is presented in Figure 7 and its steps are given below:

1. Sort products by EDD.
2. Divide products into different sets based on the common cell type required.
3. Find the common cell with minimum utilization and identify the corresponding set of products.
4. Select the next unassigned product from the set. If there is a tie, use a secondary rule (NFC or NCR) to select a product.
5. Find a cell to assign the product (CL, NFP or PM). (If none, go to step 4.)
6. If the product can be completed on the selected cell before its due date, then the assignment process is complete. Revise the cell load and common cell load information and go to step 3. Otherwise, continue.
7. Search for another cell to load the capacity requirement beyond the due date in the original cell. Repeat this step until a) the load is completely assigned or b) no feasible cell is available to complete the load (maintain the balance of the load as a task to be considered for later periods).
8. Revise the cell load and common cell load information and go to step 3.

The application of the algorithm to the example problem mentioned previously completes the discussion:

1. Products sorted by EDD: P1, P2, P3, P6, P7, P4, P5, P8.
2. Products on common cell 1: Scc1 = (P1, P2, P3, P4, P5). Products on common cell 2: Scc2 = (P6, P7, P8).
3. The common cell with minimum utilization is CC1. Therefore, Scc1 is selected.
4. Products P1, P2 and P3 have the same due date. Using NCR as a secondary rule, we obtain 0.60, 1 and 2 cells for P1, P2 and P3, respectively. Assuming that we are interested in the minimum (MIN) NCR value, P1 is chosen.
5. Feasible cells for P1 are cells 1, 2 and 3 with loads of 20, 40 and 10 hours, respectively. Obviously, cell 2 is not available for week 1. Using CL as the basis for cell selection and favoring the minimum (MIN) value, cell 3 is chosen as the cell to assign P1.
6. P1 requires a total of 24 hours (number of cells x hours in a week). Since the completion time of P1 on cell 3 will be 34 hours, the assignment process for P1 is completed successfully and the loads for dedicated and common cells are revised. The application continues with step 3 again.
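A compressed sketch of the loop above, assuming simple dictionary inputs; lot splitting (step 7) and carrying unfinished load to later periods are omitted for brevity, and the usage call mirrors the worked example for P1.

```python
def load_products(products, due, hours, feasible,
                  cell_cap, cell_load, common_of, common_load):
    """Greedy product-priority loading: products are taken in EDD order
    from the set of the least-loaded common cell; cells are chosen by
    minimum Cell Load (CL/Min).  No lot splitting (step 7 omitted)."""
    assignment = {}
    remaining = sorted(products, key=lambda p: due[p])   # step 1: EDD sort
    while remaining:
        # step 3: the common cell with minimum load picks the product set
        cc = min(common_load, key=common_load.get)
        pool = [p for p in remaining if common_of[p] == cc]
        if not pool:
            del common_load[cc]          # nothing left for this common cell
            continue
        p = pool[0]                      # step 4: next product by EDD
        # step 5: feasible cells with enough remaining capacity, CL/Min
        cells = [c for c in feasible[p]
                 if cell_cap[c] - cell_load[c] >= hours[p]]
        if not cells:
            remaining.remove(p)          # defer to a later period
            continue
        cell = min(cells, key=lambda c: cell_load[c])
        assignment[p] = cell             # step 6: assign and revise loads
        cell_load[cell] += hours[p]
        common_load[cc] += hours[p]
        remaining.remove(p)
    return assignment

# Worked example for P1: 24 hours needed, feasible cells 1-3, 40-hour weeks.
assignment = load_products(
    products=["P1"], due={"P1": 1}, hours={"P1": 24},
    feasible={"P1": {1, 2, 3}}, cell_cap={c: 40 for c in range(1, 6)},
    cell_load={1: 20, 2: 40, 3: 10, 4: 25, 5: 40},
    common_of={"P1": "CC1"}, common_load={"CC1": 30, "CC2": 45})
print(assignment)   # {'P1': 3}: cell 3 has minimum load with enough room
```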
Figure 7. Flowchart for the Product Priority Approach
For the cell priority case, the order of the search process is reversed by selecting a cell to be loaded first. Once the cell has been identified, the set of products with minimum common cell utilization is determined. Finally, a feasible product from that set is chosen using EDD and/or a secondary rule and assigned to the cell identified previously.

9. EXPERIMENTAL CONDITIONS

Experimentation has been performed in a real cellular manufacturing environment at Avon Lomalinda, Inc., a major jewelry manufacturer located in Puerto Rico. The operations are mostly labor intensive. The company has chosen low-cost, lightweight equipment and machinery to increase the flexibility of the manufacturing system.
9.1. Shop Structure

All the production areas were converted to manufacturing cells by using GT concepts, as shown in Figure 8. There is a total of 17 manufacturing cells grouped in four business units by using Focused Factory concepts, in addition to the three common plating cells (manual, barrel and chain). The team size in a manufacturing cell varies between 20 and 30 employees depending upon the required production rate. This study focuses on Business Unit 2 with 120 products and five cells. The products in the family are grouped in five different subfamilies. The classification of the products into the subfamilies and the allocation of the product families to the cells is given in Table 1. The last column in the table denotes the number of products in each subfamily, whereas the last row shows the number of products that each cell can process.

Table 1
Product versus Cell Matrix
Subfamily           Cell 1   Cell 2   Cell 3   Cell 4   Cell 5   No. of Products
Casting Earring     1        1        1        1        1        33
Plastic Earring     1        1        -        -        -        18
Stamping Earring    1        1        -        -        -        37
Porcelain Earring   1        1        1        1        1        20
Cuff Bracelet       1        1        1        1        1        12
No. of Products     120      120      65       65       65       120
9.2. Intra-cell Material Transfer

The cells are arranged and equipped such that intra-cell material transfer follows a unidirectional flow.

9.3. Inter-cell Material Transfer

Most of the operations on a product are performed in the cell. Then, the parts leave the cell to be plated in one of the plating cells depending on the process requirements. Later, the parts return to the same cell for the remaining operations to be completed, as shown in Figure 8. Inter-cell material transfers indicate that the cells are connected, even though some products do not visit plating cells.

9.4. Typical Operations

The typical operations performed in Business Unit 2 are: casting, deburring, degating, inspection, cleaning, tumbling, putting on the sleeve, plating, removing the sleeve, finishing and packing.

9.5. Calculation of Production Rates

An integer linear programming model has been suggested to determine the hourly production rates. Another decision variable is the number of employees to be assigned to each operation and/or stage (a group of operations). The objective is to maximize the production rate (equation 1). There are three types of constraints in the model. The first constraint type (equation 2) guarantees that there are enough employees at each stage to reach the required production rate. The second constraint type (equation 3) ensures that the number of employees assigned to each stage/operation does not exceed the upper bounds. The upper bounds denote the number of machines available for each operation/stage. The machine limitations are given in Table 2. The last constraint type (equation 4) sets the upper limit for the availability of employees in the cell. The maximum number of employees that can be assigned to a cell is 30 due to space restrictions. Equations (5) and (6) provide the bounds on the decision variables.
The notation used and the model formulated follow:

R   : hourly production rate
Xi  : number of employees required for stage/operation i
Ui  : upper bound on the number of employees at stage/operation i
PTi : unit processing time (hr) for stage/operation i
W   : upper bound on the total number of employees
s   : number of stages/operations
Figure 8. Pictorial Representation of the System
Table 2
Upper Bounds Used

Upper Bounds                   Cell 1   Cell 2   Cell 3   Cell 4   Cell 5
Number of Tumbling Machines    3        3        3        3        3
Number of Casting Machines     1        1        1        1        1
Number of Heat Sink Machines   12       12       -        -        -
Maximum Team Size              30       30       30       30       30
Objective Function:

Max Z = R                                            (1)

Subject to:

(Xi) * (1/PTi) - R >= 0,    i = 1, 2, 3, ..., s      (2)

Xi <= Ui,                   i = 1, 2, 3, ..., s      (3)

sum(i=1 to s) Xi <= W                                (4)

Xi integer and positive for all i                    (5)

R integer and positive                               (6)
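Because the cheapest integer staffing for a given rate R is the ceiling of R times the stage processing time (constraint 2 requires Xi >= R * PTi), the model can also be solved by a direct upward search over R instead of an ILP solver. The sketch below uses hypothetical stage data, with processing times expressed in minutes so that all arithmetic stays in exact integers.

```python
# Direct search for the production-rate model (equations 1-6).  For a
# candidate hourly rate R, the minimal staffing at stage i is
# ceil(R * pt_min[i] / 60); feasibility is monotone in R, so the first
# infeasible R ends the search.
def max_rate(pt_min, upper, W):
    """pt_min: unit processing times (minutes) per stage; upper: machine
    bounds per stage; W: team-size limit.  Returns (R, staffing)."""
    best = (0, [0] * len(pt_min))
    r = 1
    while True:
        x = [-(-r * t // 60) for t in pt_min]   # ceiling division
        if any(xi > ui for xi, ui in zip(x, upper)) or sum(x) > W:
            return best
        best = (r, x)
        r += 1

# Hypothetical three-stage cell: 3, 12 and 6 minutes per unit, machine
# limits 3, 12 and 6, and a 30-employee team-size cap.
rate, staffing = max_rate([3, 12, 6], [3, 12, 6], 30)
print(rate, staffing)   # 60 [3, 12, 6]
```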
9.6 Cell Scheduling Considerations

Although cell loading precedes the cell scheduling task in the hierarchy of the planning process, the cell scheduling strategy needs to be considered before applying the cell loading procedure. A 4-hour time bucket-based synchronization approach is used for cell scheduling, as described by Süer and Gonzalez [18]. As a result, 4-hour production rates are used in the cell loading process to be compatible with the cell scheduling approach. Four-hour production rates are calculated by simply multiplying the hourly production rates determined by the integer linear programming model by 4.

9.7 Data Input for Cell Loading

The input data required for cell loading are demand figures, the subfamily classification and the 4-hour production rates. Demand forecast figures over 1990 and 1991 are grouped in four 6-month planning horizons, thus obtaining four different demand patterns, DP1, DP2, DP3 and DP4, respectively. A 6-month planning horizon consists of 13 two-week periods due to the marketing strategy of the company. An existing subfamily classification is used as is. The work schedule for manufacturing cells is 40 hours per week with only one shift and no overtime work allowed. However, the work schedule for common cells is more flexible, with the possibility of working overtime and/or a second shift when needed.

9.8 Software Developed

The programs were developed in Turbo Pascal version 6.0 along with the Topaz version 3.0 database manager and objects from Object Professional version 1.1. The programs were run on an IBM compatible 486 PC.
10. PERFORMANCE MEASURES

The four performance measures (PFM) included in this study are:

1. Number of Tardy Jobs (nT)
2. Total Tardiness (TT)
3. Maximum Tardiness (Tmax)
4. Average Cell Utilization (CUav)
11. THE BEST RULE COMBINATIONS FOR EACH PERFORMANCE MEASURE

The discussion in this section is limited to the selection of the rule combinations considering their performance with respect to a single performance measure only. The results obtained are based on the ranking procedure and the statistical analysis.

11.1. Ranking Procedure

This analysis is performed by using a simple ranking procedure. The values of the rule combinations with respect to a specified performance measure are obtained and later ranked from the best toward the worst. This procedure is repeated for each demand pattern independently. The complete list of ranked rule combinations for each performance measure is given in Appendix B. Having determined the ranking for the four demand patterns, it is necessary to generalize the results and derive the set of rule combinations that consistently perform well with respect to a performance measure.

11.1.1. Number of Tardy Jobs

The rule combinations listed below gave minimum nT values in three out of four demand patterns (DP1, DP2, DP4):

42/37/43/38, 41/25, 26, 29, 30/44, 47/48/27/28

However, nT slightly increased when the third demand pattern was used, in the order the rule combinations are presented above. The "/" between the rule combinations represents a deterioration in the value of the PFM, whereas "," shows identical performance. The cell priority search technique outperformed the product priority type since all fourteen rule combinations recommended are of the cell priority type. The CL and PM rules are equally represented in the set of the selected rule combinations. The NFP rule performed poorly. The NCR rule gave slightly better results than the NFC rule.
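A minimal sketch of the ranking step, with made-up nT values; dense ranking (tied values share a rank and the next distinct value gets the next rank) is assumed here, since the text does not specify how ties are ranked.

```python
# Simple ranking procedure of Section 11.1, per demand pattern:
# order the PFM values (lower is better for tardiness measures) and
# assign a rank to each rule combination; ties share the same rank.
def rank_rules(values):
    """values: {rule_id: pfm_value} for one demand pattern."""
    ordered = sorted(set(values.values()))
    rank_of = {v: i + 1 for i, v in enumerate(ordered)}
    return {rule: rank_of[v] for rule, v in values.items()}

nT = {42: 3, 37: 3, 43: 4, 38: 5, 41: 5, 25: 6}   # hypothetical nT values
print(rank_rules(nT))   # {42: 1, 37: 1, 43: 2, 38: 3, 41: 3, 25: 4}
```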
11.1.2. Maximum Tardiness

The following rule combinations resulted in the minimum Tmax values in three out of four demand patterns, as in the previous case:

48/30/37/25, 43/42/26, 29/38, 41/44,47

Similarly, Tmax slightly increased when the third demand pattern was used, in the order the rule combinations are listed above. The performance of the CP search was superior to the PP search. Again, the NFP rule failed to produce satisfactory results. The NCR rule gave better results than the NFC rule. CL and PM performed equally well.

11.1.3. Total Tardiness

The results obtained for TT were similar to those for nT and Tmax, except that the order of the rule combinations slightly changed. The results are listed below:

42/37/43/38, 41/30/25/48/26, 29/44,47

The CP search outperformed the PP search and, similarly, the NFP rule proved to be a poor choice for this PFM as well. The NCR rule produced better results than the NFC rule.

11.1.4. Average Cell Utilization

The most difficult decision was the determination of rule combinations for CU. The four rule combinations listed below resulted in good performance in only three out of four demand patterns. This was the only PFM where the PP search produced acceptable results.
40/20,22,23

11.2. Statistical Analysis

To observe the variability of the set of outstanding rule combinations selected under certain variations of the first demand pattern, some replications of the experiment are conducted (a set of outstanding rule combinations consists of the top 24 rule combinations with respect to a specified PFM). A normal random component is added to the demand figures. The tables provided by Nelson [16] for the Analysis of Means (ANOM) procedure are used to determine the necessary sample size. Using the tables with α = .05, power = .95, Δ/σ = 3, and k = 24 treatments, the sample size needed is determined to be 7. As non-normality of the data is suspected, the Kruskal-Wallis test, the nonparametric counterpart of the one-way analysis of variance, is conducted. Conover [3] presented the Kruskal-Wallis test as an extension of the
Mann-Whitney test from two independent samples to k independent samples. The hypotheses tested are:

H0: All 24 rule combinations have identical means
H1: The 24 rule combinations do not all have identical means
Table 3
Kruskal-Wallis Test Results

Performance Measure      Kruskal-Wallis   Significance Level
Number of Tardy Jobs     155.80           0
Maximum Tardiness        163.90           0
Total Tardiness          161.93           0
The Kruskal-Wallis test shows that there is a significant difference between the mean responses of the rule combinations with respect to the performance measures nT, Tmax and TT, as shown in Table 3. Since the null hypothesis is rejected, the next issue is to find which rule combinations differ. A multiple comparison procedure called Fisher's least significant difference, based on the ranks rather than the data, is used to achieve this task for nT. The results show that the top fourteen rule combinations have equal performance, which is very consistent with the results of the ranking procedure.
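For reference, the Kruskal-Wallis H statistic can be computed in a few lines of Python; the sketch below handles ties by mid-ranks but omits the tie correction factor, and H is compared against a chi-square distribution with k - 1 degrees of freedom (here k would be the 24 rule combinations with 7 replications each).

```python
# Kruskal-Wallis H statistic over k independent samples (no tie
# correction): pool and rank all observations, then
# H = 12/(N(N+1)) * sum(R_g^2 / n_g) - 3(N+1).
def kruskal_wallis(groups):
    """groups: list of lists of observations; returns the H statistic."""
    pooled = sorted((v, g) for g, obs in enumerate(groups) for v in obs)
    n = len(pooled)
    ranks = [0.0] * len(groups)          # rank sum per group
    i = 0
    while i < n:                         # assign mid-ranks to tied values
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + 1 + j) / 2            # average of ranks i+1 .. j
        for _, g in pooled[i:j]:
            ranks[g] += avg
        i = j
    return 12 / (n * (n + 1)) * sum(r * r / len(obs)
                                    for r, obs in zip(ranks, groups)) - 3 * (n + 1)

print(round(kruskal_wallis([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), 2))   # 7.2
```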
12. THE BEST RULE COMBINATIONS CONSIDERING ALL PERFORMANCE MEASURES

In this section, the objective is to select the rule combinations that perform well with respect to multiple criteria. The use of elimination techniques provides a set of decision rules to help a decision maker eliminate one or more alternatives to narrow the set of choices and perhaps even lead to a decision, as mentioned by Canada and Sullivan [1]. The method is applicable when the alternatives can be measured or put in an ordinal rank.
12.1. Rule versus Rule: Comparison Across Performance Measures

This method is called a dominance check. If one rule is better than or equal to some other rule with respect to all performance measures (and better with respect to at least one PFM), the other rule is said to be dominated and can be eliminated. No rule combination dominated all the others with respect to all the performance measures in this study. However, the rule combinations that could not be eliminated when each demand pattern is used are given in Table 4.
Table 4
The Rule Combinations not Eliminated

DP   Rule Combinations
1    1, 6, 7, 9, 12, 13, 15, 18, 19, 21, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 40, 41, 42, 43, 44, 45, 47, 48
2    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48
3    30, 37, 38, 41, 42, 43
4    25, 26, 27, 28, 29, 30, 37, 38, 40, 41, 42, 43, 44, 46, 47, 48
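The dominance check reduces to a Pareto filter. In the sketch below the scores are hypothetical, and all four measures are expressed so that smaller is better (average cell utilization would be negated first).

```python
# Pareto dominance filter of Section 12.1: rule A dominates rule B if A
# is at least as good on every performance measure and strictly better
# on at least one.  All measures here are minimized.
def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(scores):
    """scores: {rule_id: (nT, Tmax, TT, -CU)}; returns surviving rules."""
    return {r for r in scores
            if not any(dominates(scores[o], scores[r])
                       for o in scores if o != r)}

scores = {42: (3, 10, 20, -0.9), 37: (3, 12, 20, -0.9), 5: (8, 30, 90, -0.7)}
print(sorted(non_dominated(scores)))   # [42]: 42 dominates both others
```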
The rule combinations 30, 37, 38, 41, 42 and 43 were not dominated by any rule combination in four out of four cases. Therefore, the use of those rules is strongly recommended. The CL rule gave better results than the PM rule. The NFP rule performed very poorly in this case as well. When product selection rules are considered, the NCR rule outperformed the NFC rule. All the recommended rules use the CP search priority.

12.2. Rule versus Rule: Comparison Considering Robustness

The rule combinations not eliminated in the previous section indicate that each rule combination was superior to another with respect to at least one PFM. In other words, a rule combination may not be eliminated even though its performance is satisfactory with respect to only one measure and very poor with respect to the others. In this section, the robustness of each rule combination is measured considering the lowest rank it got during the simple ranking procedure
120 Table 5 Results of the Robustness Analysis
Rule Combination 42 37 43 30 25 48 41 38 26 29 44 47
Highest Rank
Lowest Rank
1 1 1 1 1 1 1 1 1 1 1 1
6 6 6 6 6 7 7 7 8 8 9 9
Average Rank 1.93 2.00 2.12 2.12 2.31 2.25 2.31 2.31 2.56 2.56 2.87 2.87
Standard Deviation 1.73 1.63 1.63 1.78 1.95 2.32 2.02 2.02 2.39 2.39 2.82 2.82
with respect to all the performance measures and demand patterns. The best rule combinations and relevant statistics are summarized in Table 5. In this case, NCR did better than NFC. However, CL and PM performed equally well. Cell priority rules outperformed product priority rules once again.
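The robustness statistics reported in Table 5 can be computed as below. The rank list is hypothetical, and since the chapter does not state whether its standard deviations are population or sample values, the population form is assumed here:

```python
from statistics import mean, pstdev

def robustness(ranks):
    """Summarize the ranks one rule combination received across all
    performance measures and demand patterns (population std. dev. assumed)."""
    return {
        "highest": min(ranks),   # best (lowest-numbered) rank achieved
        "lowest": max(ranks),    # worst rank achieved
        "average": round(mean(ranks), 2),
        "stdev": round(pstdev(ranks), 2),
    }

# Hypothetical ranks for one rule combination over 4 PMs x 4 demand patterns.
ranks = [1, 1, 2, 6, 1, 3, 1, 1, 2, 1, 1, 1, 4, 2, 3, 2]
print(robustness(ranks))
```

A rule with a low "lowest" rank and a small standard deviation is robust: it performs consistently well rather than excelling on one measure and failing on the others.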
12.3. Rule versus Rule: Comparison Across Rules (Lexicography)

Each performance measure is ranked in ascending order if it is to be minimized and in descending order if it is to be maximized. For each performance measure, the rule combination which performs best is chosen. In case of a tie between two or more rules, the tie is broken by selecting the rule that performs better on the next performance measure, and so on, until a single rule emerges or all the performance measures have been checked. If the tie cannot be broken, the set of rule combinations remains as the better set. Since there are four performance measures included in this study, there are twenty-four possible permutations of ordering the performance measures (4! = 24). The analysis in this section is based on the assumption that the company is interested in minimizing nT, Tmax, TT and maximizing CU, in order of decreasing importance. The summary of the results of the lexicographic analysis is given in Table 6. The rule combination 42 was the only best common rule combination in four out of four demand patterns. The details of the analysis are given in Appendix C.
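This lexicographic selection with tie-breaking can be sketched as follows. All scores are hypothetical, and each measure is oriented so that smaller is better (the maximized measure, CU, is negated); the order reflects the preference nT, Tmax, TT, CU assumed in the text:

```python
def lexicographic_best(scores, order):
    """Keep the rules that are best on the most important measure, break
    ties on the next measure, and so on; if ties survive all measures,
    the remaining set is returned as the better set."""
    best = set(scores)
    for m in order:
        top = min(scores[r][m] for r in best)
        best = {r for r in best if scores[r][m] == top}
        if len(best) == 1:
            break
    return best

# Hypothetical scores: (nT, Tmax, TT, -CU), in decreasing importance.
scores = {
    42: (1, 1, 90, -0.80),
    37: (1, 2, 85, -0.80),
    43: (1, 1, 90, -0.70),
}
print(lexicographic_best(scores, order=[0, 1, 2, 3]))  # final tie broken by CU
```

In this example all three rules tie on nT, rule 37 is eliminated on Tmax, rules 42 and 43 tie again on TT, and CU finally singles out rule 42.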
Table 6
Results of Lexicographic Analysis

DP  Rule Combinations
1   1, 6, 7, 9, 12, 13, 15, 18, 19, 21, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 41, 42, 43, 44, 45, 47, 48
2   1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48
3   42/37/43/38, 41/30/25/26, 29/44, 47/48
4   1, 6, 7, 9, 12, 13, 18, 19, 21, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 35, 36, 37, 38, 39, 41, 42, 43, 44, 45, 47, 48
13. CONCLUSIONS

There was not a single rule combination that performed very well with respect to all the performance measures. When performance measures were considered independently, it was observed that nT, TT and Tmax behaved similarly. The CP search gave the best results for those performance measures. The CL rule and the PM rule gave much better results than the NFP rule. NCR did slightly better than NFC. PM and CL performed equally well. However, the results varied significantly when CU was considered. Both CP search and PP search produced good results. Interestingly, none of the rule combinations recommended for CU were included in the best rule combinations for nT, TT and Tmax. When all the performance measures are considered simultaneously, the results of the robustness analysis might be used. Twelve rule combinations gave very close results, of which six were never dominated in the experimentation carried out in this study, namely the rule combinations 30, 37, 38, 41, 42 and 43. The CP search proved to be superior to the PP search in this case as well. In general, choosing the best rule combination becomes more complicated if the performance measures do not have equal weight or if the user specifies a different preference among the performance measures. As mentioned before, experimentation has been performed under different demand patterns with many products and cells by using actual data. However, as in any other experimentation, it is still difficult to claim that the results obtained in this study are valid for other environments as well.
ACKNOWLEDGEMENT

The authors are indebted to the current and former personnel of Avon Puerto Rico Operations, and particularly to Fernando Fernandez, Serafin Masol, Joe Quiñones, Dennis Roman and Florentino Quiñones for their support of our research activities and implementation efforts. The authors also would like to thank Ayse Süer and graduate student David Miró for their assistance in preparing the paper.
Appendix A. Rule Combinations
No.  Rule Combination          No.  Rule Combination
1    PP/EDD/NFC/Min/CL/Min     25   CP/CL/Min/EDD/NFC/Min
2    PP/EDD/NFC/Min/CL/Max     26   CP/CL/Max/EDD/NFC/Min
3    PP/EDD/NFC/Min/NFP/Min    27   CP/NFP/Min/EDD/NFC/Min
4    PP/EDD/NFC/Min/NFP/Max    28   CP/NFP/Max/EDD/NFC/Min
5    PP/EDD/NFC/Min/PM/Min     29   CP/PM/Min/EDD/NFC/Min
6    PP/EDD/NFC/Min/PM/Max     30   CP/PM/Max/EDD/NFC/Min
7    PP/EDD/NFC/Max/CL/Min     31   CP/CL/Min/EDD/NFC/Max
8    PP/EDD/NFC/Max/CL/Max     32   CP/CL/Max/EDD/NFC/Max
9    PP/EDD/NFC/Max/NFP/Min    33   CP/NFP/Min/EDD/NFC/Max
10   PP/EDD/NFC/Max/NFP/Max    34   CP/NFP/Max/EDD/NFC/Max
11   PP/EDD/NFC/Max/PM/Min     35   CP/PM/Min/EDD/NFC/Max
12   PP/EDD/NFC/Max/PM/Max     36   CP/PM/Max/EDD/NFC/Max
13   PP/EDD/NCR/Min/CL/Min     37   CP/CL/Min/EDD/NCR/Min
14   PP/EDD/NCR/Min/CL/Max     38   CP/CL/Max/EDD/NCR/Min
15   PP/EDD/NCR/Min/NFP/Min    39   CP/NFP/Min/EDD/NCR/Min
16   PP/EDD/NCR/Min/NFP/Max    40   CP/NFP/Max/EDD/NCR/Min
17   PP/EDD/NCR/Min/PM/Min     41   CP/PM/Min/EDD/NCR/Min
18   PP/EDD/NCR/Min/PM/Max     42   CP/PM/Max/EDD/NCR/Min
19   PP/EDD/NCR/Max/CL/Min     43   CP/CL/Min/EDD/NCR/Max
20   PP/EDD/NCR/Max/CL/Max     44   CP/CL/Max/EDD/NCR/Max
21   PP/EDD/NCR/Max/NFP/Min    45   CP/NFP/Min/EDD/NCR/Max
22   PP/EDD/NCR/Max/NFP/Max    46   CP/NFP/Max/EDD/NCR/Max
23   PP/EDD/NCR/Max/PM/Min     47   CP/PM/Min/EDD/NCR/Max
24   PP/EDD/NCR/Max/PM/Max     48   CP/PM/Max/EDD/NCR/Max
Appendix B. Results of Ranking Procedure
PM: nT

DP  Rule Combinations (groups separated by "/" are ranked from best to worst)
1   1, 6, 7, 9, 12, 13, 15, 18, 19, 21, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 41, 42, 43, 44, 45, 47, 48/46/3/40/14, 16, 17/20, 22, 23/2, 4, 5/8, 10, 11/31, 32, 33, 34, 35, 36
2   1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48/31, 32, 33, 34, 35, 36
3   42/37/43/38, 41/25, 26, 29, 30/44, 47/48/31, 32, 33, 34, 35, 36/27/1, 6, 9, 15, 39/13, 18, 28/2, 3, 4, 5, 7, 12, 40/8, 10, 11, 14, 16, 17/19, 21, 24, 45, 46/20, 22, 23
4   25, 26, 27, 28, 29, 30, 37, 38, 41, 42, 43, 44, 47, 48/31, 32, 33, 34, 35, 36/6/18/15, 39/9/45/21/24/12/1/13/7/40/46/8, 10, 11/20, 22, 23/3/2, 4, 5/14, 16, 17
PM: TT

DP  Rule Combinations
1   1, 6, 7, 9, 12, 13, 15, 18, 19, 21, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 41, 42, 43, 44, 45, 47, 48/46/3/40/20, 22, 23/14, 16, 17/2, 4, 5/8, 10, 11/31, 32, 33, 34, 35, 36
2   1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48/31, 32, 33, 34, 35, 36
3   42/37/43/38, 41/30/25/48/26, 29/44, 47/31, 32, 33, 34, 35, 36/27/18/12/15, 39/13/40/6/1, 9/28/14, 16, 17/7/2, 3, 4, 5/8, 10, 11/24/19/46/21, 45/20, 22, 23
4   25, 26, 27, 28, 29, 30, 37, 38, 41, 42, 43, 44, 47, 48/31, 32, 33, 34, 35, 36/6/18/45/39/21/24/9, 15/12/13/1/19/7/40/46/8, 10, 11, 20, 22, 23/3/2, 4, 5/14, 16, 17
PM: Tmax

DP  Rule Combinations
1   1, 6, 7, 9, 12, 13, 15, 18, 19, 21, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 41, 42, 43, 44, 45, 47, 48/46/3/2, 4, 5, 14, 16, 17, 40/20, 22, 23/8, 10, 11/31, 32, 33, 34, 35, 36
2   1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48/31, 32, 33, 34, 35, 36
3   48/30/37/25, 43/42/26, 29/38, 41/44, 47/31, 32, 33, 34, 35, 36/1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 27, 28, 39, 40, 45, 46
4   25, 26, 27, 28, 29, 30, 37, 38, 41, 42, 43, 44, 47, 48/6/18/45/12, 24, 39/1, 19/7/13, 21/9, 15/31, 32, 33, 34, 35, 36/40/46/3/2, 4, 5, 14, 16, 17/8, 10, 11/20, 22, 23
PM: CUav

DP  Rule Combinations
1   40/20, 22, 23/14, 16, 17/2, 4, 5/8, 10, 11/1, 3, 6, 7, 9, 12, 13, 15, 18, 19, 21, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 41, 42, 43, 44, 45, 46, 47, 48/31, 32, 33, 34, 35, 36
2   1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48/31, 32, 33, 34, 35, 36
3   48/25, 26, 29, 30, 38, 41/44, 47/42, 43/37/31, 32, 33, 34, 35, 36/27/46/1, 6, 9, 15, 19, 21, 24, 39, 45/13, 18/28, 40/7, 12/2, 3, 4, 5/8, 10, 11, 14, 16, 17, 20, 22, 23
4   46/40/20, 22, 23/8, 10, 11, 25, 26, 27, 28, 29, 30, 37, 38, 41, 42, 43, 44, 47, 48/6, 9, 12, 13, 15, 18, 21, 24, 39, 45/1/19/7/14, 16, 17/3/2, 4, 5/31, 32, 33, 34, 35, 36
Appendix C. Results of Lexicographical Analysis

DP  Rule Combinations
1   1, 6, 7, 9, 12, 13, 15, 18, 19, 21, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 41, 42, 43, 44, 45, 47, 48/46/3/40/14, 16, 17/20, 22, 23/2, 4, 5/8, 10, 11/31, 32, 33, 34, 35, 36
2   1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48/31, 32, 33, 34, 35, 36
3   42/37/43/38, 41/30/25/26, 29/44, 47/48/31, 32, 33, 34, 35, 36/27/15, 39/6/1, 9/18/13/28/12/40/7/2, 3, 4, 5/14, 16, 17/8, 10, 11/24/19/46/21, 45/20, 22, 23
4   25, 26, 27, 28, 29, 30, 37, 38, 41, 42, 43, 44, 47, 48/31, 32, 33, 34, 35, 36/6/18/39/15/9/45/21/24/12/1/13/19/7/40/46/8, 10, 11/20, 22, 23/3/2, 4, 5/14, 16, 17
Planning, Design, and Analysis of Cellular Manufacturing Systems
A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors)
© 1995 Elsevier Science B.V. All rights reserved.
Cellular Manufacturing System Design: A Holistic Approach

L. L. Massay(a), C. O. Benjamin(b), and Y. Omurtag(b)

(a) Department of Industrial Engineering, North Carolina A&T State University, 1601 East Market Street, Greensboro, NC 27411, U.S.A.
(b) Department of Engineering Management, University of Missouri-Rolla, Rolla, MO 65401, U.S.A.
ABSTRACT

A systematic approach to the design of Cellular Manufacturing Systems is described. The design methodology, which incorporates several widely accepted design axioms, is divided into four phases: analysis, conceptual design, embodiment design, and detailed design. In the analysis phase, part-feature/process data are analyzed to identify part families. Abstract concepts, initially conceived in the conceptual design phase, are developed and refined in an iterative manner into concrete proposals in the embodiment design phase. The purpose of the detailed design phase is to finalize all specifications and dimensional details of the selected "best" design from the embodiment phase. Emphasis was centered on the embodiment design phase, for which a five-stage approach was developed. Consistently good results were obtained when the methodology was applied to three case scenarios representing varying levels of system complexity. The methodology is intended for use by manufacturing system designers and engineers.
1. INTRODUCTION

The manufacturing industry's fight for survival and the emergence of new manufacturing technologies, such as Flexible Manufacturing Cells and Systems and Computer-Integrated Manufacturing, are generating considerable interest in Cellular Manufacturing in the U.S.A. (Wemmerlov and Hyer 1987). To realize the potential benefits offered by the new computer-based technologies, there must be a corresponding investment in research into the design, evaluation, implementation, and management of manufacturing systems (Wu 1992). In this paper, a systematic methodology for the design of manufacturing cells and systems is proposed. The methodology utilizes a holistic system design approach that facilitates the evaluation of the total system being developed. The approach uses currently available tools and techniques and can be readily adopted by manufacturing system designers and engineers. An action/case study approach, incorporating three case scenarios, is used to evaluate the effectiveness of the proposed design methodology.
2. CMS DESIGN CONCEPTS

Cellular manufacturing (CM) occupies the middle ground between low-volume, high-variety, process-oriented manufacturing (job shop) and high-volume, low-variety, dedicated production flow lines (mass production). Effective design of a Cellular Manufacturing System (CMS) requires careful consideration of part and cell characteristics (Green and Cleary 1985), and structural and operational issues (Wemmerlov and Hyer 1987). Every decision made during the design process, whether related to structure or operation, can affect the cost and performance of a manufacturing system. Therefore, decisions related to structure and decisions related to operation cannot be evaluated independently during the system design process. Design considerations are summarized and grouped as part characteristics, cell characteristics, structural issues, and operational issues in Table 1.

Various techniques and frameworks that have been proposed to facilitate CMS planning and design are listed in Table 2. These techniques and frameworks are categorized as techniques for cell analysis, CMS layout methodologies, and planning/design approaches. Of particular relevance to the proposed design methodology are the practical, four-phased Systematic Layout Planning (SLP) methodology (Muther and Hales 1979), which is widely accepted by industry practitioners, and computer simulation modeling, a powerful tool for the analysis and design of manufacturing systems (Steudel 1991).

A distinction is often made between the concepts of logical design and physical design of production facilities (Nof 1982). According to Montreuil and Nof (1988), logical design "involves that part of the facility design that seeks to satisfy the requirement set in terms of the facility logic." Physical design is "a specific facility instantiation that satisfies the requirement set for a given logical design." The requirement set is a given set of requirements for the facility under design.
The details of the requirement set are usually uncontrollable by the designer and may include the product mix, demand for products, equipment types and capacity, and space and capital availability. The physical design often involves the layout of machines on a factory floor. The concepts of logical and physical design can reduce the complexity of the system design process and are adopted in this methodology. Black (1988) proposes an integrated view of manufacturing systems and states that: a manufacturing system should be an integrated whole, composed of integrated subsystems, each of which interacts with the whole system. The system will have a number of objectives and its operation must optimize the whole. Optimizing pieces of the system (i.e., the processes or the subsystems) does not optimize the whole system. Materials, information, workers, and energy are inputs to a set of machines where the materials are processed and gain value. Some inputs cannot be fully controlled and the effect of disturbances must be counteracted by manipulating the controllable inputs or the system itself. 3. PROPOSED CMS DESIGN METHODOLOGY Decision-making in the design of manufacturing cells and systems involves virtually every aspect of manufacturing from part design through process planning, methods, staffing, and production planning and scheduling. Table 3 summarizes the important factors to be considered when determining the type of manufacturing cell, the arrangement of equipment into cells, and the
arrangement of cells into systems.

Table 1
Design considerations in cell planning

Part Characteristics
Number of operations per part. Part routing. Number of different part types (a part type is defined as having a unique machine routing). Part mix (composition by part type of the parts in the system).
Cell Characteristics
Number of cells. Cell size (number of machine types per cell). Total number of machine types (number of machines with different capabilities). Cell composition (machine component of cell).
Structural Issues
Selection of part populations and grouping of parts into families. Selection of machine and process populations and grouping of these into cells. Selection of tools, fixtures, and pallets. Selection of material handling equipment. Choice of equipment layout.
Operational Issues
Formulation of maintenance policies. Formulation of inspection policies. Design of procedures for production planning, scheduling, and control. Design of jobs and formulation of job responsibilities for operators and support staff. Design of reporting mechanisms and reward systems. Outline of procedures for interfacing with the remaining manufacturing system (in terms of work flow and information, whether computer-controlled or not).
3.1. Methodological concepts

A methodology is "a set or system of methods, principles, and the rules used in a given discipline, as in the arts and sciences," and a method is "a procedure, technique, or planned way of doing something." Wilson (1984) defines methodology as "a structured set of guidelines which enable an analyst to derive ways of alleviating a problem (any expression of concern about a situation)." This author points out that a methodology needs to be more flexible than a method or technique, in terms of its structure and application, to be appropriate to a variety of real-world problems. Thus, a methodology represents a structured set of guidelines within which the analyst can adapt, in a coherent
way, the concepts being used. This enables him to remain problem oriented as the analysis progresses and the nature of the situation confronting him unfolds. Thus he stands more chance of producing results that will turn out to be appropriate for the particular unique situation of concern [Wilson (1984)].

Table 2
CMS planning/design techniques

A. Cell Analyses

Visual Examination
Holtz (1978), Ham et al. (1984)
Classification and Coding Systems
Opitz & Wiendahl (1971), Hyer & Wemmerlov (1985), Nolen (1991)
Production Flow Analysis (PFA)
Burbidge (1989)
Cluster Analyses
McAuley (1972), King (1980), King & Nakornchai (1982), Chan & Milner (1982), Chandrasekharan & Rajagopalan (1986)
Graph Theoretic Approaches
Rajagopalan & Batra (1975), Faber & Carter (1986)
Other Methods
Choobineh (1988), Co & Araar (1988), Sundaram & Lian (1990), Logendran (1991), Khosravi-Kamrani (1991)
B. Layout Methodologies

Systematic Layout Planning (SLP)
Muther and Hales (1979)
Spaceplan
Lee (1988)
FactoryPLAN
Cimtechnologies Corporation (1992)
C. Planning/Design Approaches General frameworks
Dumolien & Santen (1983), Kinney & McGinnis (1987)
Rough cut analysis
Suri (1984)
Cell Design Aid
Brown (1986)
Simulation approaches
Steudel (1991)
Neural network approaches
Chryssolouris et al. (1990)
Rule-based approaches
Kochar & Pegler (1991), Mellichamp & Wahab (1987), Mellichamp, Kwon, and Wahab (1990)
Nance and Arthur (1988) suggest that "a methodology, in contrast with a method, is a collection of complementary methods and a set of rules for applying them." A methodology provides guidance on the order (phases, increments, prototypes, validation tasks, etc.) in which a project's major tasks should be carried out. Two roles are proposed: (1) conceptual guidance in understanding the developmental task, and (2) a practical guide.

Table 3
Important factors considered in the proposed CMS design methodology

Factor
Considerations
Product(s)
Part size, shape, weight, and other physical attributes (influence the size and type of material handling).
Quantity
Volume of each part or part-family (influences the number of machines in a cell, the cost of operating the cell, and the justifiable investment to organize and equip the cells).
Routing
Process operations, their sequence, and the process machinery required (determines the work flow).
Supporting services
Activities that are required to back up the processing operations.
Time
Timing considerations (work hours, seasonality, etc.)
Personnel requirements
Motion and ergonomic factors (impact in-cell material flow).
Space requirements
Space for equipment, material, personnel, and aisles.
Operational issues
Maintenance policies. Quality assurance and inspection policies. Production planning and control procedures. Job responsibilities for operators and support staff. Procedures for interfacing with the remaining manufacturing system (in terms of work flow and information, whether computer-controlled or not).
3.2. Design Axioms and Concepts

The design axioms on which the methodology is based provide a foundation of understanding and an appreciation for both the conceptual and practical roles of the methodology. In either case the axioms form the nucleus of the methodology. From the axioms are derived the directions and procedural guidance (the statements of the right rules of conduct for proper and efficient task accomplishment) that enable achievement of the targeted objectives. Thus, the axioms are the foundation for the methodology's second role, a practical guide. The proposed methodology incorporates the six design axioms listed in Table 4, and these are the fundamental principles on which the methodology was developed.
Table 4
Design axioms on which the methodology is based

Design Axiom
Procedural Guidance Derived
Top-down system definition followed by bottom-up system specification
Overall system conceptualization is followed by subsystems design and specification.
Simplification followed by integration
Maintained by dividing the system into subsystems; by clearly identifying input, process, and output elements of each subsystem; by determining appropriate controls for monitoring achievement against predetermined standards; by integrating subsystems in a later design-integrating step.
Alternative generation
More than one solution is sought for each subsystem. By combining sub-solutions, a number of alternative solutions for the system can be produced.
Iterative refinement and progressive elaboration
Stepwise refinement and development are essential.
Holistic evaluation
System designs are evaluated as whole systems.
Concurrent design documentation
Documentation and specification are inseparable.
Blanchard and Fabrycky (1981) describe design as a process that follows a set of stated requirements and evolves through conceptual design, preliminary system design, and detail design. Hundal (1987) describes the design process as consisting of apparently discrete steps, each with an outcome. VDI Guideline 2221, a design guideline published by the VDI (Society of German Engineers), and Pahl and Beitz (1988) split the design process into four main phases: task clarification and definition; conceptual design; preliminary or embodiment design; and final or detailed design. Takeda, Tomiyama, and Yoshikawa (1990) suggest that design is an iterative process in which the designer modifies and adds detail to the design object using past design experience, engineering know-how, and/or established procedures. In the initial stages of the design process the descriptions and specifications of the design object may be uncertain, incomplete, and conflicting. However, as the iterative design process is performed, the uncertainty and incompleteness are progressively removed and a set of solutions emerges. Babcock (1991) says that design is essentially a process of creating a model, described by drawings and specifications, of a system that would meet some previously identified purpose. The design process leads from the abstract (requirements) to the concrete (design documents).

3.3. Overview of Methodology

The methodology addresses the three fundamental aspects of manufacturing systems: transformational, procedural, and structural (Hitomi 1979). The transformational aspect considers
a manufacturing system as a conversion process by which raw materials are converted into finished products. Thus the production process is based on material flow and processing, and decisions on manufacturing process technologies are required. The design aim is to maximize productivity and efficiency.

The procedural aspect considers a manufacturing system as the management of the production process. The manufacturing system plans and implements the productive activities to convert raw materials into products to meet production objectives, and controls this process according to the degree of deviation of actual performance from the plan. Decisions for the information flow are required such that smooth material flows are supported.

The structural aspect considers a manufacturing system as a unified assemblage of workers, process equipment, materials-handling equipment, and supporting devices that forms a static spatial structure or layout of a plant. The layout of the plant influences the effectiveness of the transformation process and therefore an optimum design of the plant layout is sought.

Also, a distinction is made between system logic and system structure, and the concepts of logical design and physical design are utilized. The facility logic implies the logic of material flow and dynamic flow control. The topology and configuration of the production machines and material handling devices are referred to as the physical facility structure. The four-phased engineering design approach (Pahl and Beitz 1988) is adopted and the methodology is divided into four phases: Analysis, Conceptual design, Embodiment design, and Detailed design. Figure 1 is an illustration of these four proposed phases.
3.4. Analysis phase

In the analysis phase, part-feature/process data are analyzed to identify part families or combinations of families that represent opportunities for exploiting CM. The techniques listed in Table 2 may be used. The identified part families, production quantities, process plans, and production schedules define the required capabilities and capacities for the cell. These requirements, rather than specific equipment types that may be used, provide the input to the conceptual design phase.
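As a minimal sketch of this part-family identification step, one of the cluster-analysis techniques listed in Table 2, King's (1980) Rank Order Clustering, is shown below; the 4x4 part-machine incidence matrix is illustrative only:

```python
def rank_order_clustering(matrix):
    """Rank Order Clustering (King 1980): repeatedly sort the rows
    (machines) and columns (parts) of a binary incidence matrix by the
    binary value of their 0/1 patterns until the ordering is stable,
    which gathers machine groups and part families into blocks."""
    rows, cols = list(range(len(matrix))), list(range(len(matrix[0])))
    while True:
        # Reading a row's bits in the current column order gives its binary
        # weight; lexicographic comparison of bit lists matches binary order.
        new_rows = sorted(rows, key=lambda r: [matrix[r][c] for c in cols], reverse=True)
        new_cols = sorted(cols, key=lambda c: [matrix[r][c] for r in new_rows], reverse=True)
        if new_rows == rows and new_cols == cols:
            return rows, cols
        rows, cols = new_rows, new_cols

# Illustrative matrix: machines 0 and 2 process parts 0 and 2, while
# machines 1 and 3 process parts 1 and 3 -- two obvious candidate cells.
m = [[1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1]]
print(rank_order_clustering(m))
```

Reordering the matrix by the returned row and column indices places each machine group next to its part family along the diagonal, suggesting the cells to form.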
3.5. Conceptual design phase
In the conceptual design phase, "what" is to be designed is decided upon and conceptual modeling is necessary. The operations and sequence of operations required are visualized and a flow process chart is developed. The architecture of the system is defined so that the project team gets an overview of the operations, users get an opportunity to participate in a contributory manner, and the formal decomposition of all functions provides the framework for all of the subsequent systems development work.

[Figure 1. Four phases of the proposed CMS design methodology: I. Analysis, II. Conceptual Design, III. Embodiment Design, IV. Detailed Design.]
3.6. Embodiment design phase

The focus of the proposed design methodology is on decision-making during the embodiment design phase. In embodiment design, an abstract concept is developed into a more concrete proposal, represented by layout drawings. The embodiment design phase includes both cell design and system design. The conceptual system is divided into cells, solutions are sought for each cell, the cells are reintegrated and adjusted, and then the whole system is evaluated. This phase comprises five stages: Logical Cell Design, Physical Cell Design, Physical System Design, Logical System Integration, and Overall System Evaluation. Figure 2 is an outline of the proposed approach. As the design evolves through each stage of the design process, a system design review is conducted to ensure that the design is correct at that point prior to proceeding with the next stage.

In the Logical Cell Design (LCD) stage of the embodiment design phase, production machines along with material handling and other support equipment, which are capable of producing the part families in the quantities and quality required, are identified. Then, alternative logical designs for each cell are developed, with appropriate consideration of production and inventory control, maintenance, personnel, and computer integration strategies, from the process plans and the identified equipment. Simulation models are constructed and evaluated, and the best logical design for each cell is selected. The logical cell designs are then linked to determine inbound/outbound buffer storage requirements. The part families, production quantities, part-machine routings, in-cell buffer capacities, material handling plan, and personnel requirements are passed as input to the physical cell design stage.

In the Physical Cell Design (PCD) stage, alternative machine and equipment layouts for each cell are developed using layout methods and CAD systems.
Personnel work space, part staging areas, and in-cell buffer space are determined. Alternative layouts for each logical cell design are developed and evaluated, and the best physical cell layout for each logical cell design is chosen. Part families, inter-cell quantities, inter-cell routings, required machine services, and the list of cells and space requirements for each cell are provided as input to the physical system design stage.

In the Physical System Design (PSD) stage, the objective is to minimize inter-cell material movement. System layout is now the dominant factor and therefore layout design precedes the logical system design. The individual cell space requirements are aggregated and compared with the total space available. Space improvements are sought by integrating the separately created cells into a single interdependent design. Inter-cell material handling plans are developed and alternative physical system layouts are developed. For this stage, layout methodologies (SLP or FactoryPLAN) may be used.

In the Logical System Integration (LSI) stage, the logical cell models are integrated with inter-cell material transport to develop logical material flow models for each alternative physical system design. These simulation models are used to determine the best inter-cell material transport system, to determine the best flow logic for the system, and to obtain estimates of total system performance.

The final stage of the embodiment phase is the Overall System Evaluation (OSE) stage. The alternative physical system designs together with their respective logical flow systems constitute the alternative embodiment designs. These designs are evaluated against corporate performance criteria and the best is selected for detailed design and implementation. This stage includes the following activities: 1) defining evaluation criteria, 2) evaluating alternative systems, 3) selecting
(Figure 2 is a flow diagram showing the inputs and outputs of each of the five embodiment stages, including part families, quantities, processes, machine capabilities, production and inventory control strategies, buffer capacities, material handling plans, staffing, space constraints, and evaluation criteria, together with the design iteration loops between stages.)

Figure 2. Five stages of the embodiment design phase.
the "best" system, and 4) documenting the rejected alternatives. For the establishment of evaluation criteria, three major groups of factors are to be considered. The primary group of performance measures should be externally oriented (cost to customer, lead time, delivery reliability, and flexibility) and should be linked to corporate strategic objectives. The second group should be the traditional internally oriented efficiencies (materials, labor, delivery, and scrap costs, equipment and labor utilization, and investment indices such as payback, NPV, IRR, ROI). The third set of factors should be other considerations that may be relevant. Appropriate weights should be allocated to each factor based on the importance accorded the specific factor.

3.7. Detailed design phase

The purpose of the detailed design phase is to finalize all specifications and dimensional details of the selected "best" design from the OSE stage of the embodiment phase. The output from the activities in this phase will be design documentation consisting of detailed design drawings and specifications.

3.8. Techniques and Tools

In the proposed CMS design methodology, simulation modeling may be used during the logical cell and logical system design stages. Simulation modeling may be used first at the logical cell design stage. In this stage, efficient material flow is usually one of the main design objectives, and simulation is an excellent tool for the analysis and design of the in-cell material flow. Procedural or management considerations such as maintenance policies or inspection policies can be included in the model and the cell performance evaluated as a whole. The results of the simulation model are then used as input information to the physical cell design process. In the final stage, simulation models in which the simulation models of the individual logical cell designs are integrated can be developed.
These integrated models of the total system can be used to determine the inter-cell material transport system and the material flow logic for the total system, and to obtain performance estimates of the total system. Thus, the complete system design is evaluated as a whole integrated system.

3.9. Important Features and Benefits

Some important specific features of the proposed design methodology are:
• By abstraction of the application requirements, the design problem is reduced to general, solution-neutral terms.
• The problem is broken into subproblems and more than one solution is sought for each part.
• By combining sub-solutions, a number of solutions (variants) for the problem can be produced.
• Emphasis is placed on selecting the best physical processes upon which the design is based.
• At each stage, a number of alternatives are generated from which a choice is made after an evaluation procedure.

Some anticipated benefits that may be derived from the use of the methodological design process are:
• Easy mastery of complex manufacturing cells and systems.
• Shortened total system development times and reduced systems design cost.
• High probability of finding good solutions.
• Enhanced creativity.
• Less training required for young engineers.
• Improved information flow within a company and between companies.

4. ILLUSTRATIVE CASE STUDIES

An action/case study research approach (Checkland 1981) was used for evaluating and improving the proposed cell/system design methodology. This approach involves spiral cycles of interaction between the formulation of the methodology and its testing/evaluation through application to real-life cases. The apply/evaluate/refine cycle required the identification of multiple case scenarios from academia and industry, the collection of relevant data, the application of the proposed cell/system design methodology, and the evaluation of its performance (Meredith, Raturi, Amoako-Gyampah, and Kaplan 1989). Additional perspective can be obtained through the natural and existential research methodologies, since engineering design is not usually the subject of theoretical formulations (Dixon 1987). The cases included in this study represent the range of problems and levels of complexity typically encountered by designers of manufacturing systems. Three cases were selected and are classified according to (1) the area of application (academia, quasi-industry, and industry), and (2) the level of system complexity (low, medium, and high). Figure 3 shows the case classifications. Case profiles are provided in Table 5. Figure 4 shows the three cases superimposed on the action research cycle and illustrates the outward spiral of increasing difficulty in the real-world cases to which the methodology was applied. The application of the methodology confirmed its usefulness as an aid to the manufacturing system design process. Several design concepts emerged from the use of the methodology and these had a favorable impact on the system design process.
SIMFACTORY, a simulator, was used to develop simulation models at the Logical Cell Design stage to develop efficient in-cell material flows. At this stage an assessment of gross operating characteristics was required, and the simulation results gave a fairly realistic view of cell operation and provided good estimates of performance. Procedural and management considerations were included in the models and, by linking the cells, estimates of buffer capacities were determined.

Figure 3. Classification of cases included in study. (The figure classifies the three cases, the UMR FMS, the DemMaTec FCIM, and the Marathon Electric plant, by application area (academia, quasi-industry, industry) and by system complexity (low to high).)

In the Physical Cell Design stage, a CAD system was used for drawing and arranging the machines and equipment within the cells. Workplace layout design principles were incorporated to produce effective workplace designs. Not only was space allocated to machines and equipment, but space was also provided for operators, the material to be worked on, and the work completed. However, at this stage the cell designs did not make efficient use of the total available space.

In the Physical System Design stage, the cells were arranged and integrated to form complete systems within the total space available. This design-integrating step improved the effectiveness of space utilization and established inter-cell flow patterns. Some of the available layout planning methodologies were effectively used in this step.

In the Logical System Integration stage, individual cells were linked by an inter-cell material transport system and detailed simulation models were proposed. These integrated models of the total system can be used to determine inter-cell material flow logic and to obtain detailed performance estimates of the total system. However, in two of the cases the systems did not require very complex inter-cell material transport, control, and distribution logic. In these two cases, the Logical System Integration stage was unnecessary and was omitted. One case, the Marathon Electric project, involved a fairly complex inter-cell material transport system. The Logical System Integration stage was applied, and the results obtained showed that what appeared to be a sound Physical System Design was in fact below the performance targets set for system throughput.

Figure 4. Action research/case study learning cycles. (The figure shows the action research spiral: develop the theoretical framework and methodology, apply the methodology in the UMR FMS case, then in the DemMaTec FCIM case, then at Marathon Electric, learning from each use of the methodology and proposing hypotheses.)
The University of Missouri-Rolla Flexible Manufacturing System (UMR FMS) and the DemMaTec Flexible Computer-Integrated Manufacturing (FCIM) projects were of lower complexity (two and four cells respectively) and the methodology was quite easily applied. Its application to the complex Marathon Electric reorganization project (60 cells) was much more difficult and time consuming. However, in this case the benefits of using the methodology were
most clearly evident. A performance deficiency was discovered in the Physical System Design developed. These results were used to stimulate the search for setup reduction by process and methods engineers in a genuine Concurrent Engineering effort. In relation to the design axioms upon which the methodology is based, the design process started with an overall system conceptualization. This was followed by dividing the conceptual system into cells, and alternative solutions were sought for each of the cells. The individual cells were then arranged and integrated to form complete systems which were then evaluated as whole systems. The approach stimulated the generation of good design alternatives and facilitated their progressive refinement. Finally, the methodology institutionalizes a concurrent documentation approach instead of the traditional develop-then-document approach. In this way, the design documentation is generated as a by-product of the design process.

Table 5
Case profiles for methodology evaluation

Case Parameters           Case 1             Case 2                 Case 3
Case Title                UMR FMS            DemMatec FCIM          Marathon Electric
                                             Facility
Number of cells           2                  4                      60
Floor area (sq. ft)       900                4,500                  67,200
Product(s)                Toys and           Metal machining        Electric motors
                          souvenirs          services
In-cell Material          Robot              Manual/Automatic       Manual/Hoist
Handling                                     pallet changer         assisted
Inter-cell Material       Conveyor           Manual cart/           Conveyor/Manual
Handling                                     Forklift               cart/Forklift
Design team               1-person           1-person               4-person team
System review             UMR CIM Lab        Center for             Marathon Electric
                          Committee          Technology Transfer    staff
Duration                  One semester       Jun-Oct, 1991          Mar-Dec, 1992
Design effort (man hrs)   80                 160                    960
System acceptance         UMR CIM Lab        Board of Directors,    Senior engineering
                          Committee          DemMatec               managers of
                                             Foundation, Inc.       Marathon Electric
The key benefit of the methodology lies in its systematic structuring of the activities that need to be considered at certain stages of the system development process. Using the methodology
ensured that all design issues were attended to at the right time, and no important issues were ignored. The development process was not constrained to follow a lock-step sequence of tasks or procedures, but the attention of project participants was focused on issues that otherwise may not have received adequate attention.

5. CONCLUSION

The holistic design methodology developed in this paper addresses all three of Hitomi's fundamental aspects of manufacturing systems and provides a structured, systematic framework for the design of Cellular Manufacturing Systems. Six design axioms were identified and incorporated into the methodology. The system design process was structured in a four-phase framework of analysis, conceptual design, embodiment design, and detailed design. A five-stage approach was developed to facilitate design activity in the embodiment phase. The five stages of this approach were logical cell design, physical cell design, physical system design, logical system integration, and overall system evaluation. The approach was successfully applied in three cases from academia and industry, and was found to be applicable to the design of new cells and systems as well as to the improvement of existing ones. This research effort was limited to the design of the physical production system (parts, machine tools, handling equipment). Further research is in progress to extend the methodology's capabilities to incorporate the formal information system (computer support, customer orders, work orders) and the human system (people, working environment, informal information system). Also of interest would be an examination of the hypotheses that the methodology would be applicable to a much larger cross section of industry types and would be quite effective in the design of large-scale Cellular Manufacturing Systems.
Although it has been shown to be applicable to a range of situations typically encountered by manufacturing systems designers, further testing and evaluation of the methodology would be beneficial. Extending the Overall System Evaluation stage to include multi-criteria decision support techniques such as the Analytic Hierarchy Process (AHP) may enhance system acceptance by management. This technique permits the direct participation of decision makers in the evaluation process, so they can better understand how results are produced and should thus find it easier to accept those results.

REFERENCES
Babcock, D.L., 1991, Managing Engineering and Technology: An Introduction to Management for Engineers, Englewood Cliffs, NJ: Prentice-Hall.
Black, J.T., 1988, The Design of Manufacturing Cells (Step One to Integrated Manufacturing Systems), Proceedings of Manufacturing International '88, 143-157.
Blanchard, B.S. and Fabrycky, W.J., 1981, Systems Engineering and Analysis, Englewood Cliffs, NJ: Prentice-Hall.
Brown, M.C., 1986, The Cell Design Aid: An Automated Tool for Designing Group Technology Cells, Capabilities of Group Technology, Dearborn, MI: Society of Manufacturing Engineers.
Burbidge, J.L., 1989, Production Flow Analysis for Planning Group Technology, Oxford: Oxford University Press.
Chan, H.M. and Milner, D.A., 1982, Direct Clustering Algorithm for Group Formation in Cellular Manufacture, Journal of Manufacturing Systems, 1 (1), 65-74.
Chandrasekharan, M.P. and Rajagopalan, R., 1986, MODROC: An Extension of Rank Order Clustering for Group Technology, International Journal of Production Research, 24 (5), 1221-1233.
Checkland, P.B., 1981, Systems Thinking, Systems Practice, New York, NY: Wiley.
Choobineh, F., 1988, A Framework for the Design of Cellular Manufacturing Systems, International Journal of Production Research, 26 (7), 1161-1172.
Chryssolouris, G., Lee, M., and Pierce, J., 1990, Use of Neural Networks for the Design of Manufacturing Systems, Manufacturing Review, 3 (3), 187-194.
Cimtechnologies Corporation, 1992, FactoryPLAN Release 3.0: Tutorial and Reference Manual, Ames, IA: Cimtechnologies.
Co, H.C. and Araar, A., 1988, Configuring Cellular Manufacturing Systems, International Journal of Production Research, 26 (9), 1511-1522.
Dixon, J.R., 1987, On Research Methodology Towards a Scientific Theory of Engineering Design, Artificial Intelligence in Engineering Design, Analysis and Manufacturing, 1 (3), 145-157.
Dumolien, W.J. and Santen, W.P., 1983, Cellular Manufacturing Becomes Philosophy of Management at Components Facility, Industrial Engineering, 15 (11), 72-76.
Faber, Z. and Carter, M.W., 1986, A New Graph Theory Approach for Forming Machine Cells in Cellular Production Systems, Flexible Manufacturing Systems: Methods and Studies, New York, NY: Elsevier.
Green, T.J. and Cleary, C.M., 1985, Is Cellular Manufacturing Right for You? Proceedings of the 1985 Annual International Engineering Conference, IIE, 181-190.
Ham, I., Hitomi, K., and Yoshida, T., 1984, Group Technology, Kluwer-Nijhoff.
Hitomi, K., 1979, Manufacturing Systems Engineering, Bristol, PA: Taylor and Francis.
Holtz, R.D., 1978, GT and CAPP Cut Work-in-Process Time 80%, Assembly Engineering, June 1978, 24-27.
Hundal, M.S., Research in Design Theory and Methodology in West Germany, Design Theory and Methodology - DTM '90, 235-238.
Hyer, N.L. and Wemmerlöv, U., 1985, Group Technology Oriented Coding Systems: Structures, Applications, and Implementation, Production and Inventory Management, 67-84.
Khosravi-Kamrani, A., 1991, A Methodology for Forming Machine Cells in Computer Integrated Manufacturing Environments Using Group Technology, Unpublished Doctoral Dissertation, University of Louisville, KY.
King, J.R., 1980, Machine-Component Grouping in Production Flow Analysis: An Approach Using a Rank Order Clustering Algorithm, International Journal of Production Research, 20 (2).
King, J.R. and Nakornchai, V., 1982, Machine-Component Group Formation in Group Technology: Review and Extension, International Journal of Production Research, 20 (2), 117-133.
Kinney, H.D. and McGinnis, L.F., 1987, Design and Control of Manufacturing Cells, Industrial Engineering, 19 (10), 28-38.
Kochar, A.K. and Pegler, H., 1991, A Rule-Based Systems Approach to the Design of Manufacturing Cells, Annals of the CIRP, 40 (1), 139-142.
Lee, Q., Computer Aided Plant Layout with Real Time Material Flow Evaluation, The MTM Journal of Methods-Time Measurement, 14, 33-38.
Logendran, R., 1991, Impact of Sequence of Operations and Layout of Cells in Cellular Manufacturing, International Journal of Production Research, 29 (2), 375-390.
McAuley, J., 1972, Machine Grouping for Efficient Production, The Production Engineer, 21 (2), 53-57.
Mellichamp, J.M. and Wahab, A.F.A., 1987, An Expert System for FMS Design, Simulation, 48 (5), 201-208.
Mellichamp, J.M., Kwon, O., and Ahmed, F.A., 1990, FMS Designer: An Expert System for Flexible Manufacturing System Design, International Journal of Production Research, 28 (11), 2013-2024.
Meredith, J.R., Raturi, A., Amoako-Gyampah, K., and Kaplan, B., 1989, Alternative Research Paradigms in Operations, Journal of Operations Management, 8 (4), 297-326.
Montreuil, B. and Nof, S.Y., 1988, Approaches for Logical vs. Physical Design of Intelligent Production Facilities, Manufacturing Research and Technology 6: Recent Developments in Production Research, New York, NY: Elsevier.
Muther, R. and Hales, L., 1979, Systematic Planning of Industrial Facilities, Vols. I and II, Kansas City, MO: Management & Industrial Research Publications.
Nance, R.E. and Arthur, J.A., 1988, The Methodology Roles in the Realization of a Model Development Environment, Proceedings of the 1988 Winter Simulation Conference, 220-225.
Nof, S.Y., 1982, On the Structure and Logic of Typical Material Flow Systems, International Journal of Production Research, 20 (5), 575-590.
Nolen, J., 1991, Course Notes: Effective Manufacturing Cells, SME Special Course, St. Paul, MN.
Opitz, H. and Wiendahl, H.P., 1971, Group Technology and Manufacturing Systems for Small and Medium Quantity Production, Capabilities of Group Technology, Dearborn, MI: Society of Manufacturing Engineers, 85-100.
Pahl, G. and Beitz, W., 1988, Engineering Design: A Systematic Approach, New York, NY: Springer-Verlag.
Rajagopalan, R. and Batra, J.L., 1975, Design of Cellular Production Systems: A Graph-Theoretic Approach, International Journal of Production Research, 13 (6).
Steudel, H.J., 1991, The Role and Design of Workcells for World-Class Manufacturing, The Journal of Applied Manufacturing Systems, Winter 1991, 47-55.
Sundaram, R.M. and Lian, W., 1990, An Approach for Designing Cellular Manufacturing Systems, Manufacturing Review, 3 (2), 91-97.
Suri, R. and Hildebrant, R.R., 1984, Modelling Flexible Manufacturing Systems Using Mean-Value Analysis, Journal of Manufacturing Systems, 3 (1), 27-38.
Takeda, H., Tomiyama, T., and Yoshikawa, H., Logical Formalization of Design Processes for Intelligent CAD Systems, Intelligent CAD II, New York, NY: Elsevier.
Wemmerlöv, U. and Hyer, N.L., 1987, Research Issues in Cellular Manufacturing, International Journal of Production Research, 25 (3), 413-431.
Wilson, B., 1984, Systems: Concepts, Methodologies, and Applications, New York, NY: Wiley.
Wu, B., 1992, Manufacturing Systems Design and Analysis, New York, NY: Van Nostrand Reinhold.
Yin, R.K., 1989, Case Study Research: Design and Methods, Newbury Park, CA: Sage Publications.
PART TWO
PERFORMANCE MEASURE AND ANALYSIS
Planning, Design, and Analysis of Cellular Manufacturing Systems
A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors)
© 1995 Elsevier Science B.V. All rights reserved.
Measuring cellular manufacturing performance

D.F. Rogers^a and S.M. Shafer^b
^a Department of Quantitative Analysis and Operations Management, College of Business Administration, University of Cincinnati, 531 Carl H. Lindner Hall, Cincinnati, Ohio 45221-0130, USA*

^b Department of Management, College of Business, Auburn University, 415 West Magnolia, Auburn, Alabama 36849-5241, USA

A large amount of research has focused on developing new cell formation procedures. Only recently, however, has research focused on comparing and evaluating alternative cell formation procedures. Perhaps the most significant hurdle associated with conducting studies for comparing and evaluating cell formation procedures is the absence of meaningful performance measures. A performance measure is considered meaningful when it is related to one or more of the design objectives associated with cellular manufacturing. In this paper several design objectives associated with cellular manufacturing are identified. Then, based upon these design objectives, appropriate performance measures are discussed and compared. Also included is a review and critique of performance measures used in previous studies for comparing cell formation procedures.

1. Introduction
It is widely acknowledged that the environment confronting manufacturers is becoming increasingly competitive as markets become more globalized. As a result, producers of goods are under constant and intense pressure to quickly and continuously improve their operations. Areas often targeted for improvement are productivity, quality, and responsiveness. In recent decades, Cellular Manufacturing (CM) has emerged as a promising approach for improving operations in batch and job shop environments, particularly in situations for which the divisions of the production processes are distinct. For CM, parts with similar processing requirements are identified and grouped together
*Dr. Rogers acknowledges Dr. J.E. Aronson and the Department of Management, College of Business Administration, University of Georgia for the support provided during his sabbatical leave from the University of Cincinnati.
to form part families. The equipment requirements for the part families are simultaneously or subsequently determined and usually located together in close proximity. Thus, equipment is located together based upon the processing requirements of part families in CM and thereby differs from traditional layouts where similarly functioning equipment is often located together. Groups of mostly dissimilar machine types that are utilized for CM and located together are referred to as machine cells. The use of CM allows for an increase of local decision making in the factory and this also often results in quality and productivity improvements. There is a wealth of empirical evidence supporting the potential superiority of a cellular layout versus the more traditional functional layout. Schonberger [1] stated that moving the machines into cells was a basic step in the transformation of the General Electric Company's Louisville, Kentucky dishwasher plant into a world class manufacturing showcase. He noted that large numbers of manufacturers are utilizing CM in an effort to become world class competitors. There is also evidence to indicate that CM is not always the appropriate approach to take for certain scenarios. Results from the simulation models of Flynn and Jacobs [2 and 3] indicated that a well-organized job shop can outperform a CM configuration with respect to several criteria such as work-in-process inventory levels and average flow times. In these simulations the machine-component matrices were quite dense and forming distinct separations of the production processes to accommodate CM was not possible. Morris and Tersine [4] further revealed that an ideal environment for a cellular layout is characterized with a high ratio of setup to process time, stable demand, unidirectional work flow within a cell, and a considerable level of material movement times between process departments. 
In response to the considerable interest in CM, there has been much recent research devoted to the development of cell formation procedures, i.e., techniques that facilitate the identification of part families and machine cells. Wemmerlöv and Hyer [5] summarized research issues in CM that are still relevant today. Chu [6] provided a summary of general cluster analysis techniques and algorithms for forming manufacturing cells. Rogers, et al. [7] cast clustering into a larger framework for aggregation and disaggregation techniques. Shafer [8] identified over 80 contributions found in the literature associated with the development and comparison of cell formation procedures. Unfortunately, the vast majority of these contributions focused on developing new cell formation procedures. Very little research to date has focused on comparing these techniques. As a consequence, there is little or no guidance available concerning the appropriateness and/or usefulness of cell formation techniques.

It may be argued that only profit, revenue, or cost-based measures, i.e., value-based measures that reflect real improvement to investors, should actually be employed for judging the desirability of CM. This is ultimately true for almost any significant change in a firm, especially changes in production functions. Askin and Chiu [9] recognized this and incorporated an objective function of cost minimization for their linear integer programming problem to form machine cells and part families. The cost components of this objective function were 1) machine overhead, the cost of placing a particular number of a machine type in a cell, 2) group overhead, a fixed cost for using a particular grouping, 3) family tooling costs, and 4) intergroup
material handling costs. However, it is often extremely difficult to accurately assess actual profits, revenues, or costs of a manufacturing system prior to actual implementation of the particular CM configuration selected. How, for example, may one reasonably assess intergroup material handling costs prior to knowing the cell configurations and relative locations? Choobineh [10] also formulated a linear integer programming problem with an objective function of minimizing the total average annual cost of producing all part families in all cells and the cost of providing the appropriate number of machines for each cell. These costs may often be exceedingly difficult to determine. In reality it may take months or even several years before the actual impacts of converting to a CM system may be assessed. It is also often extremely difficult to ascertain the costs of lost flexibility due to employing CM rather than a job shop, to assess the sociological implications for the factory employees as well as their management (see Huber and Hyer [11]), and to appropriately measure the quality increase in the end product that often accompanies CM. Because value-based methods are not easily implementable for determining part family and/or machine cell configurations, we must typically judge the quality of a CM solution with measures that are not based upon value but nonetheless are quite good surrogates for value-based objectives. These surrogate measurements for CM evaluation should ideally coincide with the basic objectives of CM, which were originally developed to coincide with value-based objectives. Therefore, the emphasis in this article will be to use measures for judging various CM configurations that appropriately match the objectives by which these cells are to be formed. An objective in this paper is to offer a framework for comparing alternate cell formation procedures.
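The four-component cost structure just described can be made concrete with a small sketch. All figures below are invented purely for illustration; only the decomposition into machine overhead, group overhead, family tooling, and intergroup material handling costs follows the Askin and Chiu description, and the function and variable names are hypothetical.

```python
# Hypothetical illustration of a cost-based objective of the kind described
# above: total cost = machine overhead + group overhead + family tooling
# + intergroup material handling. All figures are invented.

def total_cost(machine_overhead, group_overhead, tooling_cost,
               intergroup_moves, cost_per_move):
    """Sum the four cost components for one candidate cell configuration."""
    return (sum(machine_overhead)            # machines placed in cells
            + sum(group_overhead)            # fixed cost per grouping used
            + sum(tooling_cost)              # tooling cost per part family
            + intergroup_moves * cost_per_move)  # inter-cell handling

# Two hypothetical configurations: the second uses one fewer cell but
# forces many more intergroup moves.
config_a = total_cost([5000, 7000, 4000], [1000, 1000, 1000],
                      [800, 600], intergroup_moves=20, cost_per_move=15)
config_b = total_cost([5000, 7000, 4000], [1000, 1000],
                      [800, 600], intergroup_moves=120, cost_per_move=15)
```

Comparing the two hypothetical configurations shows how a layout with fewer cells can still lose on total cost once intergroup handling is priced in; the practical difficulty, as noted above, is obtaining such cost figures before implementation.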
In the next section a review of studies for comparing various cell formation procedures is provided. In the following section is a closer examination of the performance measures employed in these studies and other performance measures that may prove to be interesting for future studies. Before useful comparison studies can be conducted, meaningful performance measures must be developed and a comparison of current performance measures is presented. Subsequently, a framework for comparing alternative cell formation solutions is offered. Finally, we provide a summary and discuss avenues for future research.
2. Literature Review of Cell Formation Technique Comparisons

In this section studies that have focused on comparing alternate cell formation procedures will be reviewed. Many of these studies involve simple row and column manipulation and/or hierarchical and nonhierarchical statistical clustering of the rows and columns of the machine-component matrix X, where x_ij = 1 if machine type i, i = 1,...,M, is required for production of component j, j = 1,...,N, and x_ij = 0 otherwise. In Table 1 is a summary of the cell formation solution quality performance measures that are included in these studies and other various gauges of performance that
Table 1. Summary of cell formation solution quality performance measures.

Performance measure: Simple Matching; Generalized Matching; Product Moment Correlation Coefficient
  Reference: Anderberg [22]; Klastorin [34]; Anderberg [22]
  Used in: Mosier [12]
  Comments: Solution quality is defined in terms of how similar it is to the original matrix. Not directly related to goals of CM. Does not consider part sequencing or volumes. Relatively easy to calculate.

Performance measure: Avg. & Maximum WIP; Avg. & Max. Flow Time; Part Travel Distances; Extra-Cellular Operations; Longest Average Queue
  Reference: Shafer & Meredith [14]
  Used in: Shafer & Meredith [14]
  Comments: Directly related to many CM goals. Generally not computationally efficient (requires simulation). Does consider part sequencing and part volumes.

Performance measure: Weighted Intercellular Transfers
  Reference: Mosier [13]
  Used in: Mosier [12]
  Comments: Surrogate for intercellular transfers. Does not consider part sequencing but does consider part volumes. Relatively easy to calculate.

Performance measure: Total Bond Energy; Clustering Measure
  Reference: McCormick, Schweitzer & White [24]; Miltenburg & Zhang [27]
  Used in: Chu & Tsai [23]; Shafer & Rogers [31]; Miltenburg & Zhang [27]
  Comments: Not directly related to goals of CM. Does not consider part sequencing or part volumes. Relatively easy to calculate. Insensitive to family & cell definition.

Performance measure: Proportion Exceptional Elements; Machine Utilization
  Reference: Chu & Tsai [23]; Chandrasekharan & Rajagopalan [26]
  Used in: Chu & Tsai [23]; Shafer & Rogers [31]
  Comments: Surrogate measures related to CM goals. Does not consider part sequencing or volumes. Relatively easy to calculate.

Performance measure: Grouping Efficiency
  Reference: Chandrasekharan & Rajagopalan [26]
  Used in: Chu & Tsai [23]
  Comments: Must select an arbitrary weight. Surrogate measure related to CM goals. Weak discriminating power. Does not consider part sequencing or volumes. Relatively easy to calculate.

Performance measure: Global Efficiency; Group Efficiency; Group Technology Efficiency
  Reference: Harhalakis, Nagi & Proth [36]
  Used in: (not yet used in a comparison study)
  Comments: Surrogate measures related to CM goals. Consider part sequencing but not volumes. Relatively easy to calculate.

Performance measure: Grouping Efficacy
  Reference: Kumar & Chandrasekharan [32]
  Used in: Shafer & Rogers [31]
  Comments: Better discriminating power than grouping efficiency. No need to select an arbitrary weight. Does not consider part sequencing or volumes.

Performance measure: Grouping Measure
  Reference: Miltenburg & Zhang [27]
  Used in: Miltenburg & Zhang [27]
  Comments: Surrogate measure related to CM goals. Relatively easy to calculate.
have been developed. Mosier [12] used a mixture model experimental approach and compared seven similarity coefficients and four statistically-based hierarchical clustering procedures. The degree of cluster definition, i.e., the ratio of the non-zero density outside the clusters to the non-zero density inside the clusters, and the density of the non-zero entries within the clusters of the machine-component matrix were additional factors included in the study. The performance measures used were a simple and a generalized matching measure, a product moment correlation coefficient measure, and an intercellular transfer measure (Mosier [13]). Statistics were obtained for the results of the solved problems, and the Jaccard similarity coefficient showed promise for these particular problems in spite of being one of the simplest similarity coefficients to calculate. None of the hierarchical clustering procedures appeared to uniformly dominate any of the others in the study. Shafer and Meredith [14] utilized computer simulation to compare cell formation procedures from all three categories of the taxonomy for cell formation procedures proposed by Ballakur and Steudel [15]: 1) part grouping, 2) machine grouping, and 3) simultaneous machine-part grouping. The clustering algorithms employed were the rank order clustering algorithm (King [16]), the direct clustering algorithm (Chan and Milner [17] and Wemmerlöv [18]), the cluster identification algorithm (Kusiak and Chow [19 and 20]), a technique for considering the sequence of operations developed by Vakharia and Wemmerlöv [21], and both single and average linkage hierarchical clustering routines (Anderberg [22]). These algorithms were applied to data collected from three companies. Computer simulation models were then developed based upon the solutions generated with the cell formation procedures.
The performance measures included in this study were average and maximum work-in-process levels, average and maximum flow times, part travel distances, extra-cellular operations, and the longest average queue. It was found that using clustering algorithms to first form part families and then assigning machines to cells to accommodate the families provided more flexibility and worked best for these scenarios. Chu and Tsai [23] compared the rank order clustering algorithm, the direct clustering algorithm, and the bond energy analysis algorithm (McCormick, Schweitzer, and White [24] and Lenstra [25]) using the following performance measures: 1) total bond energy, 2) proportion exceptional elements, i.e., the proportion of the part types that must be transferred to other cells, 3) machine utilization, and 4) grouping efficiency (Chandrasekharan and Rajagopalan [26]). Eleven data sets from the literature were used for their study, and the bond energy algorithm performed best for these problems. However, most of these data sets were extremely small example problems, and too few of the remaining problems were large enough to support conclusions about the dominance of any one particular method. Miltenburg and Zhang [27] have performed one of the more extensive studies comparing CM algorithms on 0/1 machine-part matrices. They tested nine different algorithmic approaches that employed different combinations of the rank order clustering algorithm, the modified rank order clustering algorithm (Chandrasekharan and Rajagopalan [28]), single and average linkage clustering, the bond energy algorithm, and an ideal seed non-hierarchical clustering algorithm
(Chandrasekharan and Rajagopalan [26]). The objectives of their study were to form cells such that there is a high usage of machines by parts in each cell (a high cell density) and to form cells that do not tend to allow exceptional elements, i.e., parts that leave the cell and travel to another cell. The primary performance measure that they used to evaluate the clustering algorithms attempted to combine the influence of both of these objectives and may actually be attributed to Stanfel [29]. Two secondary performance measures employed were a measure of closeness of nonzero elements to the diagonal and average bond energy. Results were obtained for 544 solved problems, and no significant differences were detected among most of the algorithms. The ideal seed non-hierarchical clustering algorithm showed the most promise for these particular randomly generated problems and performance measures. However, this approach imposes a significant computational burden for larger problems as compared to the other algorithms. Shafer and Rogers [30 and 31] investigated 16 measures of similarity or distance and four hierarchical clustering procedures that may be used for CM problems. The same performance measures as used by Chu and Tsai [23] were employed, except that the grouping efficiency criterion was replaced by the improved grouping efficacy criterion developed by Kumar and Chandrasekharan [32]. A total of 704 solutions were derived, and it was found that little difference typically existed among the results from different similarity coefficients. Single linkage clustering was, in general, found to be statistically inferior to average linkage, complete linkage, and Ward's clustering methods for the bond energy, machine utilization, and grouping efficacy criteria. However, single linkage clustering was statistically superior when considering the proportion of exceptional elements.

3. Comparison of Performance Measures
A fundamental criterion for evaluating performance measures is how closely they are related to the basic objectives associated with adopting CM. Shafer and Rogers [33] identified the following four key design objectives associated with CM: 1) setup time reduction, 2) producing parts cell complete, i.e., minimizing intercellular transfers, 3) minimizing investment in new equipment, and 4) maintaining acceptable machine utilization levels. Other important objectives associated with the adoption of CM include improving product quality, reducing inventory levels, and shortening lead times. In the remainder of this section the performance measures introduced in the previous section and summarized in Table 1 will be further discussed and compared. Mosier [12] employed four performance measures: a simple matching measure, a generalized matching measure, a product moment correlation coefficient measure, and an intercellular transfer measure. The first three performance measures (Anderberg [22] and Klastorin [34]) gauge how well the clustered solution matches the original machine-component matrix and thus do not allow for the possibility that the clustered solution obtained may be in some sense better than the configuration of the original matrix. The original configuration may not even be a good choice. Although these three measures are relatively easy to calculate, they
are not related to the design goals associated with adopting CM, and part volumes and operation sequences are not considered by them. The last measure employed, the intercellular transfer measure, is the volume-weighted number of parts that require processing in more than one cell. With this measure one may consider the possibility that the solutions generated by the cell formation procedures were better than the original randomly generated machine-component matrices. This measure does indirectly address the CM design objective of minimizing intercellular transfers. However, because operation sequences are not considered, it is only a surrogate measure for the actual number of intercellular transfers. All seven performance measures used by Shafer and Meredith [14] (average and maximum work-in-process levels, average and maximum flow times, part travel distances, extra-cellular operations, and the longest average queue) require the development of computer simulation models. Deriving these measures is computationally more complex, but they are all directly related to CM design objectives and consider both part volumes and operation sequences. Four performance measures were employed by Chu and Tsai [23] and Shafer and Rogers [31]. The first, Total Bond Energy (TBE), was proposed by McCormick, Schweitzer, and White [24] as the performance measure for an algorithmic approach that identifies a permutation of the rows and columns such that the sum of the products of adjacent elements is maximized. This algorithm reveals a block diagonal matrix if one exists, but its performance is unpredictable otherwise. TBE is defined as:

TBE = \sum_{i=1}^{M} \sum_{j=1}^{N} x_{ij} ( x_{i,j+1} + x_{i,j-1} + x_{i+1,j} + x_{i-1,j} ) / 2    (1)
where x_{i,0} = x_{0,j} = x_{M+1,j} = x_{i,N+1} = 0. However, TBE is not directly related to the design goals associated with CM. The other three measures, proportion of exceptional elements, machine utilization, and grouping efficiency (grouping efficacy in Shafer and Rogers [31]), are surrogate measures for the CM design objectives of minimizing intercellular transfers and maintaining acceptable machine utilization levels. Proportion Exceptional Elements (PE) is the ratio of the number of nonzero elements not in the block diagonals (machine-component cells) of the final configuration of the clustered machine-cell matrix to the total number of nonzero elements in X:

PE = \sum_{i=1}^{M} \sum_{j=1}^{N} e_{ij} / \sum_{i=1}^{M} \sum_{j=1}^{N} x_{ij}    (2)
where e_{ij} is set to one if element x_{ij} in the rearranged X is an exceptional element, and zero otherwise. Machine Utilization (MU), also known as the density of the machine-part cells, is the ratio of the number of nonzero elements in the machine-cell clusters
to the total number of elements in these cells. Neither of these measures evaluates the actual achievement of the CM objectives, largely because operation sequences, part volumes, and operation times are not considered. Chandrasekharan and Rajagopalan [26] developed the Grouping Efficiency (GE) measure, a weighted average defined as:

GE = q MU + (1-q) ODV    (3)
where the Off-Diagonal Voids (ODV) is the ratio of the number of zero elements in the off-diagonal blocks of X to the total number of elements in the off-diagonal blocks, and q is a selected weighting on [0,1] designed to reflect the relative importance the analyst desires to place on MU versus ODV. GE always takes on values on [0,1] provided that there is more than one machine-cell cluster: GE = 1 for a perfect block diagonal form and GE = 0 for the diametric case. Freedom to select the weighting q allows the analyst to decide the relative importance of intercellular moves versus voids in the cells. Ng [35] scrutinized GE as a performance measure. He noted that nonzero elements inside a diagonal block correspond to work processed in a machine cell, with smaller material handling and setup costs. Alternatively, exceptional elements correspond to work that must be processed outside of the cells and thus carry larger material handling and setup costs. For a typical example with q = .5 it was shown that the rate of change of GE with respect to a nonzero entry inside a diagonal block was three times that of an exceptional element, which is contrary to their relative costs. By adjusting the weighting to q = .2 the same rate of change would decrease from three to 3/4, a more reasonable value but perhaps still too large in practice to appropriately reflect the relative costs. It was also revealed that for a large and sparse X, even with q = .1, this rate of change can still be much larger than 1.0. Furthermore, it was shown that any matrix X with two completely decomposable diagonal blocks will have a grouping efficiency of at least (1-q) + q(MU). If q is made small to overcome the above-mentioned problems, then this minimum value for GE will always be quite large. Smaller values of q should probably be used if calculating GE, but the value of its usage is questionable.
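Ng's critique is easy to reproduce numerically. The sketch below (Python with NumPy; the function and argument names are our own, not from the sources) computes GE for a 0/1 machine-component matrix under a given assignment of machines to cells and parts to families. Note how a fully decomposable but half-empty pair of cells still scores GE = 0.95 at q = .1, illustrating the high floor described above.

```python
import numpy as np

def grouping_efficiency(X, machine_cell, part_family, q=0.5):
    """Grouping efficiency GE = q*MU + (1-q)*ODV, eq. (3).

    X            -- 0/1 machine-component matrix (rows = machines, cols = parts)
    machine_cell -- machine_cell[i] = cell index assigned to machine i
    part_family  -- part_family[j]  = family index assigned to part j
    """
    X = np.asarray(X)
    inside = np.equal.outer(machine_cell, part_family)  # diagonal-block positions
    mu = X[inside].sum() / inside.sum()                 # block density (MU)
    off = ~inside
    odv = (off.sum() - X[off].sum()) / off.sum()        # share of zeros off-diagonal
    return q * mu + (1 - q) * odv

# Perfect block diagonal form: GE = 1
ge_perfect = grouping_efficiency(
    [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]],
    [0, 0, 1, 1], [0, 0, 1, 1])

# Decomposable but half-empty cells at q = .1: GE = .1*0.5 + .9*1 = 0.95
ge_sparse = grouping_efficiency(
    [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
    [0, 0, 1, 1], [0, 0, 1, 1], q=0.1)
```

The second call makes the point concrete: each cell is only half used (MU = 0.5), yet GE stays near its (1-q) + q·MU floor because ODV = 1 for any decomposable assignment.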
Kumar and Chandrasekharan [32] also noted that the use of GE as a measure of performance has several shortcomings. They revealed that even quite poor solutions could yield a GE of .75, so the efficiency function GE has weak discriminating power. Also, the requirement that a weight be selected, possibly quite arbitrarily, to determine the relative importance of MU and ODV may be a difficulty. They proposed the Grouping eFficacy (GF) measure to overcome these shortcomings while retaining the desirable properties associated with the use of GE:

GF = (1 - PE) / (1 + DV)    (4)

where the Diagonal Voids (DV) is the ratio of the number of zero elements in the block diagonals of the final configuration of the clustered machine-cell matrix to the total number of nonzero elements in X. GF also takes on values on [0,1].
Ng [35] noticed with an example that the value of GF may be larger for a five-cluster solution than for an apparently better four-cluster solution to the same problem. He proposed a modification to remedy this problem with GF, called the Weighted Grouping eFficacy (WGF), which is derived by placing a weight of q on each entry inside the diagonal blocks and a weight of 1-q on exceptional elements:

WGF = r(1 - PE) / [PE + r(1 + DV - PE)]    (5)
where r = q/(1-q) and q is a weighting just as for GE. If q = .5 then r = 1 and WGF = GF; thus WGF is a generalization of GF. WGF likewise takes values on [0,1]. Miltenburg and Zhang [27] utilized one primary and two secondary performance measures. Their primary performance measure, called the Grouping Measure (GM), is also an attempt to overcome some similar supposed weaknesses of the GE measure and is given by:

GM = MU - DV    (6)

The value of GM is bounded by negative and positive unity, i.e., -1 <= GM <= 1. One of their secondary measures, the Clustering Measure (CLM), gauges the closeness of the nonzero elements to the diagonal:

CLM = \sum_{i=1}^{M} \sum_{j=1}^{N} x_{ij} [ d_h(x_{ij})^2 + d_v(x_{ij})^2 ] / \sum_{i=1}^{M} \sum_{j=1}^{N} x_{ij}    (7)
where d_h(x_{ij}) is the horizontal distance between a nonzero element and the diagonal and d_v(x_{ij}) is the vertical distance between a nonzero element and the diagonal. CLM is an overall indicator of the closeness of the nonzero elements to the diagonal, regardless of whether the nonzero elements are in a diagonal block or not. It is the average squared Euclidean distance of a nonzero element from the diagonal. CLM is not directly related to any of the CM design objectives, but it may be a good gauge of cohesiveness in some circumstances, just as TBE may be. Harhalakis, Nagi, and Proth [36] proposed the following three performance measures, which have not yet been used in a study to compare cell formation procedures: 1) global efficiency - the ratio of the number of operations performed within the primary cells to the total number of operations; 2) group efficiency - calculated by subtracting from one the ratio of the actual number of external cells visited to the maximum number of cells that could be visited; and 3) group technology efficiency - calculated by subtracting from one the ratio of the actual number of intercellular moves required to the maximum number of intercellular moves possible.
These measures do consider operation sequences but do not use any part volume data. Compared to performing computer simulations, they are relatively easy to calculate. Numerous other authors have implemented performance measures implicitly through the objective functions of the mathematical models that they have developed, for which there are many possibilities. A representative group of this work includes Stanfel [37, 38, and 39], who formed a linear integer programming problem with the objective of minimizing the total number of intercell transitions made by part groups. Kusiak [40] and Gunasingh and Lashkari [41] both proposed the maximization of total similarity for their linear integer programming formulations, a quite reasonable approach, especially if the similarities were developed with volume and/or routing data. Gunasingh and Lashkari [41] additionally used another objective function to minimize the total fixed cost of all machines in all cells less the savings in intercell movement costs that resulted from parts belonging to a particular machine cell. This latter savings appears to be quite difficult to estimate, and the resulting performance measure is directly related to the CM objective of minimizing investment in new equipment but only slightly related to the objective of producing parts cell complete. Wei and Gaither [42] suggested several potential objective functions for their linear integer programming problems, including the minimization of intercell capacity imbalances, intercell shipments, and capacity underutilization. Shafer and Rogers [33] utilized goal programming to minimize deviations from several goals, which included 1) minimizing the amount by which setup times exceed zero, 2) maximizing or minimizing capacity utilization in each cell, 3) maximizing or minimizing the funds available to purchase new equipment, and 4) minimizing the number of intercellular moves.
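The block-diagonal measures discussed above are all simple functions of the same few counts. The following sketch (Python with NumPy; function and argument names are our own, not from the sources) derives PE, MU, and DV from a 0/1 machine-component matrix and its cell/family assignment, and then GF (eq. 4) and GM (eq. 6) from them.

```python
import numpy as np

def block_diagonal_measures(X, machine_cell, part_family):
    """Compute PE, MU, DV, GF (eq. 4) and GM (eq. 6) for a 0/1
    machine-component matrix X under a given cell/family assignment."""
    X = np.asarray(X)
    inside = np.equal.outer(machine_cell, part_family)  # diagonal-block positions
    ones = X.sum()                           # all nonzero elements of X
    ones_in = X[inside].sum()                # nonzero elements inside the blocks
    pe = (ones - ones_in) / ones             # proportion exceptional elements
    mu = ones_in / inside.sum()              # machine utilization (block density)
    dv = (inside.sum() - ones_in) / ones     # diagonal voids
    gf = (1 - pe) / (1 + dv)                 # grouping efficacy
    gm = mu - dv                             # grouping measure
    return pe, mu, dv, gf, gm

# Small illustrative case: two cells, one void inside the first block.
pe, mu, dv, gf, gm = block_diagonal_measures(
    [[1, 1, 0], [1, 0, 0], [0, 0, 1]], [0, 0, 1], [0, 0, 1])
# pe = 0.0, mu = 0.8, dv = 0.25, gf = 0.8, gm = 0.55
```

The example shows the interplay discussed in the text: with no exceptional elements PE is zero, while the single void inside a block simultaneously lowers MU, raises DV, and pulls GF and GM below one.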
In Table 2 the performance measures listed in Table 1 are categorized based upon the information they require for their calculation, namely whether or not they require part volume data and/or operation sequence data.

Table 2. Categorization of performance measures.

Part volumes and sequencing not considered:
  Simple Matching; Generalized Matching; Product Moment Correlation Coefficient; Total Bond Energy; Proportion Exceptional Elements; Machine Utilization; Grouping Efficiency; Grouping Efficacy; Grouping Measure; Clustering Measure

Part volumes considered:
  Weighted Intercellular Transfers

Part sequencing considered:
  Global Efficiency; Group Efficiency; Group Technology Efficiency

Both part volumes and sequencing considered:
  Average WIP; Maximum WIP; Average Flow Time; Maximum Flow Time; Part Travel Distances; Extra-Cellular Operations; Longest Average Queue
The example machine-component matrices shown in Figure 1 were developed to help illustrate the limitations of some of the performance measures listed in the first column of Table 2. The simple matching, generalized matching, and product moment correlation coefficient measures were not included in this comparison because only the final configuration of the machine-component matrix was developed, and the value of calculating them is questionable unless the analyst has prior knowledge of an ideal configuration. In addition, the GE measure was not included because of its weak discriminating power and the other problems associated with its use. Rather, the dominant measures of GF, WGF, and GM are considered.

[Figure 1 depicts five example machine-component matrices, A through E, with shaded regions marking the machine cells and part families; the matrix entries are not recoverable from this text.]
Performance Measure                     Matrix A  Matrix B  Matrix C  Matrix D  Matrix E
Machine Utilization (MU)                  1.00      0.56      0.85      0.77      1.00
Proportion Exceptional Elements (PE)      0.64      0.00      0.21      0.29      0.07
Total Bond Energy (TBE)                  12.00     12.00     12.00     18.00     18.00
Grouping Efficacy (GF)                    0.36      0.56      0.69      0.59      0.93
Diagonal Voids (DV)                       0.00      0.79      0.14      0.21      0.00
Grouping Measure (GM)                     1.00     -0.23      0.71      0.56      1.00
Figure 1. Example machine-component matrices and performance measures.

The machine-component Matrices A-C in Figure 1 all have the same configuration; however, the part families and machine cells are defined differently, as illustrated by the shaded regions. In Matrix A five small machine cells and part families have been defined, in Matrix B one large machine cell and part family has been defined, and in Matrix C two machine cells and part families have been defined. As illustrated in Figure 1 for Matrices A-C, MU is often maximized by creating a larger number of small cells, while PE may be minimized by creating a smaller number of larger cells. Also, note that the TBE measure is indifferent among the cell assignments shown in Matrices A-C. In fact, TBE is only sensitive to the configuration of the entire machine-component matrix and is indifferent to the actual
assignment of part types to families and machine types to cells. This would also be true for CLM. The GF measure is greatest for the more reasonable assignment shown in Matrix C, whereas GM is greatest, and at its upper bound, for Matrix A, which is probably not a good configuration for CM. Note that GM will always equal 1.0 whenever the cells consist of all nonzero elements, regardless of the number of exceptional elements. Matrices D and E were developed to illustrate another weakness associated with the TBE measure. Both machine-component Matrices D and E have the same number of rows, columns, and nonzero entries. However, while machine-component Matrix E has more clearly defined clusters, TBE is the same for both matrices. It is interesting to note that the other five performance measures all appear to indicate that the layout in Matrix E is superior. The results of using different values of q for calculating WGF are listed in Table 3. Note that WGF is constant for Matrix B over all selected values of q. This is because WGF reduces to WGF = 1/(1+DV) whenever PE = 0, and thus WGF is insensitive to q. A similar property is not found when DV = 0, as seen in the results for Matrices A and E. For the case of DV = 0, WGF = r(1-PE)/[PE + r(1-PE)], which depends upon r = q/(1-q) and thus remains sensitive to different values of q. Note that for the four matrices with PE > 0, the ranking with respect to WGF is consistent over all values of q, and thus consistent decisions regarding the relative performance of different cell formation procedures may be made.
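Because WGF depends on the matrix only through PE and DV, its behavior across q can be checked directly. A small sketch (Python; the helper name is our own), using two of the PE/DV pairs reported for Figure 1 — note that since those reported values are rounded, recomputed WGF values may differ from the tabulated ones in the last digit:

```python
def wgf(pe, dv, q):
    """Weighted grouping efficacy, eq. (5): WGF = r(1-PE)/[PE + r(1+DV-PE)],
    with r = q/(1-q)."""
    r = q / (1 - q)
    return r * (1 - pe) / (pe + r * (1 + dv - pe))

# With PE = 0 the weight q drops out: WGF = 1/(1+DV) for every q
# (Matrix B in Figure 1 has PE = 0, DV = 0.79).
constant_b = all(abs(wgf(0.0, 0.79, q) - 1 / 1.79) < 1e-9
                 for q in (0.5, 0.4, 0.3, 0.2, 0.1))

# At q = .5 (so r = 1), WGF reduces to the unweighted GF = (1-PE)/(1+DV)
# (checked with Matrix C's reported PE = 0.21, DV = 0.14).
reduces_to_gf = abs(wgf(0.21, 0.14, 0.5) - (1 - 0.21) / (1 + 0.14)) < 1e-9
```

Both properties stated in the text fall out algebraically: setting PE = 0 cancels r from numerator and denominator, and setting r = 1 collapses the denominator to 1 + DV.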
Table 3. Weighted grouping efficacy for the matrices from Figure 1 for various values of q.

 q    Matrix A  Matrix B  Matrix C  Matrix D  Matrix E
.5      0.36      0.56      0.69      0.59      0.93
.4      0.27      0.56      0.63      0.52      0.90
.3      0.19      0.56      0.56      0.45      0.85
.2      0.12      0.56      0.45      0.33      0.78
.1      0.06      0.56      0.28      0.20      0.60
In summary, of the performance measures that do not consider operation sequences and part volumes, the family of WGF measures often appears to be one of the better choices for capturing the features of both PE and DV. The value of q can be set to reflect managerial preferences for the relative costs of DV and PE, but the relative ranking of different configurations with respect to WGF will usually not be altered. The simple matching, generalized matching, and product moment correlation coefficient measures are not related to any of the design objectives associated with CM and only consider how closely the clustered machine-component matrix matches the original randomly generated machine-component matrix. TBE and CLM only consider the final configuration of the machine-component matrix and do not consider how part families and machine cells are defined. In addition, TBE may be considerably insensitive to the degree of cluster definition in the rearranged machine-component matrix. The PE performance measure may have a bias toward a small number of large cells, while the MU measure may have a bias toward a large number of small cells.

4. A Framework for Comparing Cell Formation Solutions
In this section a framework for comparing alternative cell formation procedures is suggested. A basic prerequisite for a performance measure employed to compare alternative cell formation procedures is that it be directly related to specific CM design objectives. The design objectives included in this framework are the following: 1) minimize intercellular transfers, 2) minimize machine setup time, 3) minimize the investment in new equipment, 4) maintain acceptable machine utilization levels, 5) improve quality, 6) reduce work-in-process levels, 7) reduce production lead times, 8) maintain on-time delivery performance, and 9) increase job satisfaction. Also, static and dynamic performance measures are distinguished in the framework. Static performance measures are analytical or formula based. Dynamic performance measures require computer simulation models and analysis. Finally, performance measures used to specifically assess the achievement of the design goals (i.e., direct performance measures) and measures that approximate or indirectly assess the achievement of the design goals (i.e., surrogate performance measures) are also distinguished in the framework. In the remainder of this section static, dynamic, direct, and surrogate performance measures for each of the CM design objectives will be discussed. The CM design objective most frequently assessed is the number of intercellular transfers required. Usually it is measured indirectly as the proportion of elements in the final rearranged machine-component matrix that are exceptional elements, as defined with PE. Mosier [12] modified PE by including part volume data and derived the Volume Weighted Proportion of Exceptional Elements (VWPE):

VWPE = \sum_{i=1}^{N} \sum_{k=1}^{M} V_i e_{ik} / \sum_{i=1}^{N} \sum_{k=1}^{M} V_i x_{ik}    (8)
where V_i is the annual demand for part i. Likewise, a surrogate measure for assessing the proportion of operations that require intercellular transfers can be developed based upon operation sequence information. The Operations Sequence Proportion of Exceptional Elements (OSPE) is:

OSPE = \sum_{i=1}^{N} \sum_{j=1}^{O_i - 1} \alpha_{ij} / \sum_{i=1}^{N} (O_i - 1)    (9)
where \alpha_{ij} is set to one if operation j and operation j+1 on part i are performed in different cells, and zero otherwise, and O_i is the total number of operations required by part type i. Finally, a measure incorporating both operation sequence and part volume data to assess the actual proportion of operations requiring intercellular transfers may be developed. The Actual Proportion of Intercellular Operations (APIO) is:
APIO = \sum_{i=1}^{N} \sum_{j=1}^{O_i - 1} V_i \alpha_{ij} / \sum_{i=1}^{N} V_i O_i    (10)
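Given part routings, volumes, and a machine-to-cell assignment, OSPE (eq. 9) and APIO (eq. 10) can be computed directly from the operation sequences. A sketch (Python; the two-part data set at the bottom is a made-up mini example, not the Figure 2 data):

```python
def ospe_apio(routings, volumes, cell_of):
    """OSPE (eq. 9) and APIO (eq. 10) from operation sequences.

    routings -- part -> ordered list of machines visited
    volumes  -- part -> annual demand V_i
    cell_of  -- machine -> cell index
    """
    ospe_num = ospe_den = apio_num = apio_den = 0
    for part, route in routings.items():
        v = volumes[part]
        # alpha_ij = 1 when consecutive operations fall in different cells
        transfers = sum(cell_of[a] != cell_of[b] for a, b in zip(route, route[1:]))
        ospe_num += transfers
        ospe_den += len(route) - 1        # O_i - 1 transitions for part i
        apio_num += v * transfers
        apio_den += v * len(route)        # V_i * O_i operations for part i
    return ospe_num / ospe_den, apio_num / apio_den

# Hypothetical two-part example: p2 crosses a cell boundary once.
routings = {"p1": ["m1", "m2"], "p2": ["m1", "m3"]}
volumes = {"p1": 10, "p2": 30}
cell_of = {"m1": 0, "m2": 0, "m3": 1}
ospe, apio = ospe_apio(routings, volumes, cell_of)   # 0.5, 0.375
```

The mini example already shows why the two measures can disagree: OSPE treats both parts equally (one boundary crossing out of two transitions), while APIO weights the crossing by p2's larger volume relative to all volume-weighted operations.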
The example data shown in Figure 2 were developed to illustrate the four intercellular transfer measures PE, VWPE, OSPE, and APIO. In Figure 2, machine routings, annual part type volume data, and three alternative machine-component assignments are provided. The PE measure, which incorporates neither operation sequence nor part type volume data, yields an equal ranking for all three machine-component assignments shown in Matrices A-C. When employing VWPE, the machine-component assignments shown in Matrices A and C are ranked as best. Employing the OSPE measure reveals that the machine-part assignment shown in Matrix B is best. The APIO measure ranks the machine-component assignment given in Matrix A as best. We thus note that none of the three surrogate measures for the actual number of intercellular transfers (PE, VWPE, and OSPE) consistently indicated the machine-component assignment shown in Matrix A, as APIO did.

Part Type   Volume   Machine Routing
c1          10       m1-m2-m3-m4
c2          20       m1-m2-m4
c3          20       m3-m1-m3-m2-m3
c4          10       m6-m5-m4-m3
c5          20       m6-m5-m3
c6          100      m4-m5-m6

[Figure 2 also depicts the three alternative machine-component matrices A, B, and C, with shaded regions marking the cell assignments of machines m1-m6 for part types c1-c6; the matrix entries are not recoverable from this text.]
Performance Measure                                             Matrix A  Matrix B  Matrix C
Proportion Exceptional Elements (PE)                              0.20      0.20      0.20
Volume Weighted Proportion of Exceptional Elements (VWPE)         0.11      0.25      0.11
Operations Sequence Proportion of Exceptional Elements (OSPE)     0.18      0.14      0.27
Actual Proportion of Intercellular Operations (APIO)              0.10      0.22      0.18
Figure 2. Example part type routings, volumes & machine-component matrices.
In addition to directly measuring the number of intercellular transfers statically using APIO, they can also be measured dynamically using computer simulation. For example, in the computer simulation models developed by Shafer and Meredith [14], a variable was updated each time a part traveled between two cells. Since both direct static and dynamic measures are available, a question that naturally arises is whether one method in some sense outperforms the other. Since computer simulation tends to require more effort, APIO might be preferred. However, when using APIO one must implicitly assume that the demand for the parts is known and relatively stable. If these assumptions are met, APIO may perform well for assessing the actual proportion of operations requiring an intercellular transfer. Alternatively, if these assumptions are not realistic, computer simulation may be a more attractive choice for assessing intercellular transfers accurately. Another design goal associated with CM is to reduce setup times. Assessing setup times statically via an equation is very difficult; it requires a great deal of information, such as part volumes, part routings, part processing sequences, and sequence-dependent setup times for all parts at all machines (for an example, see Shafer and Rogers [33]). In addition, a number of assumptions must be made. For example, it must usually be assumed that all parts are produced in each production cycle and that the processing sequence of the parts does not vary. As a result, setup times are probably best measured dynamically via computer simulation. Measuring setup times directly using computer simulation also requires a great deal of data, most importantly sequence-dependent setup times for each machine for all part type pairs processed by that machine. Because it is often not practical to collect these data, setup times will most likely continue to be measured indirectly in the near future.
For example, assumptions about setup time reductions can be built into computer simulation models (see Flynn and Jacobs [3]).

Assessing the investment in new equipment is perhaps the easiest CM design objective to measure. A direct static measure is simply to sum, over all machine types, the number of machines of each type needed multiplied by the cost of that machine. Assessing machine utilization levels can be accomplished statically or dynamically. Statically, ratios of capacity needed to capacity available have long been used; likewise, resource utilization measures are among the most common measures employed in simulation analysis. Caution should be exercised, however, not to overemphasize the machine utilization objective. For example, assuming adequate machine capacity, a shop's performance should be considered improved if its lead times and work-in-process levels are reduced, regardless of the level of machine utilization. Likewise, a shop's performance should be considered improved whenever quality improves and setup times are reduced, regardless of the level of machine utilization. Another important consideration related to machine utilization is how the machines are weighted. Average machine utilization with all machines weighted equally may frequently be inappropriate because in most plants only a few machines are critical.

Quality is perhaps the most difficult CM design objective to measure. Presently, only potential improvements in quality can be indirectly measured by considering the
relationship of quality to other CM design objectives. For example, reductions in the number of intercellular transfers permit holding workers more accountable for product quality. Similarly, ensuring that machines are not overutilized facilitates higher quality because time remains available for preventive maintenance. Finally, reductions in work-in-process levels and lead times help uncover hidden problems in the production process and permit the discovery of quality problems sooner, so that corrective action can be initiated. Thus, the extent of quality improvements can be approximated by assessing the achievement of other CM design objectives. Finally, assessing work-in-process levels and lead times is best accomplished dynamically via computer simulation. Work-in-process levels can be measured as a time-persistent variable and lead times as a simple average of individual lead times over all parts. Analytical models that rely upon queueing theory require assumptions about the frequency distributions of arrivals and service times and are too often difficult or impossible to apply to complex situations.

5. Summary and Future Research

Choosing a measure of performance is part of the modeling of any production process and ultimately depends on management's stated objectives. If appropriate value-based objectives of profit, revenue, or cost are available to judge the quality of CM configurations, then they should be utilized. However, accurate data for such objectives are difficult to obtain at any point in time, and especially difficult to obtain prior to implementing CM. To compare competing CM configuration proposals, we must often judge them by how well they align with the basic objectives of CM.
Most of the performance measures in this article may be rationally utilized to gauge a CM configuration, depending upon the situation encountered and the managerially-stated objectives; in this respect there is no truly dominant measure. If, however, no guidance is provided regarding the desired outcomes of converting to CM, other than the basic tenets of CM, then the WGF measure appears to be one of the better choices for making static decisions, because it trades off exceptional elements outside the cells against voids inside the cells. WGF is an even better measure if PE in its formula can be replaced by APIO. An analyst may also want to separately consider MU, DV, APIO (or OSPE, VWPE, or PE if the data for APIO is not available), TBE, CLM, GE, and GM, but the properties and weaknesses of several of these measures should not be overlooked.

A large amount of research has been devoted to the development of cell formation procedures. Recently, a few studies have focused on comparing these cell formation procedures. The purpose of this paper is to review the solution quality performance measures available and to offer a framework for comparing alternate cell formation procedures. Before useful comparisons of cell formation procedures can be made, meaningful performance measures must be developed. For a performance measure to be meaningful, it should be related to one or more of the design objectives associated with CM. The following nine design objectives associated with CM were identified: 1) minimize intercellular transfers, 2) minimize machine setup time, 3) minimize the investment in new equipment, 4) maintain acceptable machine utilization levels, 5) improve quality, 6) reduce work-in-process, 7) reduce production lead times, 8) maintain on-time delivery performance, and 9) increase job satisfaction. Additionally, performance measures were classified as direct or surrogate, and as static or dynamic.

In terms of assessing intercellular transfers, static and dynamic direct measures are available. A static measure may be more appropriate when product volumes are known and the product mix is relatively stable; otherwise, computer simulation should be used to directly measure intercellular transfers. For assessing setup times, machine utilization, work-in-process levels, and lead times, dynamic measures may often perform best. Because of the large amount of data required, setup times probably can only be practically measured indirectly, while the other three measures can be easily assessed directly. The investment required for new equipment can be measured directly via static measures. Finally, at present, the recommended approach for assessing potential quality improvements is to combine several performance measures such as machine utilization, work-in-process levels, lead times, and intercellular transfers. Combining these measures may often provide an indication of how much the manufacturing environment will support and enhance quality improvement activities.

REFERENCES

1. R.J. Schonberger, World Class Manufacturing, Free Press, New York, 1986.
2. B.B. Flynn and F.R. Jacobs, "A simulation comparison of group technology with traditional job shop manufacturing", Int. J. Prod. Res., 24 (1986) 1171.
3. B.B. Flynn and F.R. Jacobs, "An experimental comparison of cellular (group technology) layout with process layout", Dec. Sci., 18 (1987) 562.
4. J.S. Morris and R.J. Tersine, "A simulation analysis of factors influencing the attractiveness of group technology cellular layouts", Man. Sci., 36 (1990) 1567.
5. U. Wemmerlöv and N.L. Hyer, "Research issues in cellular manufacturing", Int. J. Prod. Res., 25 (1987) 413.
6. C-H. Chu, "Cluster analysis in manufacturing cellular formation", OMEGA Int. J. Man. Sci., 17 (1989) 289.
7. D.F. Rogers, R.D. Plante, R.T. Wong, and J.R. Evans, "Aggregation and disaggregation techniques and methodology for optimization", Oper. Res., 39 (1991) 553.
8. S.M. Shafer, "Cellular manufacturing: a selected bibliography", Working Paper (1994), Department of Management, College of Business, Auburn University, Auburn, Alabama, USA.
9. R.G. Askin and K.S. Chiu, "A graph partitioning procedure for machine assignment and cell formation in group technology", Int. J. Prod. Res., 28 (1990) 1555.
10. F. Choobineh, "A framework for the design of cellular manufacturing systems", Int. J. Prod. Res., 26 (1988) 1161.
11. V.L. Huber and N.L. Hyer, "The human factor in cellular manufacturing", J. Oper. Man., 5 (1985) 213.
12. C. Mosier, "An experiment investigating the application of clustering procedures and similarity coefficients to the GT machine cell formation problem", Int. J. Prod. Res., 27 (1989) 1811.
13. C. Mosier, "Weighted similarity measure heuristics for the group technology machine clustering problem", OMEGA Int. J. Man. Sci., 13 (1985) 577.
14. S.M. Shafer and J.R. Meredith, "A comparison of selected manufacturing cell formation techniques", Int. J. Prod. Res., 28 (1990) 661.
15. A. Ballakur and H.J. Steudel, "A within-cell utilization based heuristic for designing cellular manufacturing systems", Int. J. Prod. Res., 25 (1987) 639.
16. J.R. King, "Machine-part grouping in production flow analysis: an approach using a rank order clustering algorithm", Int. J. Prod. Res., 18 (1980) 213.
17. H.M. Chan and D.A. Milner, "Direct clustering algorithm for group formation in cellular manufacture", J. Man. Sys., 1 (1982) 65.
18. U. Wemmerlöv, "Comments on direct clustering algorithm for group formation in cellular manufacture", J. Man. Sys., 3 (1984) vii.
19. A. Kusiak and W.S. Chow, "Efficient solving of the group technology problem", J. Man. Sys., 6 (1987) 117.
20. A. Kusiak and W.S. Chow, "An efficient cluster identification algorithm", IEEE Trans. Sys., Man and Cyb., SMC-17 (1987).
21. A.J. Vakharia and U. Wemmerlöv, "Designing a cellular manufacturing system: a materials flow approach based on operation sequences", IIE Trans., 22 (1990) 84.
22. M.R. Anderberg, Cluster Analysis for Applications, Academic Press, New York, NY, 1973.
23. C-H. Chu and M. Tsai, "A comparison of three array-based clustering techniques for manufacturing cell formation", Int. J. Prod. Res., 28 (1990) 1417.
24. W.T. McCormick, P.J. Schweitzer, and T.W. White, "Problem decomposition and data reorganization by a clustering technique", Oper. Res., 20 (1972) 993.
25. J.K. Lenstra, "Clustering a data array and the traveling salesman problem", Oper. Res., 22 (1974) 413.
26. M.P. Chandrasekharan and R. Rajagopalan, "An ideal seed non-hierarchical clustering algorithm for cellular manufacturing", Int. J. Prod. Res., 24 (1986) 451.
27. J. Miltenburg and W. Zhang, "A comparative evaluation of nine well-known algorithms for solving the cell formation problem in group technology", J. Oper. Man., Special Issue on Group Technology and Cellular Manufacturing, 10 (1991) 44.
28. M.P. Chandrasekharan and R. Rajagopalan, "MODROC: an extension of rank order clustering for group technology", Int. J. Prod. Res., 24 (1986) 1221.
29. L.E. Stanfel, "Machine clustering for economic production", Eng. Costs and Prod. Econ., 9 (1985) 73.
30. S.M. Shafer and D.F. Rogers, "Similarity and distance measures for cellular manufacturing part I: a survey", Int. J. Prod. Res., 31 (1993) 1133.
31. S.M. Shafer and D.F. Rogers, "Similarity and distance measures for cellular manufacturing part II: an extension and comparison", Int. J. Prod. Res., 31 (1993) 1315.
32. C.S. Kumar and M.P. Chandrasekharan, "Grouping efficacy: a quantitative criterion for goodness of block diagonal forms of binary matrices in group technology", Int. J. Prod. Res., 26 (1990) 233.
33. S.M. Shafer and D.F. Rogers, "A goal programming approach to the cell formation problem", J. Oper. Man., Special Issue on Group Technology and Cellular Manufacturing, 10 (1991) 28.
34. T.D. Klastorin, "The p-median problem for cluster analysis: a comparative test using the mixture model approach", Man. Sci., 31 (1985) 84.
35. S.M. Ng, "Worst-case analysis of an algorithm for cellular manufacturing", Eur. J. Oper. Res., 69 (1993) 384.
36. G. Harhalakis, R. Nagi, and J.M. Proth, "An efficient heuristic in manufacturing cell formation for group technology applications", Int. J. Prod. Res., 28 (1990) 185.
37. L.E. Stanfel, "A successive approximations method for a cellular manufacturing problem", Ann. Oper. Res., 17 (1989) 13.
38. L.E. Stanfel, "Successive approximations procedures for a cellular manufacturing problem with machine loading constraints", Eng. Costs and Prod. Econ., 17 (1989) 135.
39. L.E. Stanfel, "Iterative determination and matching of part groups and machine cells: optimization and successive approximations", Int. J. Prod. Econ., 23 (1991) 213.
40. A. Kusiak, "The generalized group technology concept", Int. J. Prod. Res., 25 (1987) 561.
41. K.R. Gunasingh and R.S. Lashkari, "Machine grouping problem in cellular manufacturing systems - an integer programming approach", Int. J. Prod. Res., 27 (1989) 1465.
42. J.C. Wei and N. Gaither, "An optimal model for cell formation decisions", Dec. Sci., 21 (1990) 416.
Planning, Design, and Analysis of Cellular Manufacturing Systems
A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors)
© 1995 Elsevier Science B.V. All rights reserved.
Performance of manufacturing cells for group technology: a parametric analysis

A. Agarwal, F. Huq, and J. Sarkis

Department of Information Systems and Management Sciences, Box 19437, The University of Texas at Arlington, Arlington, Texas 76019

Cellular Manufacturing (CM) is increasing in importance as a philosophy with broad applicability in domestic manufacturing companies. Lately, CM has been criticized by the academic community for its lack of superior performance relative to a functional layout (FL). Additionally, there is a lack of consensus among researchers on the appropriate manufacturing environment for the applicability of CM. Moreover, there have been very few research efforts to study the behavior of CM systems as a function of setup time, processing time, and their ratio. This ratio is found to be a critical parameter in identifying the proper manufacturing environment for the suitability of CM. This paper uses an analytical model to investigate the relative performance of a partitioned (cellular) system compared to an unpartitioned (functional) system as a function of the ratio between setup time and processing time per unit, varying over a large domain of values. The performance criteria used are flow time and work-in-process. The study provides insights into the conditions under which partitioned, unpartitioned, or both systems are feasible, and also when one strategy outperforms the other.

1. INTRODUCTION

The value of Group Technology (GT) and Cellular Manufacturing (CM) in discrete parts manufacturing industries has now come to the attention of U.S. manufacturers. CM involves processing part families on dedicated clusters of dissimilar machines (cells) with the objective of capitalizing on similar, recurrent activities [1]. CM represents a major technological innovation to most manufacturing organizations, with profound benefits [2]. Nevertheless, CM has come under criticism from practitioners and academicians alike.
In a recent survey of 23 American companies, about 40% indicated that they experienced failure in the results of CM implementation [3]. In companies where CM implementation has been successful, managers were concerned whether the full potential of CM was being achieved. In the last decade, researchers such as Flynn [4], Flynn & Jacobs [5], Leonard & Rathmill [6], and Morris & Tersine [7] have challenged Burbidge's [8,9] long-held viewpoint about the superior performance of CM over functional systems under several parameter ranges. In their simulation studies, these authors found that even though CM showed superior performance characteristics in setup time and move time, it resulted in longer wait times, higher work-in-process (WIP), and longer total flow times. Thus, not only is industry facing such diverse results of CM implementation, but the academic community also lacks a general consensus on the applicability of CM. This has raised a number of questions about CM implementation. One question that should be addressed is the type of environment where CM may perform better than other types of manufacturing systems. Similarly, Wemmerlöv & Hyer [1] point out that it has become imperative to know not only where CM is a feasible alternative, but also in which situations it is preferable to other manufacturing systems. The present CM industrial implementation methodologies are partly responsible for the above problems and concerns because of their failure to recognize setup time reduction as the key component of CM. According to Jordan & Frazier [10], current cell formation techniques either do not focus on the primary objective of CM, i.e. setup reduction, or confuse this objective with other objectives. Setup time reduction plays a pivotal role in furthering the benefits of CM [5,11]. In support of this claim, Jordan & Frazier [10] point out that of 16 benefits attributed to CM, eight are a direct result of setup time reduction. Given the precursory role of setup time reduction in achieving other benefits, it follows that setup time reduction should be the primary objective of cell formation and scheduling. Despite the recognition by academicians of the potential economies of setup time reduction offered by CM, there have been very few and fragmented research efforts to study the behavior of CM systems as a function of setup time, processing time, and their ratio. Even after three decades, the conditions for the economic viability of CM have remained unresolved.
The objective of this paper is to use an analytical model to investigate the relative performance of a partitioned system (PS) and an unpartitioned system (UPS) as a function of the ratio between setup time (T) and processing time per unit (t) over a large set of domain values. This study finds that the ratio between setup time and processing time per unit, in combination with the setup time value, affects the flow time and WIP performance of both PS and UPS. We also identify the domain values over which both systems are infeasible, as well as the values at which feasibility exists. Additional insights into the performance of each strategy are presented, and conditional characteristics concerning the relative performance of the two strategies are identified. The results of this study provide practical insights that manufacturing managers can use for a more informed evaluation of cellular versus functional layouts.

2. ANALYTICAL MODELS FOR PERFORMANCE EVALUATION

Analytical models for performance evaluation of PS and UPS are virtually non-existent. Most studies within this area have relied on either simulation models or empirical research, owing to the complexity of these manufacturing environments. A number of researchers [3,12,13,14] have stated the need for developing analytical tools to evaluate the performance of CM. Some models from one class of analytic models, aggregate dynamic models (ADM) [15], are briefly described below; it is from this class of analytical models that we borrow a model for this study.
ADMs have been defined as analytical techniques incorporating stochastic processes, queueing theory, queueing networks, and reliability theory. According to Suri et al. [15], even though some of the assumptions in these models may be restrictive for certain manufacturing configurations, they tend to give reasonable estimates of performance and are very efficient. The simplest stochastic production system models are the "single queue systems", which form the foundation for more complex analytical models and systems. A queueing network is composed of individual queues that are visited in a deterministic or stochastic routing pattern by customers. This models a manufacturing system where each node (station) represents a set of functionally identical resources (machines, operators, cranes, etc.) called "servers" and each job (customer) represents a workpiece or batch of workpieces. A single node in the network is characterized by its arrival process, its service time distribution, the number of servers at the node, the queue capacity, and the queue discipline. Jackson [16] is the earliest seminal study on analytical models for a job-shop system. He examined an open network of M/M/1 queues, i.e. queues with Poisson arrivals and exponential service times for jobs. Karmarkar [17] and Karmarkar et al. [18] used Jackson's results to model a single queue with single-class products and showed analytically that batch size has a significant effect on queue-related measures such as flow time and WIP. Suresh [19] used Karmarkar's model and developed analytical approximations for a more general GT context to address the effects of partitioning work centers to implement GT on performance measures such as flow time, WIP, and utilization. His study showed that partitioning and setup reduction lead to significant advantages in the cell component, but also to significant adverse effects in the remainder cell.
The net effects for hybrid situations were clearly unfavorable when compared to the functional layout, even with a high degree of setup reduction. In an effort to compare the relative performance of functional and cellular systems, Suresh [20] illustrated the effects of introducing setup reduction while converting from a functional to a cellular system by partitioning. He showed that partitioning a work center leads to adverse effects on flow characteristics. For a specified level of flow time and WIP, machine utilization may be lower in the partitioned system. Any setup reduction introduced has to first overcome this performance deterioration, before leading to the benefits of CM. He concluded that for a given lot size, setup reduction above a certain threshold value has to be achieved before a CM (partitioned) system would outperform a corresponding functional (unpartitioned) system. Suresh recommended that an extension to his study would be an analysis of parameters including setup and processing time to determine an optimal range of values for these parameters for exploiting the benefits of CM. This study helps to achieve this goal. Figure 1 graphically identifies the two systems relevant to this study [20]. In Figure 1, the unpartitioned system (UPS) has all products flowing into the system for processing, with no specific product types being assigned to any machine. The partitioned system (PS) shows that the product types are grouped and allowed to flow into specified machining cells.
Figure 1. Unpartitioned and partitioned systems representations.

The expressions in this study use the following notation:

c  = number of similar machines (and of cells in the PS)
D  = demand rate
Di = demand rate for part i
q  = lot size
qi = lot size for part i
T  = average setup time
t  = average processing time per unit
δ  = setup reduction factor
M/M/c : multi-server queueing model used for the UPS
M/M/1 : c single-server queueing models used for the PS

Mean lot arrival rate (Poisson distributed):
  λ = Σ λi = Σ Di/qi = D/q for the UPS;  λ = D/(qc) for each PS cell

Mean service time (exponentially distributed):
  1/μ = (Tδ + tq)

Using these λ and μ values, a summary of the expressions used to calculate the performance measures (flow time and WIP) is provided in Table 1 [18, 20]. In both systems the per-server utilization is

  ρ = D(Tδ + tq)/(qc) = DT(δ + tq/T)/(qc)

Letting the ratio R = T/t, this becomes

  ρ = DT(δ + q/R)/(qc)                                            (1)
Table 1: Expressions used to calculate performance values of PS and UPS systems.

Measure               UPS (one M/M/c queue)                PS (c M/M/1 queues, each cell)
Arrival rate (λ)      D/q                                  D/(qc)
Service rate (μ)      (Tδ + tq)^-1                         (Tδ + tq)^-1
Utilization (ρ)       D(Tδ + tq)/(qc)                      D(Tδ + tq)/(qc)
Jobs in queue (Lq)    P0 (cρ)^c ρ / [c!(1 - ρ)^2]          ρ^2/(1 - ρ)
Jobs in system (Ls)   Lq + cρ                              Lq + ρ
Flow time             Lq/λ + 1/μ  (= Ls/λ)                 Lq/λ + 1/μ  (= Ls/λ)
WIP (parts)           Ls·q                                 c·Ls·q

where, for the UPS,

  P0 = [ Σ(n=0..c-1) (cρ)^n/n!  +  (cρ)^c/(c!(1 - ρ)) ]^-1
Since the performance measures are a function of ρ, they depend not just on the ratio between setup time and processing time per unit (R), but also on the absolute value of setup time (T). For a given value of R and various T values, the performance behavior of PS and UPS varies substantially. This study therefore examines the performance of both PS and UPS as a function of R and T.
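The Table 1 expressions can be evaluated directly. The sketch below treats the UPS as a single M/M/c queue and the PS as c independent M/M/1 cells, with D = 20 per hour and c = 4 as in Table 6; under these assumptions it reproduces the two starred entries of Table 2 (the best UPS point at q = 60 and the first PS point to beat it at δ = 0.3, q = 15):

```python
import math

# Model sketch: UPS = one M/M/c queue; PS = c independent M/M/1 cells.
# D and c follow Table 6; the formulas follow the Table 1 expressions.
D, c = 20.0, 4

def flow_time_ups(T, t, q, delta=1.0):
    """Expected lot flow time in the unpartitioned (M/M/c) system."""
    lam = D / q                      # lot arrival rate
    mu = 1.0 / (T * delta + t * q)   # lot service rate (setup + run time)
    rho = lam / (c * mu)             # per-server utilization
    if rho >= 1:
        return None                  # infeasible: demand exceeds capacity
    a = lam / mu                     # offered load, a = c * rho
    p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    lq = p0 * a**c * rho / (math.factorial(c) * (1 - rho) ** 2)
    return (lq + a) / lam            # W = Ls / lambda (Little's law)

def flow_time_ps(T, t, q, delta):
    """Expected lot flow time in one partitioned (M/M/1) cell seeing D/c."""
    lam = D / (q * c)
    mu = 1.0 / (T * delta + t * q)
    rho = lam / mu
    if rho >= 1:
        return None
    return (rho / (1 - rho)) / lam   # Ls / lambda for an M/M/1 queue

# Reproduce the two starred entries of Table 2 (T = 3, t = 0.1):
w_ups = flow_time_ups(3, 0.1, 60)        # best UPS flow time, at q = 60
w_ps = flow_time_ps(3, 0.1, 15, 0.3)     # first PS point to beat it
print(round(w_ups, 1), round(w_ps, 1))   # -> 13.6 12.0
print(round(D * w_ups), round(D * w_ps)) # WIP in parts (D*W): -> 272 240
```

The WIP figures agree with Little's law applied to total demand, which is a convenient cross-check on any reconstruction of the tables.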
3. BACKGROUND

The analytical study by Suresh [20] on the relative performance of CM and FL systems indicates that a threshold level of the setup reduction factor must be achieved before PS begins to outperform the UPS. This setup reduction factor appears as δ in the expression for the service rate as defined in Table 1. For a UPS, δ = 1, whereas in a PS, 0 < δ < 1. Suresh's [20] results, as shown in Table 2, considered a single value for setup time (T = 3) and processing time per unit (t = 0.1), providing a ratio (R) of 30. The values are calculated from the expressions in Table 1. It is evident from this table that for the UPS, the best values for flow time and WIP are 13.6 and 272 (identified by asterisks) and occur at q = 60. The first point at which PS can perform better than UPS occurs at δ = 0.3 (i.e. a 70% reduction in setup time) and q = 15, with values of 12.0 for flow time and 240 for WIP respectively (identified by asterisks). As a result, Suresh concluded that for a given lot size, setup reduction above a certain threshold value (δ = 0.3 in this case) needs to be achieved before a PS system would outperform a corresponding UPS system.
Table 2: Performance of PS and UPS systems for T=3, t=0.1, R=30 (from Suresh [20]).

Flow Time                                 Lot size (q)
System      δ       5      15      20      32      40      50      60      80     100
UPS        1.0      -       -       -     52.4    17.3    14.0    13.6*   14.6    16.3
PS         1.0      -       -       -    198.4    56.0    40.0    36.0    35.2    37.1
           0.8      -       -       -     44.8    32.0    28.5    28.0    29.7    32.6
           0.6      -       -     76.0    22.9    21.1    21.3    22.3    25.3    28.8
           0.5      -       -     28.0    17.7    17.6    18.6    20.0    23.4    27.1
           0.4      -     27.0    16.0    14.1    14.9    16.3    18.0    21.7    25.5
           0.3    38.0    12.0*   10.6    11.4    12.7    14.4    16.2    20.1    24.0
           0.2     8.0     7.0     7.4     9.4    10.8    12.7    14.7    18.6    22.6
           0.1     3.7     4.5     5.4     7.7     9.3    11.3    13.3    17.3    21.2

WIP
UPS        1.0      -       -       -    1048     347     279     272*    292     326
PS         1.0      -       -       -    3968    1122     800     700     704     743
           0.8      -       -       -     896     640     569     560     594     653
           0.6      -       -    1520     457     422     425     446     506     576
           0.5      -       -     560     354     352     371     400     468     541
           0.4      -      540     320     282     297     326     360     433     509
           0.3     760     240*    211     228     253     288     325     401     479
           0.2     160     140     149     187     216     255     293     372     451
           0.1      74      90     108     154     186     226     265     345     425

(A dash indicates no feasible value; asterisks mark the best UPS values and the first PS values to outperform them, as discussed in the text.)
We shall focus our study on determining whether Suresh's conclusions are valid for all values of T and R, or whether such validity exists only for a very narrow range of values for T and R. On considering some other values for T and R, we found that the level of setup reduction had no bearing on the relative superior performance of one system over the other: either UPS or PS was always found to be dominant in performance, irrespective of the level of setup reduction. As an example, using the expressions from Table 1 we arrive at Tables 3, 4, and 5, which show the performance measures for the PS and UPS systems under three different parametric values for T and R. Table 3 (T=10, R=0.01) shows that both PS and UPS are infeasible for all levels of δ and q. Table 4 (T=0.001, R=0.01) represents a scenario where, even at a δ value as low as 0.1 for PS, the UPS system performs better, i.e. no level of setup reduction improves the performance of PS over UPS. Table 5 (T=0.01, R=1000) indicates a case where PS is as good as UPS without any setup time reduction, and a slight reduction in setup time allows PS to outperform UPS significantly. The above scenarios (for the T and R parametric variables) show some interesting results and provide insights for selecting the appropriate type of production layout strategy under various process and setup time requirements. These few points represent just a sampling of the appropriate layout strategies to select. To gain further insights, a large range of points, and a mapping of these points, is presented in the next section.
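Both the blanket infeasibility in Table 3 and the δ-insensitivity in Table 4 can be read off the per-server utilization. The sketch below uses ρ = D(Tδ + tq)/(qc), my reading of the Table 1 expressions (it applies to both systems), with D = 20 and c = 4 as in Table 6:

```python
# Per-server utilization; the queueing models are feasible only when rho < 1.
D, c = 20.0, 4

def rho(T, t, q, delta):
    # rho = D(T*delta + t*q)/(q*c), identical for UPS and PS in this model
    return D * (T * delta + t * q) / (q * c)

# Table 3 scenario (T=10, t=1000): overloaded at even the most favorable
# settings (large q, delta=0.1), so neither UPS nor PS is feasible.
print(rho(10, 1000, 100, 0.1))  # ~5000, far above 1
# Table 4 scenario (T=0.001, t=0.1): setup is negligible, so cutting it
# barely moves rho, and setup reduction cannot help PS overtake UPS.
print(rho(0.001, 0.1, 5, 1.0), rho(0.001, 0.1, 5, 0.1))  # ~0.501 vs ~0.5001
```

When tq dominates Tδ, the δ term is inert, which is exactly why Table 4's PS rows barely change with δ.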
Table 3: Performance results of PS and UPS systems for T=10, t=1000, R=0.01.

For every lot size (q = 5 to 100) and every setup reduction factor (δ = 1.0 to 0.1), both UPS and PS are infeasible: there are no feasible values for flow time or WIP.
Table 4: Performance results of PS and UPS systems for T=0.001, t=0.1, R=0.01.

Flow Time                                 Lot size (q)
System        δ        5      15      20      32      40      50      60      80     100
UPS          1.0      0.5*    1.6     2.2     3.5     4.4     5.4     6.5     8.7    10.9
PS       1.0-0.1      1.0     3.0     4.0     6.4     8.0    10.0    12.0    16.0    20.0

WIP
UPS          1.0       11*     33      44      70      87     109     130     174     217
PS       1.0-0.1       20      60      80     128     160     200     240     320     400

(The PS values are identical for every δ from 1.0 to 0.1 because setup time is negligible; asterisks mark the best UPS values.)
Table 5: Performance results of PS and UPS systems for T=0.01, t=0.00001, R=1000.

Flow Time                                 Lot size (q)
System      δ        5      15      20      32      40      50      60      80     100
UPS        1.0     0.01    0.01    0.01    0.01    0.01    0.01    0.01    0.01    0.01
PS         1.0     0.01    0.01    0.01    0.01    0.01    0.01    0.01    0.01    0.01
           0.8     0.008   0.008   0.008   0.008   0.008   0.009   0.009   0.009   0.009
           0.6     0.006   0.006   0.006   0.006   0.006   0.007   0.007   0.007   0.007
           0.5     0.005   0.005   0.005   0.005   0.005   0.006   0.006   0.006   0.006
           0.4     0.004   0.004   0.004   0.004   0.004   0.005   0.005   0.005   0.005
           0.3     0.003   0.003   0.003   0.003   0.003   0.004   0.004   0.004   0.004
           0.2     0.002   0.002   0.002   0.002   0.002   0.003   0.003   0.003   0.003
           0.1     0.001   0.001   0.001   0.001   0.001   0.002   0.002   0.002   0.002

WIP
UPS        1.0     0.20    0.20    0.20    0.21    0.21    0.21    0.21    0.22    0.22
PS         1.0     0.20    0.20    0.20    0.20    0.21    0.21    0.21    0.22    0.22
           0.8     0.16    0.16    0.16    0.17    0.17    0.17    0.17    0.18    0.18
           0.6     0.12    0.12    0.12    0.13    0.13    0.13    0.13    0.14    0.14
           0.5     0.10    0.10    0.10    0.11    0.11    0.11    0.11    0.12    0.12
           0.4     0.08    0.08    0.08    0.09    0.09    0.09    0.09    0.10    0.10
           0.3     0.06    0.06    0.06    0.07    0.07    0.07    0.07    0.08    0.08
           0.2     0.04    0.04    0.04    0.05    0.05    0.05    0.05    0.06    0.06
           0.1     0.02    0.02    0.02    0.03    0.03    0.03    0.03    0.04    0.04
4. AN EXTENDED PARAMETRIC ANALYSIS

This study extends the work of Suresh [20] by performing calculations over a broad range of values for T and R and then investigating the relative performance superiority of CM and FL systems as a function of the level of setup time reduction. It should be noted that Suresh's work was based on the very specific values T = 3 hr. and t = 0.1 hr. Table 6 shows the data used in the analytical expressions for the performance measures. As can be seen from Table 6, the setup (T) and processing time per unit (t) values are meant to cover a large range. Using these parameter values, calculations from the analytical models in Table 1 are generated. These calculations are then mapped onto the graphs shown in Figures 2 and 3.
Table 6: Ranges of parameters used for parametric analysis.

D = annual demand / number of production hours = 72000/3600 = 20 per hour
c = number of part subfamilies = 4
q = {1, 5, 10, 15, 20, 32, 40, 50, 60, 80, 100, 200}
T = 10^-3 to 10^3
t = 10^-9 to 10^3
R = 10^-12 to 10^12
4.1 Results and discussion of extended parametric analysis

Figure 2 shows flow time as a function of R and T. The plot identifies five distinct regions representing the feasibility and superiority of the partitioned or unpartitioned system using the flow time and WIP performance measures. The bottom portion of the plot (Region 1) is a region in which both PS and UPS are infeasible. This infeasibility at larger setup values and ratios can be explained by the limited time available for actual processing of all the part types necessary to meet the demand requirements. In Region 2, which consists of low setup values (T = 10^-3 to 1) and low ratio values (R = 10^-3 to 10), UPS (functional) always outperforms the PS system, irrespective of the level of partitioning in the latter. A reasonable practical explanation for this phenomenon is that functional systems can efficiently handle parts that require very low setup time while providing the added advantage of flexibility. Computer numerical control systems with multiple spindles, heads, etc. may be manufacturing system examples of these types of systems. However, for the same range of setup time values but higher ratio values (meaning t << T), as in Region 3, both PS and UPS are feasible but the former always outperforms the best of the latter. Examples of the types of processing systems in this region are dynamic reconfigurable assembly systems and fabrication lines in semiconductor industry settings; hence PS (cellular manufacturing) may be more appropriate for these systems, based on the performance measures used in this study. Region 4 consists of high setup values (10 to 1000) and higher ratios. Here again PS appears to dominate in all cases. The reason for this dominance may be the economies of setup reduction offered by the partitioned system, which outweigh the UPS performance. This environment would have characteristics of high throughput, but also high setup time.
Thus, the primary constraint on total operation time is setup time availability. Partitioning would best reduce the time required for setting up the equipment. Thermosetting and thermoplastic operations are prime candidates for this region since they have very high setup and low processing times. The middle portion of the graph (Region 5) is a thin band of "mixed" region where either the PS or the UPS system can outperform the other depending on the lot size (q) and setup reduction factor (δ). Previous work by researchers (see [19] and [20]) addressed a very specific case of this region. Clearly these are the most interesting cases, since it is within these areas that the level of setup cost reduction needed to justify the implementation of one system over another must be determined. Figure 3 shows the comparative behavior of flow time for UPS and PS as a function of T and R for a slice of points from Region 5 in Figure 2. It is evident from the plot that for a given setup time, as R decreases, a high level of setup reduction (a low δ value) is needed before the flow time for PS becomes better than the best value for UPS. As mentioned above, an issue of practical importance relating to Region 5 and Figure 3 is the economic and technical feasibility of reducing the δ value. That is, the justification of the conversion from a UPS to a PS environment can now be related to the reduction in setup time that
can be achieved. The benefits associated with a reduction in setup time versus the cost of this reduction must be considered when deciding on the design and planning of a UPS or PS manufacturing strategy. This has a direct and profound set of implications for models that utilize financial and other performance criteria for the design of CM [21,22]. Figure 3 can be used to gain insights into the setup time reduction needed for a particular R value. For example, for a point (T = 1, R = 10) in Region 5 of Figure 2, δ < 0.3 is needed to justify the switch from a UPS to a PS. The graph also shows that as R increases, the best value of flow time for both PS and UPS decreases over the entire range of T. The δ values used here are at decimal increments, but for a given R value, there is clearly a threshold point where it would be more beneficial to switch from a UPS to a PS strategy, or vice versa. This threshold δ value can be calculated by setting the flow time (UPS) expression equal to the flow time (PS) expression and solving for δ. An analysis of the WIP performance metric has shown very similar results, with the five regions exhibiting similar relationships.
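The threshold calculation described above can be sketched numerically. The flow time expressions below are simplified stand-ins (a Sakasegawa M/M/c approximation for the pooled UPS station and an M/M/1 queue per PS cell), not the authors' exact queueing model, and all parameter values are assumed for illustration.

```python
import math

# Illustrative parameters (assumed, in hours): demand d parts/hr, C cells,
# lot size q, setup time T, unit run time t. The queueing forms below are
# simplified sketches, not the paper's exact model.
d, C, q, T, t = 20.0, 4, 20, 0.5, 0.01

def flow_time_ups():
    """UPS: one pooled station with C machines; batches arrive at rate d/q.
    Waiting time via the Sakasegawa M/M/c approximation."""
    lam, s = d / q, T + q * t
    rho = lam * s / C
    if rho >= 1.0:
        return math.inf                      # infeasible, as in Region 1
    wq = rho ** math.sqrt(2 * (C + 1)) / (C * (1 - rho)) * s
    return wq + s

def flow_time_ps(delta):
    """PS: C independent cells, each an M/M/1 queue; setup cut to delta*T."""
    lam, s = (d / C) / q, delta * T + q * t
    if lam * s >= 1.0:
        return math.inf
    return s / (1 - lam * s)

def threshold_delta(tol=1e-9):
    """Bisect for the delta at which PS and UPS flow times coincide."""
    target = flow_time_ups()
    if flow_time_ps(1.0) <= target:          # PS wins with no setup reduction
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if flow_time_ps(mid) <= target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

delta_star = threshold_delta()
print(f"switching to PS is justified for delta < {delta_star:.3f}")
```

Because the PS flow time increases monotonically in δ, a simple bisection recovers the threshold below which the switch from UPS to PS is justified.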
5. CONCLUSION AND EXTENSIONS

This research investigated the behavior of CM and functional systems as a function of a time ratio (T/t) and the value of setup time (T) over a large range of parameter values. The study identified five distinct regions within the defined ranges, showing where PS and/or UPS would be feasible and when one strategy would dominate the other in the selected performance measures. These regions included segments where PS and UPS were both found to be infeasible, where UPS always dominated PS based on the performance measures used, where PS always dominated UPS in performance, and a "mixed" region where either PS or UPS can outperform the other depending upon the level of setup time reduction and the lot size values. The practical implication of this study for manufacturing organizations is that, given different manufacturing processes and environments with varying values of T, t, and R, the results provide a guideline for making a choice between CM (PS) and FL (UPS) strategies. Within the regions of parameters where setup time reduction plays a significant role, firms need to determine whether the cost and technical feasibility of achieving setup reduction justifies the switch from an FL to a CM system, or from a CM system back to an FL. The research in this paper is an initial effort to help gain insights into the pooling synergies associated with partitioned systems and environments. These results are based on simplified queuing models with a number of assumptions; reality is more complex. To help model this complexity more effectively, simulation models for the various environments and parameter values can provide additional insight. An investigation into the applicability of response surface methodology and meta-modeling for these models can also be carried out. Characteristics involving other parametric changes could also be studied.
Figure 2. Mapping of parametric analysis for flow time performance measure.
Figure 3. Graph of dominance threshold setup reduction values (δ) between PS and UPS for the flow time performance measure.
REFERENCES
1. Wemmerlov, U., and Hyer, N., Research issues in cellular manufacturing, International Journal of Production Research, 1987, 26(4), 413-431.
2. DeVries, M., Harvey, S., and Tipnis, V., Group Technology: An Overview and Bibliography, Machinability Data Center, Cincinnati, Ohio, 1976.
3. Wemmerlov, U., and Hyer, N. L., "Cellular Manufacturing in the U.S. Industry: A Survey of Users", International Journal of Production Research, 1989, 27(9), 1511-1530.
4. Flynn, B. B., Repetitive lots: The use of a sequence-dependent setup time scheduling procedure in group technology and traditional shops, Journal of Operations Management, 1987, 7(2), 203-216.
5. Flynn, B. B., and Jacobs, F. R., An experimental comparison of cellular (group technology) layout with process layout, Decision Sciences, 1987, 18(4), 562-581.
6. Leonard, R., and Rathmill, K., "Group Technology - A Restricted Manufacturing Philosophy", Chartered Mechanical Engineer, 1977, 24, 42-46.
7. Morris, J. S., and Tersine, R. J., A simulation analysis of factors influencing the attractiveness of group technology cellular layouts, Management Science, 1990, 36(12), 1567-1578.
8. Burbidge, J. L., The Introduction of Group Technology, Wiley, New York, 1975.
9. Burbidge, J. L., "Change to Group Technology: Process Organization is Obsolete", International Journal of Production Research, 1992, 30(5), 1209-1219.
10. Jordan, P. C., and Frazier, G. V., "Is the Full Potential of Cellular Manufacturing Being Achieved?", Production and Inventory Management Journal, 1993, 34(1), 70-72.
11. Mahmoodi, F., Dooley, K. J., and Starr, P. J., "An Investigation of Dynamic Group Scheduling Heuristics in a Job Shop Manufacturing Cell", International Journal of Production Research, 1990, 28(9), 1695-1711.
12. Gupta, R. A., and Tompkins, J. A., An examination of the dynamic behaviour of part families in group technology, International Journal of Production Research, 1982,
13. Hyer, N. L., and Wemmerlov, U., "Group Technology in the U.S. Manufacturing Industry: A Survey of Current Practices", International Journal of Production Research, 1989, 27(8), 1287-1304.
14. Pullen, R. D., "A Survey of Cellular Manufacturing Cells", The Production Engineer, 1976, 55, 451-454.
15. Suri, R., Sanders, J. L., and Kamath, M., "Performance Evaluation of Production Networks", Handbooks in OR & MS, 1993, 4, 199-285.
16. Jackson, J. R., "Jobshop-Like Queueing Systems", Management Science, 1963, 10(1), 131-142.
17. Karmarkar, U. S., "Lot Sizes, Lead Times and In-Process Inventories", Management Science, 1987, 33(3), 409-418.
18. Karmarkar, U. S., Kekre, S., Kekre, S., and Freeman, S., Lot sizing and lead time performance in a manufacturing cell, Interfaces, 1985, 15(2), 281-294.
19. Suresh, N. C., Partitioning work centers for group technology: Insights from an analytical model, Decision Sciences, 1991, 22(4), 772-791.
20. Suresh, N. C., Partitioning work centers for group technology: Analytical extension and shop-level simulation investigation, Decision Sciences, 1992, 23(2), 267-290.
21. Askin, R., and Subramanian, S., A cost-based heuristic for group technology configuration, International Journal of Production Research, 1987, 25(1), 101-114.
22. Kusiak, A., and Chow, W. S., Efficient solving of the group technology problem, Journal of Manufacturing Systems, 1987, 6(2), 117-124.
Planning, Design, and Analysis of Cellular Manufacturing Systems
A. K. Kamrani, H. R. Parsaei and D. H. Liles (Editors)
© 1995 Elsevier Science B.V. All rights reserved.
Design of A Manufacturing Cell in Consideration of Multiple Objective Performance Measures

T. Park^a and H. Lee^b

^a Organization and Management, San Jose State University, One Washington Square, San Jose, CA 95192
^b Department of Management Information Systems, Korea Advanced Institute of Science and Technology, 207-43 Cheongryangridong, Dongdaemoongu, Seoul, Korea

This paper presents a new approach to the design of a manufacturing cell (MC) with multiple performance objectives via simulation-based design of experiments and Compromise Programming. The Taguchi Method is employed to establish an efficient design of experiments which reduces the computational effort while still obtaining enough statistical information for designing the MC. After all simulation-based experimental runs are conducted, multiple performance objectives are formulated through a regression analysis of the experimental results. Finally, design parameters associated with the multiple objective MC problem are determined using Compromise Programming. Numerical results from an application of the method to an MC with six workstations have shown that it allows MC designers to interactively compromise among conflicting performance objectives while determining system parameters with a significantly reduced number of simulation experiments.

1. INTRODUCTION

Group Technology (GT)-based cellular manufacturing (CM) has recently emerged as an important manufacturing strategy that changes manufacturing practices from functional production to small- to medium-sized batch production. Some of the benefits reported from applications of CM in industry include reduced work-in-process inventories, shortened production lead times, improved product quality, better labor management, reduced paperwork, and so on (Hyer (1987), Kusiak and Chow (1988), and Hyer and Wemmerlov (1989)).
Manufacturing cells, which together compose a cellular manufacturing system (CMS), are clusters of machines or processes, located in close proximity, which are associated with the manufacture of a part family (or part families). Parts in a part family are similar in their processing requirements (e.g., required operations, tolerances, machine tool capacities, etc.) and/or geometrical shapes (Wemmerlov (1988)). Designing the CMS involves two major decisions: cell formation and a system design of each manufacturing cell. The cell formation, which is the first step in the design of a CMS, is to
group parts (machines) into part families (workcells) and to assign part families to the cells. This task is based upon job routings, workloads, design attributes, and so forth. During the past two decades, considerable research has developed several different methods to solve the cell formation problem. These methods, classified into five categories by Chu (1989), are presented in the following papers: (1) Array-based methods: King (1979, 1980), Chandrasekharan and Rajagopalan (1986a), Chan and Milner (1981), Kusiak and Chow (1987); (2) Similarity-coefficient-based methods: McAuley (1972), Mosier and Taube (1985), Seifoddini and Wolfe (1986), Waghodekar and Sahu (1984), Wei and Kern (1989), Luong (1993), Chow and Hawaleshka (1993); (3) Cluster analysis: Stanfel (1985), Chandrasekharan and Rajagopalan (1986b, 1987), Srinivasan and Narendran (1991); (4) Graph theoretical methods: Rajagopalan and Batra (1975), Vanelli and Kumar (1986), Ballakur and Steudel (1987); and (5) Mathematical programming methods: Kusiak (1987), Choobineh (1988), Shtub (1989), Gunasingh and Lashkari (1989), Rajamani et al. (1990), Srinivasan et al. (1990). Besides the above five categories, expert system (Kusiak (1988)), fuzzy mathematics (Xu and Wang (1989)), and heuristic algorithm (Tabucanon and Ojha (1987) and Harhalakis et al. (1990)) approaches have also been applied to solving the cell formation problem. The second design stage, which is to design the details of each individual manufacturing cell created at the first stage of cell formation, includes determination of (1) tools, fixtures, and pallets, (2) material handling equipment, (3) an equipment layout, (4) the number of machine operators, (5) the assignment of these operators to the workstations (or machines), (6) the capacity of buffers between workstations, and (7) a machine-setup policy indicating how many machines should be engaged in processing a new job arriving at a workstation.
In the previous research by Wemmerlov and Hyer (1987), the first three design issues listed above have been addressed along with grouping parts (machines) into part families (cells). Most research on the design of CMSs to date has overlooked the effects of the other design issues (especially, operators and their assignment to machines) on system performance. It should be noted that Wemmerlov and Hyer (1989) reported from their mail survey that, of the 32 companies surveyed, 25 had only manned cells, one had only unmanned cells with a high degree of automation, and six had both types of cells. The choice of performance measures in a manufacturing system depends highly on management policy and decision making. The literature on the design and operation of CMSs has shown that most past research used only a single performance measure as the objective to achieve. However, optimizing one performance objective may lead to sacrificing the other objective(s). For example, the objective of minimizing in-process inventory conflicts with that of maximizing the production rate in some manufacturing environments where workloads among workstations are not well balanced or machines in workstations are not quite reliable. Thus, the objective of this research is to present a method for designing a manufacturing cell with more than one system performance measure to optimize. The multiple objective design method is composed of three serial decision processes: (1) determining the objective functions of system performance measures via simulation-based experiments designed using the Taguchi Method (TM), (2) constructing a multi-objective
mathematical programming problem with the determined objective functions, and (3) determining the values of design parameters in the mathematical programming problem using the compromise programming (CP). For a numerical example, a manufacturing workcell with six workstations is employed, and the following four parameters involved in the second design stage are to be determined: (1) the number of operators, (2) assignment of operators, (3) buffer sizes between workstations, and (4) a machine-setup policy in a workstation. The design problem considers four performance measures simultaneously: (1) minimizing job tardiness, (2) minimizing in-process inventory, (3) maximizing a production rate, and (4) maximizing manufacturing cell utilization.

2. DESCRIPTION OF A MANUFACTURING CELL AND ITS DESIGN PROBLEMS
A manufacturing cell treated in this research, as shown in Figure 1, consists of dissimilar workstations grouped together to process a set of similar parts. Since workstation workloads usually vary, a workstation might contain more than one functionally identical semi-automatic machine to balance workloads among workstations. Examples of such workstations with semi-automatic machines include machine shops, textile mills, rubber molding shops, tire molding shops, and so on. To further minimize the effects of unbalanced workloads, finite buffer storage typically exists between workstations. Jobs are transported on a piece-by-piece basis from upstream to downstream workstations. Operators in a manufacturing cell are responsible for setting up the machines and for loading and unloading individual workpieces. Since the manufacturing cell is equipped with semi-automatic machines, operators with cross-functional skills can run several machines simultaneously.
Figure 1. A General Layout for a Manufacturing Cell (incoming job storage feeding Workstations #1 through #N, with outgoing job storage at the end).
As mentioned in Section 1, optimal values of the following parameters should be determined to complete a manufacturing cell design successfully: (1) tools, fixtures, and pallets, (2) material handling equipment, (3) equipment layout, (4) number of machine operators, (5) assignment of these operators to the workstations (or machines), (6) capacity of buffers between workstations, and (7) machine-setup policy in a workstation. Tools should be standardized to minimize tool changes, thereby reducing cycle times and increasing product throughput rates. When numerically controlled machines are used, it is necessary to allocate tools to machines properly in order to reduce setup times (refer to Sarin and Chen (1987), and Co et al. (1990)). In addition, fixtures and pallets should be used efficiently in light of rapid setup changes. Snead (1989) asserted that the best way to design a material handling system is to determine the two parameters that the system must accommodate: namely, the load-carrying capacity and the amount of routing flexibility. While large and heavy parts are in general handled with power roller conveyors, free roller conveyors, rail transports, spinning table systems, monorails, and floating pallets, belt conveyors and automatic guided vehicles are usually used for carrying light loads. The physical equipment layout of a cell will also affect its performance: the efficiency of the flow of materials within the cell, the efficiency of direct labor, and the flow of product to any off-line process required. Snead (1989) insisted that the efficiencies of material flow and labor should be considered simultaneously in forming a cell, which will retain the maximum flexibility for future expansion in both cell size and family accommodation. Logendran (1991) presented a new model for determining machine-part clusters in cellular manufacturing which takes into account the sequence of operations and the impact of a cell's layout.
In highly automated flexible manufacturing systems (FMSs), the operator issue might not be a critical design factor. However, in manned cells with a low degree of automation or high labor intensity, operators are necessary for setting up machines, loading/unloading workpieces, or processing the workpieces on a machine. Black (1983) illustrated a schematic diagram of a manned cell using conventional machine tools - a cell laid out in U-shape and staffed by three multifunctional operators. (See Figure 2.) Stecke (1982) and Stecke and Aronson (1985) addressed some effects of assigning either too many or too few machines to an operator: (1) an overworked operator with a large number of machines can become fatigued, might try to speed up the required tasks (possibly resulting in quality problems, defective output, and safety problems), or cannot produce the output expected by management; and (2) assigning too few machines results in unnecessary labor costs. Under the titles of either operator/machine interference problems or assignment of operators to machines, there is a handful of research in the literature on operator assignment problems in several different manufacturing environments. (Refer to Stecke and Aronson (1985) for a comprehensive review of operator assignment models.)
Figure 2. Schematic of a Manned Cell Using Conventional Machine Tools - Cell Laid Out in U-shape and Staffed by Three Multifunctional Workers (Black 1983).

Elsayed and Kao (1990) presented a deterministic model for designing a flow shop production system with manufacturing cells where machines are served by robots. Their model determines the number of identical machines needed for each operation, an assignment of operations to each manufacturing cell, the number of robots required in each cell, and an assignment of robots to operations in a cell. Although the previous research took into account production interference caused by machines and/or operators, it did not reflect in the problem models the production interference resulting from finite buffer storage between workstations (the "starving/blocking" phenomena). In-process inventories between two workstations (or operations) serve to smooth and balance work flow in a CMS by decoupling blocking and starving phenomena, both of which are caused by different operation times on machines, service time variability, the inability to process workpieces for a part during the setup period on a machine, machine breakdowns, and
so forth. As the size of the buffer storage increases, system performance will be enhanced. However, since larger buffer storage requires more storage space and incurs higher inventory holding costs, an appropriate buffer storage size must be determined in order to reduce manufacturing cost while maintaining a desirable production rate. The finite buffer analysis and design problem of production lines associated with high volume production is one of the oldest areas of research in Industrial Engineering/Operations Research. Most research (for example, Gershwin and Schick (1983), Yeralan and Muth (1987), and Choong and Gershwin (1987)) aimed at the development of models for determining the system performance (e.g., production rate) of production lines with a given buffer storage. However, even with a given model for measuring system performance, determining buffer sizes still remains an open research problem. Chow (1987) discussed two sources of difficulty in solving the problem: (1) the lack of an algebraic relation between line throughput and buffer sizes, and (2) the nature of combinatorial optimization inherent in the buffer design problem. When there is more than one machine in a workstation of a manufacturing cell, a machine setup policy should be specified to indicate how machines will be set up for incoming jobs. Two extreme policies can be considered: all machines in a workstation are set up to process a job (called "setup policy #1"), or only one machine is set up for a job (called "setup policy #2"). Setup policy #2 is applicable to situations in which the order quantities are relatively small, the setup times are relatively large, and/or the flow of parts through the cell follows parallel paths with relatively even workloads required in workstations. In essence, policy #2 can be used at any time when it is inappropriate to set up more than one machine in a workstation for a single job.
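The combinatorial character of the buffer design problem that Chow (1987) describes can be made concrete with a brute-force sketch: enumerate candidate allocations and keep the smallest one that meets a target rate. Since no algebraic throughput formula exists, the evaluator below is a made-up monotone proxy standing in for a simulation; the stage rates and target are likewise assumed.

```python
from itertools import product

# Hypothetical 3-stage line. The true throughput of a finite-buffer line has
# no closed algebraic form (Chow 1987), so this stub stands in for the
# simulation or analytic evaluator one would use in practice. It is monotone
# non-decreasing in each buffer size.
RATES = (9.0, 10.0, 9.5)                     # assumed stage rates (parts/hr)

def throughput(buffers):
    """Toy proxy: bottleneck rate minus a blocking/starving penalty that each
    additional buffer slot relieves."""
    return min(RATES) - 0.5 * sum(1.0 / (1 + b) for b in buffers)

def smallest_allocation(target_rate, max_per_buffer=10):
    """Exhaustively enumerate allocations over the K-1 inter-stage buffers and
    return one with the fewest total slots that meets the target rate."""
    best = None
    for alloc in product(range(max_per_buffer + 1), repeat=len(RATES) - 1):
        if throughput(alloc) >= target_rate:
            if best is None or sum(alloc) < sum(best):
                best = alloc
    return best

alloc = smallest_allocation(target_rate=8.79)
print(alloc, round(throughput(alloc), 3))
```

Even for this toy three-stage line the search space grows as (B+1)^(K-1) in the per-buffer bound B, which is why heuristic methods such as Park's (1993) two-stage approach are attractive.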
A variety of mixture policies can be adopted to reduce setup time (eventually, to increase throughput rate), or to meet due dates better. As the number of stages in the production line increases, the allocation of buffer storage will become more difficult to ascertain due to the combinatorial nature of the problem and its unique characteristics, which will be discussed later in detail. Although some researchers applied traditional optimization methods, such as the gradient method (Ho et al. (1979)) and Hooke and Jeeves method (Altiok and Stidham (1983)), to solving buffer allocation, the optimization methods were not appropriate for obtaining global optimal solutions in light of the lack of capability for handling the unique characteristics of the buffer design problem (for example, the discreteness of buffer sizes and the monotonicity of production rate over buffer sizes). Park (1993) presented an efficient two-stage heuristic method for designing finite buffer storage in the production lines by using a dimension reduction method and a beam search method. His heuristic method, however, attempted to achieve a given desired level of a single objective (i.e., throughput rate). The literature on the design and operation of the CMSs has, as mentioned in the previous section, shown that most past research used only a single performance measure as their objective to achieve. However, management's decision process on the design of a cell might not be led based on a single performance measure; it may be a complicated process due to interrelated multiple measures involved. Smith et al. (1986) reported that the following measures are most likely to be used in the flexible manufacturing system (FMS) environment:
(1) meeting due dates or minimizing job tardiness, (2) maximizing system utilization, (3) minimizing in-process inventory, (4) maximizing a production rate, (5) minimizing setup time and tool changes, (6) minimizing flow time, and (7) balancing machine usage. (The sequence indicates their importance level as ranked by 22 U.S. companies.) In contrast with past research on the design and operation of manufacturing systems, several researchers have recently applied multi-objective decision making approaches to solving production planning and control problems with more than one objective performance measure. Lee and Jung (1989) and Dean et al. (1990) employed a goal programming (GP) approach to modeling an FMS production planning problem in the context of multiple objectives, such as production rate, machine utilization, throughput time, tool usage time, and value of parts. Unlike the above research applying GP to multi-objective FMS problems, Ro and Kim (1990) applied a simple lexicographic-ordering comparison to identify the most efficient process selection rule which could satisfy all multiple objectives, such as makespan, mean flow time, mean tardiness, maximum tardiness, and system utilization.

3. A MANUFACTURING CELL DESIGN METHOD WITH MULTIPLE OBJECTIVES
After part family/cell formation for the design of a CMS, a method for the second stage of designing the CMS is developed as illustrated in Figure 3. Its first phase identifies design parameters and cell performance measures of interest. Since it is often very difficult to develop a mathematical (or analytical) model for a manufacturing cell due to the complicatedly interrelated effects of the design parameters on cell performance, simulation methods are usually used in the second phase for modeling the manufacturing cell with a given set of design parameters. Then, values of the cell performance measures are collected by running the manufacturing cell model via a design of experiments, such as the TM. After that, statistical tests are conducted by ANOVA (analysis of variance) to identify significant design parameters. At the next phase, the cell performance measures are formulated with the identified significant design parameters through a regression analysis, resulting in a multi-objective mathematical programming problem. Finally, the values of design parameters in the mathematical programming problem are determined using the CP. After discussing reasons for employing the TM and CP for the multi-objective cell design problem in this section, the details of the methods will be explained in the next section. Computer simulation models have been widely used to alleviate the restrictions on the use of analytical models for designing and analyzing the CMS. Welke and Overbeeke (1988), and Harshell and Dahl (1988) used simulation models to determine the number of operators, and the number and capacity of machines for a CMS. Although a computer simulation model can provide decision makers with more accurate insight into the operations of a system, the major drawback of this methodology lies in its high cost of computer time, which in general increases exponentially with the number of decision variables.
To reduce the computational effort while still obtaining enough statistical information for designing a manufacturing system, Schmidt and Meile (1989), and Chanin et al. (1990) advocated the use of the TM, which is a variation of the fractional factorial design of experiments. Furthermore, Chanin et al. (1990) compared three major experimental design techniques (i.e., full factorial design, fractional
factorial design, and the TM) in terms of five criteria, including the number of runs, ease of implementation, flexibility of design, desired confounding pattern, and ease of analysis. They observed that the TM can be easily understood and implemented by practitioners, as well as by researchers who have limited statistical knowledge. In addition, greater savings on the number of test runs can be achieved by the TM, compared with the full factorial design of experiments.

Figure 3. A Procedure for Designing a Manufacturing Cell:
(1) Identify design parameters and cell performance measures of interest.
(2) Develop a model (e.g., a simulation model) for evaluating performance of a manufacturing cell with given values of design parameters.
(3) Run the manufacturing cell model via a design of experiments technique, such as the Taguchi Method, in order to collect data for the cell performance measures of interest.
(4) Determine statistically significant design parameters by the analysis of variance (ANOVA).
(5) Formulate the cell performance measures in a mathematical form of the identified significant design parameters through a regression analysis.
(6) Construct a compromise programming problem using the formulae of the multiple cell performance measures.
(7) Determine the design parameters by solving the compromise programming problem.
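The regression step of the procedure above (formulating each performance measure as a function of the significant design parameters) can be sketched with ordinary least squares. The coded factor levels and response values below are invented placeholders for simulation output, and a full 2^3 design is used in place of a Taguchi fraction for clarity.

```python
import numpy as np

# Regression sketch: fit one performance measure (say, mean tardiness) as a
# linear function of coded design-parameter levels. Levels 1/2 are coded as
# -1/+1; the eight response values are invented placeholders for simulation
# output.
runs = [
    [1, 1, 1], [1, 1, 2], [1, 2, 1], [1, 2, 2],
    [2, 1, 1], [2, 1, 2], [2, 2, 1], [2, 2, 2],
]
X = np.array([[1.0 if lvl == 2 else -1.0 for lvl in row] for row in runs])
y = np.array([12.1, 10.4, 11.8, 9.9, 14.2, 12.6, 13.7, 12.0])

A = np.column_stack([np.ones(len(y)), X])     # prepend intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("tardiness ~ {:.4f} + {:.4f}*x1 + {:.4f}*x2 + {:.4f}*x3".format(*coef))
```

The fitted expression, here for a hypothetical tardiness measure, becomes one objective function of the multi-objective mathematical programming problem that the CP stage then solves.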
Mathematical models with multiple objectives, such as goal programming (GP), multi-objective linear programming (MOLP), compromise programming (CP), etc., have been developed to resolve the dilemma of conflicting objectives occurring in many areas: manufacturing, engineering design, publishing, tax shelters and investment, and capital budgeting. The CP is a relatively recent methodology. It is much more flexible than GP and MOLP in that it combines the best and most useful features of both. It is not limited to linear cases; it can be used for identifying non-dominated solutions under the most general conditions; it allows prespecified goals; and, most importantly, it provides an excellent base for interactive programming. (Refer to Lee et al. (1994) for an application of the CP to a wide area telecommunication network.)
4. TAGUCHI METHOD

The Taguchi Method (TM) for product and process design has spread widely in Japanese industry, and has received much attention in the United States since the mid-1980s. Although the TM is based on the statistical design of experiments, this approach to optimizing design parameters differs from the traditional design of experiments in its use of orthogonal arrays for initial exploration and of signal-to-noise ratios for optimization. (See Barker (1986), Kackar (1985), and Taguchi (1987) for further details of the TM.) One major contribution of the TM is that it allows a designer to obtain the same effective information as a full factorial design of experiments with significantly fewer experimental runs. For example, the TM with two treatment levels requires only 8 test runs for experiments with 7 design parameters, instead of the 128 test runs that a full factorial design would need. As the number of design parameters increases, the number of experiments required is reduced dramatically. Although there are some concerns and critiques of the TM, many practitioners in a variety of industries have used the method and have published their success stories. (See Box (1988) for concerns and criticism.) The heart of the TM lies in establishing the design of experiments by using an orthogonal array and linear graphs. An orthogonal array (OA) is a matrix indicating how system analysts ought to set up the design of experiments to obtain the necessary information in a systematic and economical way. It provides the main effects of design factors as well as interaction effects. Many statisticians have developed complicated OAs in order to accommodate studies of the main and interaction effects of many factors. Most applications of the TM have, however, used only a limited number of arrays such as L4, L8, L9, L12, L16, L27, and L32, where Lk indicates that k experimental runs are required. (L8 is depicted in Table 1.)
Factors involved in an experiment may have not only main effects on the performance measure but also significant interaction effects. Interaction effects occur when changes in the level of one factor alter the influence of another factor on the experimental results. In the TM, two-factor interaction effects can be designed into an OA via linear graphs. A linear graph is a very efficient tool for identifying the columns of an OA that are appropriate for the interaction effects of factors. Triangular tables, which are equivalent to linear graphs, are also often used. Table 2 shows an example of a triangular table for L16.
Table 1
Orthogonal Array L8.

Expt. No.   A   B   C   D   E   F   G
    1       1   1   1   1   1   1   1
    2       1   1   1   2   2   2   2
    3       1   2   2   1   1   2   2
    4       1   2   2   2   2   1   1
    5       2   1   2   1   2   1   2
    6       2   1   2   2   1   2   1
    7       2   2   1   1   2   2   1
    8       2   2   1   2   1   1   2

The entries 1 and 2 in the table indicate the low and high levels of each factor.
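The defining property of an orthogonal array such as L8 is that, for every pair of columns, each combination of levels appears equally often. A minimal sketch (plain Python; the array is the standard L8 from Table 1) that verifies this balance:

```python
from itertools import combinations, product

# The L8 orthogonal array from Table 1: 8 runs x 7 two-level columns (A-G).
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

def is_orthogonal(array):
    """Check that every pair of columns contains each level pair equally often."""
    n_runs, n_cols = len(array), len(array[0])
    for c1, c2 in combinations(range(n_cols), 2):
        counts = {pair: 0 for pair in product((1, 2), repeat=2)}
        for row in array:
            counts[(row[c1], row[c2])] += 1
        # For L8, each of the 4 level pairs must occur 8 / 4 = 2 times.
        if any(v != n_runs // 4 for v in counts.values()):
            return False
    return True

print(is_orthogonal(L8))  # → True
```

A two-level array failing this check would confound the main effects of the factors assigned to the unbalanced columns.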
Table 2
A Triangular Table for the Orthogonal Array L16.

[Table 2 (omitted): for each pair of columns of L16, the table lists the column that carries their two-factor interaction. For example, its entry for columns 5 and 8 is 13, indicating that column 13 carries the interaction effect of the factors assigned to those two columns.]
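For regular two-level arrays numbered in the standard order (L4, L8, L16, ...), the triangular table can be reproduced arithmetically: the interaction of columns a and b falls in column a XOR b of the binary column numbering. This convention is an assumption about the standard arrays, not a rule stated in this chapter, but it matches the L16 entry cited above:

```python
def interaction_column(a: int, b: int) -> int:
    """Column of a standard-order two-level regular OA (L4, L8, L16, ...)
    that carries the two-factor interaction of columns a and b."""
    return a ^ b

# Reproduces the triangular-table entry cited in the text for L16:
print(interaction_column(5, 8))  # → 13
# The familiar L8 linear-graph entry: columns 1 and 2 interact in column 3.
print(interaction_column(1, 2))  # → 3
```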
5. COMPROMISE PROGRAMMING (CP)
The objective of the CP technique is to model human behavior in the pursuit of goals under multiple-objective situations. It should be noted that human decision making is not simply the maximization or minimization of a single goal, but a search for stable patterns of harmony among several conflicting goals (Zeleny (1992)). Each goal in a decision-making process is expressed as a function of the decision variables. Once the target values of the goals which a decision maker attempts to achieve are set, CP is used to reach the best decision through an iterative target-setting process which helps to reduce the deviations of the goal values from their targets. (For further details of CP, readers are referred to Romero (1991), Shi and Yu (1989), Yu (1985), and Zeleny (1974).) The CP model minimizes a regret function which combines all deviations of the goals from their target values. For given n goals, suppose that the vector y = (y_1, ..., y_n) is a set of goal functions. Let y* be a target vector which is initially set by a decision maker. The regret of having y instead of achieving the target y* is represented by the distance between y and y*. Thus, the regret function is defined by r(y) = ||y - y*||. It is presented in the following form of the Lp metric (p >= 1), where p is a parameter defining the family of distance functions:
r(y|p) = [ Σ_{i=1}^{n} ( |y_i − y_i*| / k_i )^p ]^(1/p),

where k_i is a normalization value for the i-th goal measure. Since the goals in a decision-making process often have different degrees of importance, importance weights ω_i should be assigned to the goals y_i, where Σ_{i=1}^{n} ω_i = 1. Thus,

r(y|p,w) = [ Σ_{i=1}^{n} ω_i^p ( |y_i − y_i*| / k_i )^p ]^(1/p), where w = (ω_1, ..., ω_n).
The estimation of the weight vector w is not a trivial task. First, each goal is compared with every other goal in terms of importance. The results of all pairwise comparisons are recorded in a matrix A = [a_ij], where a_ij (i = 1, ..., n; j = 1, ..., n) indicates the relative importance of goal i compared to goal j. For instance, if goal i is twice as important as goal j, then a_ij = 2. All diagonal elements of A are set to 1, and its lower triangle holds the reciprocals of the upper triangle (a_ji = 1/a_ij). The weight vector w can then be calculated by applying an eigenvalue method to A. (Refer to Saaty (1977) for a detailed explanation of the eigenvalue method.)

The absolute-value signs in the above regret function can be removed by introducing new variables d_i^- and d_i^+ as follows: for i = 1, ..., n,

d_i^- = y_i* − y_i if y_i < y_i*, and d_i^- = 0 otherwise;
d_i^+ = y_i − y_i* if y_i > y_i*, and d_i^+ = 0 otherwise.

Then |y_i − y_i*| = d_i^- + d_i^+, y_i* − y_i = d_i^- − d_i^+, and d_i^- × d_i^+ = 0. Combined with the previous result, the regret function can be rewritten as

r(y|p,w) = [ Σ_{i=1}^{n} (ω_i / k_i)^p ( d_i^- + d_i^+ )^p ]^(1/p).

Note that the minimization of the above regret function is equivalent to the minimization of

r'(y|p,w) = Σ_{i=1}^{n} (ω_i / k_i)^p ( d_i^- + d_i^+ )^p.
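The d^-/d^+ split used in this derivation can be sketched directly; the example values below are illustrative:

```python
def split_deviation(y, y_star):
    """Split the deviation of an achieved goal y from its target y_star into
    underachievement d_minus and overachievement d_plus."""
    d_minus = max(y_star - y, 0.0)  # nonzero only when y falls short of the target
    d_plus = max(y - y_star, 0.0)   # nonzero only when y exceeds the target
    return d_minus, d_plus

d_minus, d_plus = split_deviation(y=3.0, y_star=5.0)
print(d_minus + d_plus)  # |y - y*|: prints 2.0
print(d_minus - d_plus)  # y* - y: prints 2.0
print(d_minus * d_plus)  # complementarity d- x d+ = 0: prints 0.0
```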
Therefore, design problems with multiple objectives can be presented in the following CP formulation (for simplicity and without loss of generality, r' is replaced by r):

Minimize    r(y|p,w) = Σ_{i=1}^{n} (ω_i / k_i)^p ( d_i^- + d_i^+ )^p
Subject to  y_i* − f_i(X) = d_i^- − d_i^+,  for all i = 1, ..., n,
            B X = b,

where X is a decision vector (x_1, ..., x_n) with decision variables x_i (i = 1, ..., n), f_i(X) is the i-th goal function, so that y_i = f_i(X), B is a constraint coefficient matrix, and b is the right-hand-side vector of the constraints.
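The weight vector w in this formulation can be estimated from pairwise comparisons by the eigenvalue method described above. A sketch using power iteration; the comparison matrix below is a hypothetical example, not data from this chapter:

```python
def ahp_weights(A, iterations=100):
    """Approximate the principal eigenvector of a pairwise-comparison
    matrix by power iteration, normalized so the weights sum to 1."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iterations):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Hypothetical comparisons: goal 1 is twice as important as goal 2 and
# four times as important as goal 3; reciprocals fill the lower triangle.
A = [
    [1.0, 2.0, 4.0],
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
]
print([round(x, 4) for x in ahp_weights(A)])  # → [0.5714, 0.2857, 0.1429]
```

For a perfectly consistent matrix such as this one, the weights recover the stated importance ratios exactly; for mildly inconsistent judgments, the principal eigenvector still provides a reasonable compromise.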
It is not uncommon for system designers to check whether the target values of the goals are achieved after solving the above compromise programming problem with multiple goals. If the target values of some goals are not attained at a satisfactory level, the system designers may attempt to improve those goals by adjusting the target values of other goals. This iterative compromising process continues, as shown in Figure 4, until a satisfactory solution is reached.
Figure 4. An Iterative Compromising Process in Compromise Programming. (Flowchart: construct a design problem from x, f(x), y*, w, and p; solve the problem using compromise programming; if the solution is satisfactory, adopt it and terminate the iterative searching process; otherwise, adjust the target values and solve again.)
6. A NUMERICAL EXAMPLE OF THE MULTIPLE OBJECTIVE MANUFACTURING CELL DESIGN
6.1. Description of a Manufacturing Cell Problem

A. Description of a manufacturing cell configuration. A gear manufacturing cell with six workstations (or machine centers), illustrated in Figure 5, is used as a numerical example of the multi-objective cell design method presented in Section 3. The six operations of the gear cell are: (1) turn side number 1 (of a forging blank), (2) turn side number 2 and bore, (3) hob teeth, (4) deburr/shave teeth, (5) broach keyway, and (6) inspect and package. The gear cell machines 98 different gears, with processing and setup times varying from operation to operation. Buffer storage is located between adjacent workstations to reduce the imbalance of workloads among workstations.

B. System modeling via simulation. The gear cell presented in Figure 5 is modeled using STARCELL, a PC-based simulator that assists in the design and evaluation of flow-line manufacturing cells. (Refer to Steudel and Park (1987) for details of STARCELL.) Parts coming to the cell are processed on a first-come-first-served basis. Operators' travel times between workstations are given and constant. To avoid the initial transient in system performance, the collection of statistics from the simulation results begins after 40 hours (i.e., one week), and data collection ends at 2,000 hours (i.e., one year of 50 weeks).
Figure 5. Layout of a Gear Manufacturing Cell Used for a Numerical Example.

C. Design of experiments via the Taguchi Method. The following eight decision variables are involved in the manufacturing cell simulation model:
i) machine setup policy,
ii) number of operators,
iii) assignment of operators to workstations in the cell,
iv) buffer storage #1 between workstations 1 and 2,
v) buffer storage #2 between workstations 2 and 3,
vi) buffer storage #3 between workstations 3 and 4,
vii) buffer storage #4 between workstations 4 and 5,
viii) buffer storage #5 between workstations 5 and 6.

The following four performance measures are to be optimized in this research:
i) minimize average tardiness per job (for simplicity, called job tardiness throughout this paper),
ii) minimize in-process inventory,
iii) maximize production rate,
iv) maximize system utilization.

For simplicity, the design of experiments for the manufacturing cell design problem includes only the main effects of the eight factors, with two levels for each factor, which requires the orthogonal array L12 of the TM. It should be noted that, since in L12 the interaction effects of factors are spread evenly across all columns, higher orthogonal arrays should be used when interaction effects are to be incorporated into a manufacturing cell design problem. The layout of the design is delineated in Table 3, which also shows the allocation of the effects of interest to the columns.
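The L12 array called for here is equivalent, up to relabeling the levels 1/2 as +/−, to the 12-run Plackett-Burman design, which can be generated from a cyclic generator row. A sketch with the balance check; the generator is the standard Plackett-Burman one, not reproduced from this chapter:

```python
from itertools import combinations

def l12():
    """Generate a 12-run, 11-column, two-level array (levels 1 and 2),
    equivalent to L12, from the cyclic Plackett-Burman generator row."""
    gen = [1, 1, 2, 1, 1, 1, 2, 2, 2, 1, 2]  # '+' mapped to 1, '-' mapped to 2
    rows = [[gen[(j - i) % 11] for j in range(11)] for i in range(11)]
    rows.append([2] * 11)  # the 12th run sets every factor to level 2
    return rows

def pair_counts_ok(rows):
    """Every pair of columns must contain each of the 4 level pairs 3 times."""
    for c1, c2 in combinations(range(11), 2):
        counts = {}
        for r in rows:
            counts[(r[c1], r[c2])] = counts.get((r[c1], r[c2]), 0) + 1
        if sorted(counts.values()) != [3, 3, 3, 3]:
            return False
    return True

array = l12()
print(len(array), pair_counts_ok(array))  # → 12 True
```

Because every level pair occurs equally often for every pair of columns, the 11 main effects can be estimated free of mutual confounding, which is exactly the property the text relies on.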
Table 3
Manufacturing Cell Design Parameters and Their Levels.

Sym.  Design Parameter                           Low Level (1)                       High Level (2)
A     Machine setup policy                       All machines in a workstation       Only one machine in a workstation
                                                 must be set up to process a job     must be set up to process a job
B     Number of operators                        2                                   4
C     Assignment of operators to workstations    Operators can work at any           Half of the operators work at
                                                 workstation                         workstations 1 and 3; the other
                                                                                     half work at the remaining
                                                                                     workstations
D     Size of buffer #1                          10                                  20
E     Size of buffer #2                          10                                  20
F     Size of buffer #3                          25                                  50
G     Size of buffer #4                          1                                   5
H     Size of buffer #5                          1                                   5
6.2. Statistical Analysis via the Taguchi Method

Twelve simulation runs are conducted; their parameter settings and the resulting values of the four performance measures are shown in Table 4. To investigate the significance of the effects, the analysis of variance (ANOVA) table is constructed and presented in Table 5. Since some columns have insignificant effects on a performance measure compared with the others, they are pooled into an error term, indicated by "p" in the table. For example, for job tardiness the effects of the six columns 5, 6, 7, 9, 10, and 11 are pooled into the error term because of their very small mean sums of squares, so that the pooled error has 6 degrees of freedom. From the F-test with F(1, 6; 0.05) = 5.99 at the 0.05 significance level, some main effects are found to be significant; these significant effects are marked in Table 5. For instance, the significant factors for the job tardiness performance measure are the machine setup policy (A), the number of operators (B), the assignment of operators to workstations (C), and buffer storage #1 (D). Since the sizes of buffer storages #2, #3, and #5 do not have significant effects on any performance measure, they are set at their lowest levels.
Table 4
Experimental settings and response values of the 12 test runs for the four performance measures.

Test  M/C     No. of   Worker   Buffer  Buffer  Buffer  Buffer  Buffer  Job Tardi-  Prod. Rate  WIP    System
Run   Setup   Workers  Assign-  No. 1   No. 2   No. 3   No. 4   No. 5   ness (hrs)  (pcs/hr)    (pcs)  Util. (%)
No.   Policy           ment
1     1       2        1        10      10      25      1       1       3.74        4.97        2.50   36.37
2     1       2        1        10      10      50      5       5       3.94        4.98        2.57   36.49
3     1       2        2        20      20      25      1       1       6.88        4.41        16.88  31.89
4     1       4        1        20      20      25      5       5       2.17        8.17        20.83  60.21
5     1       4        2        10      20      50      1       5       2.03        8.00        17.20  58.63
6     1       4        2        20      10      50      5       1       2.21        8.05        19.89  58.89
7     2       2        2        20      10      25      5       5       16.40       3.55        26.29  26.28
8     2       2        2        10      20      50      5       1       14.31       3.28        15.15  26.81
9     2       2        1        20      20      50      1       5       13.77       2.62        4.48   34.44
10    2       4        2        10      10      25      1       5       9.67        5.85        27.07  47.27
11    2       4        1        20      10      50      1       1       10.52       5.39        35.96  41.57
12    2       4        1        10      20      25      5       1       8.10        6.31        31.27  53.26
Table 5
The ANOVA table (F values) from the results shown in Table 4.

Col. no.            Degrees of   F Values
in OA    Source     Freedom      Job Tardiness   WIP     Prod. Rate   System Util.
1        A          1            582.30*         7.53*   180.24*      14.75*
2        B          1            582.30*         7.53*   180.24*      14.75*
3        C          1            18.61*          1.28    p            0.83
4        D          1            22.40*          1.69    1.94         p
5        E          1            p               p       p            1.79
6        F          1            p               1.81    p            p
7        G          1            p               p       12.92*       p
8        H          1            1.07            p       p            1.11
9        Dummy      1            p               p       p            p
10       Dummy      1            p               p       p            p
11       Dummy      1            p               p       p            p

"p" denotes an effect pooled into the error term; an asterisk marks an F value significant at the 0.05 level (F(1, 6; 0.05) = 5.99).
From a regression analysis, the following equations corresponding to the four performance measures are established:

(1) Job tardiness (y1): y1 = -5.398 + 9.384 x1 - 1.829 x2 + 2.221 x3 + 0.106 x4,
(2) WIP (y2): y2 = -16.58 + 10.05 x1 + 7.021 x2,
(3) Production rate (y3): y3 = 2.429 + 0.571 x1 + 0.286 x2 - 0.429 x5,
(4) System utilization (y4): y4 = 29.87 - 12.257 x1 + 10.008 x2,

where x1, ..., x5 are the machine setup policy, the number of operators, the assignment policy of operators to workstations, the size of buffer #1, and the size of buffer #4, respectively.
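These fitted metamodels can be expressed as plain functions, which is how they are used in the compromise programming step of Section 6.3; a sketch with the coefficients restated from the equations above:

```python
def metamodels(x1, x2, x3, x4, x5):
    """Regression metamodels for the four performance measures,
    fitted from the Table 4 simulation results."""
    y1 = -5.398 + 9.384 * x1 - 1.829 * x2 + 2.221 * x3 + 0.106 * x4  # job tardiness
    y2 = -16.58 + 10.05 * x1 + 7.021 * x2                            # WIP
    y3 = 2.429 + 0.571 * x1 + 0.286 * x2 - 0.429 * x5                # production rate
    y4 = 29.87 - 12.257 * x1 + 10.008 * x2                           # system utilization
    return y1, y2, y3, y4

# Evaluating at x = (1, 3, 1, 1, 1) reproduces, up to rounding in the text,
# the achieved goal vector reported in Section 6.3.
print([round(y, 3) for y in metamodels(1, 3, 1, 1, 1)])
# → [0.826, 14.533, 3.429, 47.637]
```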
6.3. The Design of a Manufacturing Cell Using Compromise Programming

With the regression equations for the four performance measures presented in Section 6.2, the manufacturing cell design problem can be formulated as follows:

Problem 1 (P1):
(1) Objective functions:
i) Job tardiness (y1): Min y1 = -5.398 + 9.384 x1 - 1.829 x2 + 2.221 x3 + 0.106 x4,
ii) WIP (y2): Min y2 = -16.58 + 10.05 x1 + 7.021 x2,
iii) Production rate (y3): Max y3 = 2.429 + 0.571 x1 + 0.286 x2 - 0.429 x5,
iv) System utilization (y4): Max y4 = 29.87 - 12.257 x1 + 10.008 x2;
(2) Constraints:
1 ≤ x1 ≤ 2, 1 ≤ x2 ≤ 6, 1 ≤ x3 ≤ 2, 1 ≤ x4 ≤ 50, 1 ≤ x5 ≤ 15,
x1, ..., x5 all integer.

The boundaries of the variables x4 and x5 in Problem P1 are set through prior analysis. It should also be noted that, in a real manufacturing system, these boundaries should reflect physical constraints, such as space limitations on buffer storage and a target manufacturing cost for the cell operations. A model for designing the manufacturing cell with the four performance objectives is then formulated in the CP mathematical form presented in Section 5. Let F be the constraint set shown above, and denote the function of the i-th performance measure and its target value by y_i = f_i(x) and y_i*, respectively. The CP model for the manufacturing cell design problem is:

Minimize    r(y|p,w) = Σ_{i=1}^{4} (ω_i / k_i)^p ( d_i^- + d_i^+ )^p
Subject to  y_i* − f_i(x) = d_i^- − d_i^+,  for all i = 1, ..., 4,
            x ∈ F,

where x = (x1, x2, x3, x4, x5). An eigenvalue method is used to estimate the ω_i's. Such a method has been used successfully in the Analytic Hierarchy Process (AHP) (Saaty (1977)), and a computer software package, Expert Choice (Forman et al. (1983)), is available. (It should be noted that another method, the Multiple Attribute Utility Technique (Keeney and Raiffa (1976)), can also be used to obtain weights.) On the basis of pairwise comparisons of the four objectives, Expert Choice, using the eigenvalue method, generates the weights ω1 = 0.288, ω2 = 0.081, ω3 = 0.477, and ω4 = 0.154. The normalization factors k_i are calculated as |best value − worst value| of the goal measures shown in Table 4, giving k1 = 14.37, k2 = 33.46, k3 = 5.55, and k4 = 33.93. The regret function to be minimized is then:
r(y) = 0.0200 (d1^- + d1^+) + 0.0024 (d2^- + d2^+) + 0.0859 (d3^- + d3^+) + 0.0045 (d4^- + d4^+).

With an initial target of y* = (0.25, 0.25, 5.0, 80.0) and p = 1, a compromise design is obtained at x = (1, 3, 1, 1, 1). The achieved goal vector is y = (0.826, 14.530, 3.429, 47.637).
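With p = 1 the regret coefficients are simply ω_i/k_i; a sketch that recomputes them from the stated weights and normalization factors and evaluates the regret of the compromise design above:

```python
# Weights from the eigenvalue method and normalization factors from Table 4.
w = [0.288, 0.081, 0.477, 0.154]
k = [14.37, 33.46, 5.55, 33.93]
coef = [wi / ki for wi, ki in zip(w, k)]
print([round(c, 4) for c in coef])  # → [0.02, 0.0024, 0.0859, 0.0045]

def regret(y, y_star, coef):
    """p = 1 regret: weighted sum of absolute deviations (d- + d+)."""
    return sum(c * abs(yi - ti) for c, yi, ti in zip(coef, y, y_star))

# Achieved goals of the compromise design x = (1, 3, 1, 1, 1) versus the
# initial targets y* = (0.25, 0.25, 5.0, 80.0).
y = (0.826, 14.530, 3.429, 47.637)
y_star = (0.25, 0.25, 5.0, 80.0)
print(round(regret(y, y_star, coef), 3))  # → 0.328
```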
The manufacturing cell (MC) designer is not satisfied with the high level of WIP in the current design. To improve the level of WIP, the MC designer tightens the WIP target from 0.25 to 0.0 at the expense of the targets of the other objectives. For example, the target for job tardiness is relaxed from 0.25 to 1.0. Similarly, after the targets for throughput rate and utilization are adjusted, the new target vector becomes y* = (1.0, 0.0, 2.5, 50.0). A new design is then obtained at x = (1, 2, 1, 1, 1), with a new achieved goal vector of y = (2.655, 7.510, 3.143, 37.629). Note that WIP is improved by 48.3% at the expense of the other three objectives; for example, the throughput rate deteriorates by 8.3%. Next, the MC designer attempts to explore the possibility of improving the levels of throughput and utilization. For this purpose, a new target is set at y* = (1.0, 0.0, 3.5, 60.0). The compromise MC design obtained by CP remains unchanged under this target. Thus, the current design is accepted as a satisfactory solution, and the compromising process terminates. The MC is therefore designed as follows:
(1) Machine setup policy: all machines in a workstation are set up to process a job,
(2) Number of operators: 2,
(3) Assignment of operators to workstations: operators can work at any workstation,
(4) Size of buffer storage #1: 1,
(5) Size of buffer storage #2: 10,
(6) Size of buffer storage #3: 25,
(7) Size of buffer storage #4: 1,
(8) Size of buffer storage #5: 1.
The above iterative process of setting targets and searching for better solutions is likely to better represent an MC designer's compromising behavior in trading off multiple conflicting goals.

7. CONCLUDING REMARKS

This paper presents a new method for designing a manufacturing cell (MC) with multiple performance objectives via a simulation-based experimental design method and Compromise Programming.
The Taguchi Method is employed to establish an efficient design of experiments, which reduces the computational effort while still obtaining enough statistical information for designing the MC. After conducting all test runs based on the Taguchi Method, the multiple performance objectives are formulated through a regression analysis. Finally, the design parameters of the multiple objective MC problem are determined simultaneously using Compromise Programming, which resolves the dilemma of the conflicting objectives. This is the first approach of its kind to solve a multi-objective MC design problem by employing both the Taguchi Method and Compromise Programming. Numerical results from an application of the method to an MC with six workstations have shown that it allows MC designers to compromise interactively among conflicting performance objectives while determining system parameters with a significantly reduced number of simulation experiments. Although, for simplicity, only the main effects of the design parameters are modeled in the numerical example, interaction effects of design parameters can be studied by using different orthogonal arrays in the Taguchi Method or other experimental design methods such as a full factorial design. The proposed method can be extended to include not only design parameters but also scheduling parameters, so that MC design and scheduling problems can be solved at the same time in order to implement a globally optimal MC.

REFERENCES
1. Hyer, N. L., The Potential of Group Technology for U.S. Manufacturing, in Capabilities of Group Technology, edited by N. L. Hyer, Society of Manufacturing Engineers, Dearborn, MI (1987) 391-406.
2. Kusiak, A. and Chow, W. S., Efficient Solving of the Group Technology Problem, J. of Manufacturing Systems, 6 (1987) 117-124.
3. Hyer, N. L. and Wemmerlov, U., Group Technology in the US Manufacturing Industry: A Survey of Current Practices, Int. J. Prod. Res., 27 (1989) 1287-1304.
4. Wemmerlov, U., Production Planning and Control Procedures for Cellular Manufacturing Systems: Concepts and Practice, American Production and Inventory Control Society, Falls Church, Virginia (1988).
5. Chu, C. H., Cluster Analysis in Manufacturing Cellular Formation, Omega, 17 (1989) 289-295.
6. King, J. R., Machine-component Group Formation in Group Technology, Omega, 8 (1979) 193-199.
7. King, J. R., Machine-component Grouping Using ROC Algorithm, Int. J. of Prod. Res., 18 (1980) 213-231.
8. Chandrasekharan, M. P. and Rajagopalan, R., MODROC - An Extension of Rank Order Clustering for Group Technology, Int. J. Prod. Res., 24 (1986a) 1221-1233.
9. Chan, H. M. and Milner, D. A., Direct Clustering Algorithm for Group Formation in Cellular Manufacture, J. of Manufacturing Systems, 1 (1982) 65-75.
10. McAuley, J., Machine Grouping for Efficient Production, The Production Engineer, 52 (1972) 53-57.
11. Mosier, C. and Taube, L., Weighted Similarity Measure Heuristics for the Group Technology Clustering Problem, Omega, 13 (1985) 577-579.
12. Seifoddini, H. and Wolfe, M. P., Application of the Similarity Coefficient Method in Group Technology, IIE Transactions, 18 (1986) 271-277.
13. Waghodekar, P. H. and Sahu, S., Machine-component Cell Formation in Group Technology: MACE, Int. J. of Prod. Res., 12 (1984) 937-948.
14. Wei, J. C. and Kern, G. M., Commonality Analysis: A Linear Clustering Algorithm for Group Technology, Int. J. of Prod. Res., 12 (1989) 2053-2062.
15. Luong, L. H. S., A Cellular Similarity Coefficient Algorithm for the Design of Manufacturing Cells, Int. J. of Prod. Res., 31 (1993) 1757-1766.
16. Chow, W. S. and Hawaleshka, O., Minimizing Intercellular Part Movements in Manufacturing Cell Formation, Int. J. of Prod. Res., 31 (1993) 2161-2170.
17. Stanfel, L., Machine Clustering for Economic Production, Engineering Costs and Production Economics, 9 (1985) 73-81.
18. Chandrasekharan, M. P. and Rajagopalan, R., An Ideal Seed Nonhierarchical Clustering Algorithm for Cellular Manufacturing, Int. J. Prod. Res., 24 (1986b) 451-464.
19. Chandrasekharan, M. P. and Rajagopalan, R., ZODIAC - An Algorithm for Concurrent Formation of Part-families and Machine Cells, Int. J. Prod. Res., 25 (1987) 835-850.
20. Srinivasan, G. and Narendran, T. T., GRAFICS - A Nonhierarchical Clustering Algorithm for Group Technology, Int. J. of Prod. Res., 29 (1991) 463-478.
21. Rajagopalan, R. and Batra, J. L., Design of Cellular Production Systems - A Graph Theoretic Approach, Int. J. of Prod. Res., 13 (1975) 567-579.
22. Vanelli, A. and Kumar, K. R., A Method for Finding Minimal Bottle-neck Cells for Grouping Part-machine Families, Int. J. of Prod. Res., 24 (1986) 387.
23. Ballakur, A. and Steudel, H. J., A Within-cell Utilization Based Heuristic for Designing Cellular Manufacturing Systems, Int. J. of Prod. Res., 25 (1987) 639.
24. Kusiak, A., The Generalized Group Technology Concept, Int. J. of Prod. Res., 25 (1987) 561-569.
25. Choobineh, F., A Framework for the Design of Cellular Manufacturing Systems, Int. J. of Prod. Res., 26 (1988) 1161-1172.
26. Shtub, A., Modelling Group Technology Cell Formation as a Generalized Assignment Problem, Int. J. of Prod. Res., 27 (1989) 775-782.
27. Gunasingh, K. R. and Lashkari, R. S., Machine Grouping Problem in Cellular Manufacturing Systems - An Integer Programming Approach, Int. J. of Prod. Res., 27 (1989) 1465-1473.
28. Rajamani, D., Singh, N., and Aneja, Y. P., Integrated Design of Cellular Manufacturing Systems in the Presence of Alternative Process Plans, Int. J. of Prod. Res., 28 (1990) 1541-1554.
29. Srinivasan, G., Narendran, T. T., and Mahadevan, B., An Assignment Model for the Part-families Problem in Group Technology, Int. J. of Prod. Res., 28 (1990) 145-152.
30. Kusiak, A., EXGT-S: A Knowledge Based System for Group Technology, Int. J. of Prod. Res., 26 (1988) 887-904.
31. Xu, H. and Wang, H., Part Family Formation for GT Applications Based on Fuzzy Mathematics, Int. J. of Prod. Res., 27 (1989) 1637-1651.
32. Tabucanon, M. T. and Ojha, R., ICRMA - A Heuristic Approach for Intercell Flow Reduction in Cellular Manufacturing Systems, Material Flow, 4 (1987) 189-197.
33. Harhalakis, G., Nagi, R., and Proth, J. M., An Efficient Heuristic in Manufacturing Cell Formation for Group Technology Applications, Int. J. of Prod. Res., 28 (1990) 185-198.
34. Wemmerlov, U. and Hyer, N. L., Research Issues in Cellular Manufacturing, Int. J. of Prod. Res., 25 (1987) 413-432.
35. Wemmerlov, U. and Hyer, N. L., Cellular Manufacturing in the U.S. Industry: A Survey of Users, Int. J. of Prod. Res., 27 (1989) 1511.
36. Sarin, S. C. and Chen, C. S., The Machine Loading and Tool Allocation Problems in a Flexible Manufacturing System, Int. J. Prod. Res., 25 (1987) 1081-1094.
37. Co, H. C., Biermann, J. S., and Chen, S. K., A Methodical Approach to the Flexible-Manufacturing-System Batching, Loading, and Tool Configuration Problems, Int. J. Prod. Res., 28 (1990) 2171-2186.
38. Snead, C. S., Group Technology: Foundation for Competitive Manufacturing, Van Nostrand Reinhold, New York (1989).
39. Logendran, R., Impact of Sequence of Operations and Layout of Cells in Cellular Manufacturing, Int. J. Prod. Res., 29 (1991) 375.
40. Black, J. T., An Overview of Conventional Manufacturing Systems and Comparisons of Conventional Systems, Industrial Engineering, 15, 11 (1983) 89-93.
41. Stecke, K. E., Machine Interference: The Assignment of Machines to Operators, in Handbook of Industrial Engineering, edited by Gavriel Salvendy, John Wiley & Sons, New York (1982).
42. Stecke, K. E. and Aronson, J. E., Review of Operator/Machine Interference Models, Int. J. of Prod. Res., 23 (1985) 129-151.
43. Elsayed, E. A. and Kao, T. Y., Machine Assignments in Production Systems with Manufacturing Cells, Int. J. of Prod. Res., 28 (1990) 489-501.
44. Gershwin, S. B. and Schick, I. C., Modeling and Analysis of Three-Stage Transfer Lines with Unreliable Machines and Finite Buffers, Operations Research, 31 (1983) 354-380.
45. Yeralan, S. and Muth, E. J., A General Model of a Production Line with Intermediate Buffer and Station Breakdown, IIE Transactions, 19, 2 (1987) 130-139.
46. Choong, Y. F. and Gershwin, S. B., A Decomposition Method for the Approximate Evaluation of Capacitated Transfer Lines with Unreliable Machines and Random Processing Times, IIE Transactions, 19 (1987) 150-159.
47. Chow, W., Buffer Capacity Analysis for Sequential Production Lines with Variable Process Times, Int. J. Prod. Res., 25 (1987) 1183-1196.
48. Ho, Y. C., Eyler, M. A., and Chien, T. T., A Gradient Technique for General Buffer Storage Design in a Production Line, Int. J. Prod. Res., 17 (1979) 557-580.
49. Altiok, T. and Stidham, S., The Allocation of Interstage Buffer Capacities in Production Lines, IIE Transactions, 15, 4 (1983) 292-299.
50. Park, T., A Two-phase Heuristic Algorithm for Determining Buffer Sizes of Production Lines, Int. J. of Prod. Res., 31 (1993) 613-631.
51. Smith, M. L., Ramesh, R., Dudek, R. A., and Blair, E. L., Characteristics of U.S. Flexible Manufacturing Systems - A Survey, Proceedings of the Second ORSA/TIMS Conference on FMS, Ann Arbor, MI (1986) 477-486.
52. Lee, S. M. and Jung, H., A Multi-objective Production Planning Model in a Flexible Manufacturing Environment, Int. J. Prod. Res., 27 (1989) 1981-1992.
53. Dean, B. V., Yu, Y., and Schniederjans, M. J., A Goal Programming Approach to Production Planning for Flexible Manufacturing Systems, J. of Engineering and Technology Management, 6 (1990) 207-220.
54. Ro, I. and Kim, J., Multi-objective Operational Control Rules in Flexible Manufacturing Systems, Int. J. Prod. Res., 28 (1990) 47-63.
55. Welke, H. A. and Overbeeke, J., Cellular Manufacturing: A Good Technique for Implementing Just-in-Time and Total Quality Control, Industrial Engineering, 20, 9 (1988) 36-41.
56. Harshell, J. and Dahl, S., Simulation Model Developed to Convert Production to Cellular Manufacturing Layout, Industrial Engineering, 20, 12 (1988) 40-45.
57. Schmidt, M. S. and Meile, L. C., Taguchi Designs and Linear Programming Speed New Product Formulation, Interfaces, 19 (1989) 49-56.
58. Chanin, M. N., Kuei, C., and Lin, C., Using Taguchi Design, Regression Analysis, and Simulation to Study Maintenance Float Systems, Int. J. Prod. Res., 28 (1990) 1939-1953.
59. Lee, H., Shi, Y., and Stolen, J., Allocating Data Files over a Wide Area Network: Goal Setting and Compromise Design, Information and Management, forthcoming.
60. Barker, T. B., Quality Engineering by Design: Taguchi's Philosophy, Quality Progress (Dec. 1986) 32-42.
61. Kackar, R. N., Off-Line Quality Control, Parameter Design, and the Taguchi Method, J. of Quality Technology, 17 (1985) 176-188.
62. Taguchi, G., System of Experimental Design, Vols. 1 and 2, American Supplier Institute, Inc., Dearborn, MI (1987).
63. Box, G. E. P., Bisgaard, S., and Fung, C. A., An Explanation and Critique of Taguchi's Contributions to Quality Engineering, Quality and Reliability Engineering International, 4 (1988) 123-131.
64. Zeleny, M., An Essay into a Philosophy of MCDM: A Way of Thinking or Another Algorithm?, Computers & Ops. Res., 19 (1992) 563-566.
65. Romero, C., Handbook of Critical Issues in Goal Programming, Pergamon Press, Oxford (1991).
66. Shi, Y. and Yu, P. L., Goal Setting and Compromise Solutions, in G. Karpak and S. Zionts (Eds.), Multiple Criteria Decision Making and Risk Analysis Using Microcomputers, Springer-Verlag, Berlin (1989).
67. Yu, P. L., Multiple-Criteria Decision Making: Concepts, Techniques and Extensions, Plenum Press, New York (1985).
68. Zeleny, M., A Concept of Compromise Solutions and the Method of the Displaced Ideal, Computers and Operations Research, 1 (1974) 479-496.
69. Saaty, T. L., A Scaling Method for Priorities in Hierarchical Structures, J. of Mathematical Psychology, 15 (1977) 234-281.
70. Steudel, H. J. and Park, T., A Flexible Manufacturing Cell Simulator, Proceedings of the 1987 Winter Simulation Conference (1987) 230-234.
71. Forman, E. H., Saaty, T. L., Selly, M. A., and Waldron, R., Expert Choice, Decision Support Software, Inc., Pittsburgh (1983).
72. Keeney, R. L. and Raiffa, H., Decisions with Multiple Objectives: Preferences and Value Tradeoffs, Wiley, New York (1976).
Planning, Design, and Analysis of Cellular Manufacturing Systems
A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors)
© 1995 Elsevier Science B.V. All rights reserved.
Machine Sharing in Cellular Manufacturing Systems

Saifallah Benjaafar†
Department of Mechanical Engineering, University of Minnesota, Minneapolis, Minnesota 55455

Abstract: In this chapter, we study the effect of machine sharing on the performance of traditional cellular manufacturing systems. We present analytical models for evaluating the desirability of sharing machines of the same type among two or more cells. The impact of machine sharing on several performance measures, such as machine utilization, cell production rates, and part flow times, is investigated under varying conditions of system loading, setup time, batch size, and demand and processing variability. Conditions under which machine sharing may be of value, and the operating requirements necessary for realizing this value, are identified. A methodology for trading off the costs and benefits of machine sharing and determining optimal sharing levels is also presented. The sensitivity of these results to different scheduling policies is examined.
1. INTRODUCTION

In traditional approaches to the design and operation of cellular manufacturing systems, it is typically assumed that each cell in the system should ideally be dedicated to a specific part family. Using techniques such as group technology (GT) and production flow analysis (PFA), an existing functional layout, where machines are grouped according to similarity in their processing capabilities, is partitioned into a set of independent cells, each assigned to a group of parts with similar processing requirements. The cells are then configured so as to operate as semi-autonomous flow lines. The motivation behind dedicating cells to families lies, among other factors, in the reduction in material handling and setup times, the decrease in batch sizes, the simplification of scheduling and production control, and the more efficient allocation of floor space and workers that are usually anticipated from converting a job shop layout to a cellular organization. In turn, these benefits are expected to translate into lower production costs, shorter lead times, lower work-in-process inventories, and greater worker satisfaction. Reports exist of a number of instances where such benefits have indeed been realized by manufacturing companies [54] [22]. The empirical evidence remains, however, limited and inconclusive [54]. In fact, much of the reported success has been in highly stable environments with easily identifiable part families and sufficiently high and predictable production volumes [3] [50] [51]. Traditional implementation of cellular manufacturing becomes problematic under more variable and dynamic conditions. For example, it is unclear how a purely cellular structure could accommodate part families with fluctuating demand and/or unpredictable part mix composition.

† The author's research was supported in part by the National Science Foundation under grant No. DMII-9309631 and the University of Minnesota Graduate School.

It is also unclear how
independent cells could be constructed for parts with relatively short life cycles and/or frequent design and manufacturing changes. Furthermore, independent cells appear to result in inefficiencies due to the possible duplication of processing capacity among cells with similar requirements [28]. Since production volumes for part families rarely correspond to integer machine requirements, duplication of the same machines among different cells could result in poor capacity utilization and higher production costs. During operation, because intercell flows are discouraged, additional inefficiencies could result from not taking advantage of alternative part routing options that are increasingly enabled by computer integration and automated material handling and tool management systems. The lack of cooperation among cells also increases the vulnerability of the manufacturing system in the face of machine breakdowns and labor shortages. These limitations have prompted a number of researchers to challenge the wisdom of a purely cellular shop structure and to advocate instead either a return to a more efficient job shop or the adoption of a hybrid structure that combines elements of both cells and job shops. This is apparently supported by current industrial practice, as noted in a recent survey by Wemmerlov and Hyer [54]: "13 of the 27 companies (48%) reported that 5% or less of their annual machine hours were spent in cells. Similarly, 63% of the firms reported that 15% or less, and 74% reported that 25% or less of the annual hours were spent in cells ... 20% of the companies with manned cells and 14% of those with unmanned cells reported that machines were shared between cells ... For the parts processed in the cells, the average percentage of total machining time accomplished in the cells was 78.3%." Other studies, using computer simulation, showed that less dedicated cells can yield significant benefits.
For example, Ang and Wiley [1] showed that allowing some degree of intercell flow can dramatically improve system performance. Flynn and Jacobs [19] [20] demonstrated, using specific shop data, that flow time and work-in-process (WIP) performance can be inferior in systems with dedicated cells. Morris and Tersine [38] produced similar results by comparing the performance of a cellular layout to that of a process layout under varying operating assumptions for setup time, material handling, and demand stability. They found the process layout to be superior in terms of flow time and WIP under most operating conditions, with the exception of environments where setup times are significantly higher than processing times and demand levels are stable. These results are corroborated in a more recent study by Sarper and Greene [41]. Burgess et al. [12] compared a factory organized as a traditional job shop with the same factory structured as a hybrid factory containing a cellular manufacturing unit and a remainder job shop. The hybrid factory was shown to perform better than the traditional job shop when an appropriate allocation of resources between the cell and the remainder workcenters is made. Other hybrid structures have been proposed by Lee [33], Suresh [50] [51], Irani [28] and Drolet [15]. Specifically, Lee [33] showed that larger cells, using multiserver facilities, perform better than smaller, excessively partitioned cells. A similar result is obtained by Suresh [50] [51], who shows that larger cells where several part families share the same resources can result in important reductions in flow time and work-in-process inventory. Along the same lines, Irani [28] proposes the principle of overlapping or virtual cells as an alternative to the traditional emphasis on machine duplication in creating independent cells.
In a virtual cellular layout, machines that are common to more than one cell are made accessible to each of these cells either by rearranging the layout of the individual cells, so that intercell distances are minimized, or by retaining these machines in a centrally located functional section. Examples of cells with shared machines are given in Figure 1. This concept of virtual cells has its origins in a control architecture proposed by the National Institute of Standards and Technology (NIST) for its Automated Manufacturing Research Facility (AMRF) [36] [44]. A virtual cell is a logical grouping of machines created by the system
controller and assigned temporarily to the production of similar jobs. Upon completing these jobs, the cell may be disbanded or reconfigured into a different cell. A machine may belong to more than one virtual cell at a time, so that much of the resources can be shared in real time. The feasibility and desirability of virtual cells for small batch production was demonstrated in a recent study by Drolet [15], where a new scheme for scheduling and control of such systems is proposed. Example virtual cells are illustrated in Figure 2. In this chapter, we examine the impact of machine sharing, as advocated by Suresh [50] [51], Irani [28], and Lee [33], on the performance of traditional cellular systems. We present an analytical model for evaluating the desirability of sharing machines of the same type among two or more cells. Machines that are shared are assumed to be grouped in a functional section that is equally accessible to the associated cells. This may require rearranging the internal layout of the cells, so that shared machines are made adjacent to each other, and/or readjusting the overall layout of the cells, so that intercell distances between overlapping cells are minimized. Performance measures that we consider include machine utilizations, cell production rates, part flow times, and levels of work-in-process inventories. The effect of machine sharing on these measures is studied under varying conditions of system loading, setup time, batch sizes, and demand and processing variability. More specifically, we address the following questions:

• What are the key system design and operation parameters that need to be considered in order to accurately capture the impact of machine sharing on manufacturing performance?
• Which system performance measures, if any, are positively correlated with the level of machine sharing in a system, and what are the characteristics of such correlations?
• What are the conditions under which machine sharing may be of value, and what are the operating requirements necessary for realizing this value?
• How can the costs and benefits of machine sharing be traded off, and optimal and/or affordable levels of machine sharing be determined?

2. LITERATURE REVIEW

The issue of machine sharing has for the most part been addressed in the context of studies of manufacturing flexibility [3]. In these studies, machine sharing is seen to arise from either machine or part routing flexibility, where machine flexibility refers to the capability of a machine to process a variety of part types and part routing flexibility is defined by the possibility of routing a part to more than one machine [41] [2] [24] [25]. Much of the work on machine and routing flexibilities is concerned with developing measures of these flexibilities. Examples are many and include [32], [56], [35], and [16]. Evaluative models of flexibility are fewer and typically limited to simulation models. Examples can be found in [57], [34], [27], [11], [37], [55], [40], [24], [39], and [18]. Most of these models tend to agree that flexibility can be beneficial to system performance (e.g., lower lead times, higher machine utilizations, smaller inventories, etc.). Limiting factors to the benefits of flexibility, such as higher setup times, larger lot sizes, and longer material handling times, are generally not taken into account in these models. A number of analytical models have been proposed for the general performance evaluation of manufacturing systems [8] [9] [10]. However, few of these models specifically address the issue of machine sharing. Stecke and Solberg [48], Dallery and Stecke [17] and Stecke and Kim [47] examine the issue of resource pooling in the context of closed queuing network models of FMSs, and find pooling to generally increase system throughput.
Calabrese [13] extends this result to open queuing network models of job shops and shows that work-in-process inventories can be reduced by increasing machine pooling. Smith and Whitt [46] and Benjaafar [2] offer a more general discussion of the effect of resource pooling in queuing systems. Buzacott [5] examines the effect of machine and routing flexibility using a stochastic model with two part types and finds setup times and product mix variety to
Figure 1. Traditional versus overlapping cellular layouts: (a) a traditional cellular layout; (b) a cellular layout with machine sharing

Figure 2. A virtual cellular layout
limit the effectiveness of flexibility. This model is extended in [6] and [7], where the effect of scheduling rules is also considered. Karmarkar [29] and Karmarkar et al. [30] use a queuing model of a single-part/single-machine system to study the relationships between batch sizes and lead times. This model is extended in [31] to the multi-item/multi-machine case. A similar model is also independently proposed by Zipkin [58]. These models do not, however, address machine sharing. In the cellular manufacturing literature, the issue of machine sharing has in the last few years become a subject of some debate. In particular, an increasing number of authors have been challenging the wisdom of partitioning machines into dedicated production cells and have been advocating instead a greater degree of resource sharing and cooperation between cells [19] [20] [38] [28] [1] [50] [51] [12]. For the most part, these challenges are based on simulation studies contrasting job shops, cellular systems and hybrids of these systems. As mentioned in the previous section, the findings are generally consistent and show that rigid dedication of cells can result in unbalanced machine utilizations, higher lead times, and larger work-in-process inventories. This is particularly the case when production demand is variable and/or product mix variety is high. Dedicated cellular systems are found to be more effective only when setup times are high, batch sizes are small and demand for each part family is sufficiently high and stable [42] [21].

3. MODELING AND ANALYSIS OF MACHINE SHARING

In order to evaluate the effect of machine sharing on the performance of traditional cellular systems, we consider a set of m machines of the same type which can potentially be grouped into a set of n machine pools. Each machine m_i (i = 1, 2, ..., m) is initially associated with a single part type or family P_i.
Upon grouping, part types that are initially assigned to individual machines within specific cells can arbitrarily be assigned to any machine in the group. The level of machine sharing in a group is determined by the group size (i.e., the number of machines in the group and the corresponding number of part types). Shared machines lose their cell identity and are treated as a resource common to the cells. Machines are assumed to be equally accessible to the cells, so that material handling penalties due to intercell flows are negligible. As mentioned earlier, this is made possible by a rearranged layout of the cells and/or of the machines within the cells. Grouped machines share a single queue where parts in need of processing wait in first-come, first-served order and are assigned to the first available machine regardless of the part's and the machine's cell membership (an alternative batch sequencing scheme is discussed in section 6). Parts are produced in batches of size B (B = 1, 2, ...). Batches arrive dynamically and independently to the system with an average arrival rate D_i/B, where D_i is the average demand per period for parts of type i. Since increasing the number of part types processed by a machine group will most likely increase setup times, we will assume that a batch incurs a minor production setup τ_minor (τ_minor ≥ 0) when the previous batch on the same machine is of the same type; otherwise it incurs a major setup τ_major such that τ_major > τ_minor. Minor setups are due to simple changeovers between batches (e.g., part placement and fixture positioning), while major setups may require changes in tooling, part programs, and fixtures as well as adjustment time for operators. The ratio of τ_major to τ_minor will depend on the degree of similarity between the different part types and the versatility of the machines.
In a system where part handling is automated and machines are highly flexible (e.g., machining centers), changeover times between different part types will be small. On the other hand, in a system handling a large variety of part types and relying on specialized manual labor for part fixturing, transportation and setup, changeover times could be significant. Setups could also be sequence dependent, so that the value of τ_major depends on the identity of the current and the previous batch.
Intuitively, it should be clear that machine sharing can potentially be beneficial to system performance due to the resulting increase in resource pooling. It should also be clear that these benefits can be eroded by the higher frequency of setups due to the increase in product mix variety. The interaction between these two opposing effects is, however, not clear. The impact of system operating parameters such as batch sizes and batch sequencing rules is also not known. In the remainder of this chapter, we set out to examine the various tradeoffs that arise from machine sharing and investigate the effect of other system operating parameters on the realization of the benefits, if any, of this sharing. To allow for a better understanding of the various effects that are at play during machine sharing, we divide our discussion into three sections. In the first section, we study, in the absence of setup times, the impact of machine pooling on various measures of system performance. In the second section, we evaluate the validity of these results when setup times are included and consider the effect of different batch sizing strategies on the desirability of machine sharing. In the last section, we examine the sensitivity of our findings to alternative scheduling rules. In particular, we study the effect of adopting a setup-based batch sequencing priority rule and its implications for various system performance measures. For the sake of brevity, we omit detailed proofs for many of the results. A full discussion of the various issues addressed in this chapter can be found in [2], [3] and [4].

4. THE POOLING EFFECT

In order to isolate the effect of machine pooling, we first consider the case where setup times are negligible (i.e., τ_minor = τ_major = 0). To allow for a fair comparison between different pooling scenarios, we assume that part average demands are identical, with D_i = D for all i = 1, 2, ...,
m, and that part processing requirements are homogeneous with a mean operation time 1/μ. These assumptions ensure that machines are balanced, with equal utilizations ρ_i = ρ = D/μ for all i = 1, 2, ..., m. For ease of exposition, we also assume initially that batch inter-arrival and processing times are independent and exponentially distributed. Given the above assumptions, the average batch flow time, which is also the average part flow time, in a machine group of k machines with batches of size B can be obtained as that of a multi-server queuing system (i.e., an M/M/k queue) and is given by [3]
W(k, B) = Bπ_{k,B}/(k(μ − D)) + B/μ,   (1)

where

π_{k,B} = [ Σ_{j=0}^{k−1} k!(1 − ρ_{k,B})/(j!(kρ_{k,B})^{k−j}) + 1 ]^{−1}.   (2)

From equality 1, it is easy to verify that machine sharing and production batching exert opposite effects on average flow time. The value of W(k, B) increases with B while it decreases with k. More specifically, we have

W(k, B) = BW(k, 1),   (3)

where W(k, 1) is the average flow time for a batch size of one. This result follows from the fact that in the absence of setups no benefits are obtained from increased batching. On the other hand, average flow time can be shown to be a strictly decreasing and convex function of k [3] [4]. In particular, average part waiting time in queue, Wq(k, B), is found to decrease by at least a factor of k when k machines are shared. That is,

Wq(k, B) ≤ Wq(1, B)/k   (4)
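As an illustrative check (ours, not the authors'), equalities 1 through 4 can be evaluated numerically in Python. The Erlang-C expression for π_{k,B} is written in its standard form, and the values D = 0.8 and μ = 1 are assumed purely for illustration.

```python
import math

def erlang_c(k, rho):
    # Probability that an arriving batch must wait (pi_{k,B}) in an M/M/k
    # queue with per-server utilization rho (equality 2, standard form).
    a = k * rho
    term = a**k / (math.factorial(k) * (1 - rho))
    return term / (sum(a**j / math.factorial(j) for j in range(k)) + term)

def flow_time(k, B, D=0.8, mu=1.0):
    # Average batch (= part) flow time W(k, B) with no setups (equality 1).
    pi = erlang_c(k, D / mu)
    return B * pi / (k * (mu - D)) + B / mu

def wait_q(k, B, D=0.8, mu=1.0):
    # Queueing component Wq(k, B) of the flow time.
    return flow_time(k, B, D, mu) - B / mu

# Equality 3: batching scales flow time linearly, W(k, B) = B * W(k, 1)
print(flow_time(2, 4), 4 * flow_time(2, 1))

# Equality 4: pooling k machines cuts waiting time by more than a factor of k
print(wait_q(2, 1), wait_q(1, 1) / 2)
```

For k = 1 the routine reduces to the familiar M/M/1 result W = 1/(μ − D); sharing just two machines already more than halves the waiting time, consistent with the discussion that follows.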
(a result that follows from the fact that π_{k,B} ≤ π_{1,B}). These results are illustrated in Figures 3 and 4 for various machine utilization levels. It is interesting to note that while batching has a linear effect on flow time, the effect of machine sharing is of the diminishing type. In fact, much of the reduction in flow time occurs at relatively low levels of sharing. Even a small increase in the number of shared machines thus has a significant effect on performance. For instance, with the grouping of only two machines, waiting time can be cut by more than 50%. Thus, when setup times are negligible and lead time related performance is important, a strategy of machine sharing should be pursued whenever possible. Note that because of the diminishing impact of machine sharing, only limited sharing is of significant value. Hence, a strategy of partial machine pooling would yield performance comparable to that of total pooling. It can also be seen that the effect of machine sharing is particularly important at high utilization levels. In fact, the amount of reduction in flow time can be shown to strictly increase with increases in machine utilization [3]. In addition to its effect on average performance, machine pooling can be shown to have an equally beneficial effect on performance variance. This can be seen by considering the variance of waiting time in queue, Vq(k, 1), for a machine group of size k. The value of Vq(k, 1) is given by [3]:

Vq(k, 1) = π_{k,1}(2 − π_{k,1})/[kμ(1 − ρ_{k,1})]²   (5)

and is a decreasing function of k. Noting that π_{k,1} ≤ π_{1,1}, we can easily show that the waiting time variance for k shared machines is always smaller than that of k dedicated machines by at least a factor of k². That is,
Vq(k, 1) ≤ Vq(1, 1)/k².   (6)

Thus, with the grouping of two machines, for example, queueing time variance is reduced by more than 75%. This means that fluctuation in workloads among different machines is drastically reduced and the possibility of both bottleneck and starved machines is minimized. The effect of sharing on waiting time variance is depicted in Figure 5. Note that the degree of reduction in variance is particularly significant under conditions of high utilization. The fact that waiting time variance is reduced results in a reduction of overall flow time variance, which in turn leads to greater consistency and predictability in lead time related performance. This is desirable in environments where being dependable in meeting due dates and having consistently short lead times is important. Furthermore, machine sharing can be shown to be an effective mechanism for dealing with system variability. In fact, the benefits of sharing can be shown to increase with increases in either demand or processing variability. This can be seen, for example, by considering the following approximation of average flow time for the same machine group described above when batch arrivals and processing times are generally distributed [3]:

W(k, 1) = π_{k,1}(1 + C_s²)(C_a² + ρ_{k,1}²C_s²)/[2kμ(1 − ρ_{k,1})(1 + ρ_{k,1}²C_s²)] + 1/μ,   (7)

where C_a² and C_s² represent respectively the squared coefficients of variation (i.e., the ratio of the variance to the squared mean) of customer inter-arrival and processing times. The value of C_a² indicates the degree of variability in the part arrival process, which can be due to demand variability and/or to variability in the time between part releases to the system. Similarly, the value of C_s² indicates the degree of variability in the part processing times, which can be due to either inherent variability in the process or to external interferences such as machine breakdowns, tool wear, and poor fixturing. The value of W(k, 1) can easily be shown to be decreasing in k. More importantly, the amount of performance improvement can
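The variance result in equalities 5 and 6 and the approximation in equality 7 can be sketched the same way (again a Python illustration of our own, with D = 0.8 and μ = 1 assumed):

```python
import math

def erlang_c(k, rho):
    # Erlang-C probability of queueing, as in equality 2.
    a = k * rho
    term = a**k / (math.factorial(k) * (1 - rho))
    return term / (sum(a**j / math.factorial(j) for j in range(k)) + term)

def wait_variance(k, D=0.8, mu=1.0):
    # Variance of waiting time in queue, Vq(k, 1), equality 5.
    rho = D / mu
    pi = erlang_c(k, rho)
    return pi * (2 - pi) / (k * mu * (1 - rho))**2

def flow_time_gg(k, ca2, cs2, D=0.8, mu=1.0):
    # Approximate flow time for generally distributed inter-arrival and
    # processing times (squared CVs ca2, cs2), equality 7, batch size 1.
    rho = D / mu
    pi = erlang_c(k, rho)
    wq = (pi * (1 + cs2) * (ca2 + rho**2 * cs2)
          / (2 * k * mu * (1 - rho) * (1 + rho**2 * cs2)))
    return wq + 1 / mu

# Equality 6: two pooled machines cut waiting-time variance by more than 75%
print(wait_variance(2), wait_variance(1) / 4)

# With Ca2 = Cs2 = 1 the approximation recovers the exact M/M/k value
print(flow_time_gg(1, 1, 1))
```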
Figure 3. The effect of machine sharing on average flow time (μ = 1)

Figure 4. The effect of batch size on average flow time (μ = 1)
be shown to increase with increases in either C_a² or C_s². This is illustrated in Figures 6 and 7. Machine sharing is thus particularly valuable for systems subject to high variability, with the provision of sharing almost eliminating the negative impact of this variability. Note also that in the absence of variability, that is, when C_a² = C_s² = 0, machine pooling has no effect on performance, as the dedicated and shared systems become equivalent. In other words, the value of machine sharing is contingent upon the existence of some degree of variability in the system.
Figure 5. Waiting time variance versus machine sharing (μ = 1)
5. THE EFFECT OF SETUPS

In this section, we investigate the effect of setups on the validity of our previous results and on the overall desirability of machine sharing. As mentioned earlier, we associate either a major or a minor setup with the processing of each batch. A major setup occurs when a machine switches between two different part types. On the other hand, only a minor setup is required when succeeding batches are of the same type. The average batch processing time for parts of type i is thus given by

t_{k,B}(i) = Pr(previous batch is of type i)·τ_minor + Pr(previous batch is not of type i)·τ_major + B/μ.
Assuming steady-state operation, random and independent part arrivals according to a renewal process, and first-come, first-served batch ordering, the probabilities that the previous batch "is" and "is not" of the same type in a machine group of size k are given respectively by

Pr(previous batch is of type i) = D_i / Σ_{i=1}^{k} D_i

and
Figure 6. The effect of processing variability and machine sharing on flow time (μ = 1, C_a² = 1, ρ = 0.8)

Figure 7. The effect of demand variability and machine sharing on flow time (μ = 1, C_s² = 1, ρ = 0.8)
Pr(previous batch is not of type i) = 1 − D_i / Σ_{i=1}^{k} D_i,

where D_i is the average demand per period for part type i. The overall average batch processing time can then be obtained as

t_{k,B} = Σ_{i=1}^{k} (D_i / Σ_{i=1}^{k} D_i)[(D_i / Σ_{i=1}^{k} D_i)τ_minor + (1 − D_i / Σ_{i=1}^{k} D_i)τ_major] + B/μ,

which can also be rewritten as

t_{k,B} = Σ_{i=1}^{k} (D_i / Σ_{i=1}^{k} D_i)²τ_minor + (1 − Σ_{i=1}^{k} (D_i / Σ_{i=1}^{k} D_i)²)τ_major + B/μ.   (8)
We can see that the average batch processing time is composed of two components: (1) a setup time component, determined by the degree of machine sharing and the part mix composition, and (2) an operation time component, a function of operation time and batch size. The effect of these two components can be seen more clearly by considering the case where D_i = D for all i = 1, 2, ..., k. Equality 8 then simplifies to

t_{k,B} = τ_major((k − 1)/k) + τ_minor/k + B/μ.   (9)
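Equalities 8 and 9 can be sketched as follows (an illustration of ours; the demand vectors and setup values are arbitrary):

```python
def batch_time(demands, B, t_major, t_minor, mu=1.0):
    # Average batch processing time t_{k,B}, equality 8. `demands` holds the
    # average demand D_i of each part type assigned to the group.
    total = sum(demands)
    p_same = sum((d / total)**2 for d in demands)  # prob. previous batch is same type
    return p_same * t_minor + (1 - p_same) * t_major + B / mu

# Uniform mix over k types reduces to equality 9: t_major*(k-1)/k + t_minor/k + B/mu
k, B = 4, 2
print(batch_time([1.0] * k, B, 3.0, 0.5))
print(3.0 * (k - 1) / k + 0.5 / k + B / 1.0)

# A skewed mix lowers the chance of a major setup, and hence the batch time
print(batch_time([10.0, 0.1, 0.1, 0.1], B, 3.0, 0.5))
```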
From equality 9, we can clearly see that batch processing time increases with increases in either machine sharing or batch size. The increase due to machine sharing can be explained by the increase in the frequency of major setups, as k − 1 out of every k setups are of the major type. Note that the proportion of major setups, (k − 1)/k, grows rapidly with k, so that the major setup term becomes dominant relatively quickly. In the more general case described by equality 8, setup time will also be determined by the part mix composition (i.e., the demand distribution among part types). Batch processing time is maximum when D_1 = D_2 = ... = D_k = D, and is minimum when there exists a part type i such that D_i > 0 and D_j = 0 for all j ≠ i. More generally, average batch processing time increases as the difference in the production ratios of the different part types decreases. This is a result of the increase in the likelihood of a major setup when the different part types are produced in equal proportions. This likelihood is reduced when only a few part types dominate the part mix. Thus, batch processing time is not only affected by the number of shared machines but also by the relative variety in the associated part mix. Average batch processing time determines the maximum feasible throughput (production rate) and, consequently, system capacity. Assuming a uniformly distributed part mix (maximum part variety), the maximum throughput per machine, TH_max, can be calculated as

TH_max = B/(τ_major − (τ_major − τ_minor)/k + B/μ).   (11)
This maximum production rate decreases with machine sharing while it increases with batch size. In limit cases, we have

lim_{k→∞}(TH_max) = B/(τ_major + B/μ)   (12)

and

lim_{B→∞}(TH_max) = μ.   (13)
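Equality 11 and its limits (equalities 12 and 13) can be checked numerically; the sketch below is ours, with τ_major = 5, τ_minor = 0 and μ = 1 chosen to match Figure 8.

```python
def th_max(k, B, t_major, t_minor, mu=1.0):
    # Maximum throughput per machine under a uniform part mix, equality 11.
    return B / (t_major - (t_major - t_minor) / k + B / mu)

# Throughput falls as more machines are shared, and rises with batch size
print(th_max(2, 1, 5.0, 0.0), th_max(10, 1, 5.0, 0.0))

# Equality 12: k -> infinity gives B / (t_major + B/mu)
print(th_max(10**9, 1, 5.0, 0.0), 1 / (5.0 + 1.0))

# Equality 13: B -> infinity gives mu, as setups are amortized away
print(th_max(2, 10**9, 5.0, 0.0))
```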
Plots of TH_max are given in Figures 8 and 9 for various values of k, B and τ_major. With increased batch size, throughput becomes less affected by machine sharing, as setup time is gradually eliminated. Thus, in environments where maintaining high production rates is important (e.g., make-to-stock environments), larger batch sizes should be used. In environments where lead times are more important (e.g., make-to-order environments), large batch sizes could be dangerous, since they tend to increase batch processing times. This is, however, somewhat ironic, since in a make-to-stock environment the number of different part types is typically limited and setups are not significant, while in a make-to-order environment the number of part types could be high and setups could be important. In addition to determining system capacity, batch processing time determines system utilization. Assuming a uniformly distributed product mix, the average utilization per machine, ρ_{k,B}, is given by

ρ_{k,B} = (D/B)(τ_major((k − 1)/k) + τ_minor/k + B/μ),   (14)
which can also be rewritten as

ρ_{k,B} = ρ_setup + ρ_operation,   (15)

where

ρ_setup = (D/B)(τ_major((k − 1)/k) + τ_minor/k)   (16)

is the proportion of time a machine is being set up (in reality, this is idle time), while

ρ_operation = D/μ

is the proportion of time a machine is actually busy performing operations. This is also the machine utilization in the absence of setup times (see section 4). In order to ensure system stability, we need to have ρ_{k,B} < 1. This means that for a fixed level of machine sharing, a lower bound on the required batch size is given by
B_min = max(1, D(τ_major((k − 1)/k) + τ_minor/k)/(1 − D/μ)),   (17)
which can also be rewritten as

B_min = max(1, DT/(1 − ρ_operation)),   (18)
where

T = τ_major((k − 1)/k) + τ_minor/k
and represents the total setup time. The minimum required batch size increases linearly as a function of setup time and inversely as a function of actual machine utilization. In the absence of setups, the need for batching is eliminated and B_min becomes 1. Since total setup time is in part determined by the degree of machine sharing, the value of B_min will depend on the value of k. In fact, B_min is a steep convex function of k with the following limit:

lim_{k→∞}(B_min) = Dτ_major/(1 − ρ_operation).   (19)
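Equalities 14 through 18 tie together as follows; the Python sketch below (ours, with arbitrary parameter values) verifies that ρ_{k,B} reaches exactly 1 at B = B_min.

```python
def utilization(k, B, D, t_major, t_minor, mu=1.0):
    # rho_{k,B} = rho_setup + rho_operation, equalities 14-16 (uniform mix).
    rho_setup = (D / B) * (t_major * (k - 1) / k + t_minor / k)
    return rho_setup + D / mu

def b_min(k, D, t_major, t_minor, mu=1.0):
    # Smallest batch size keeping rho_{k,B} < 1, equalities 17-18.
    T = t_major * (k - 1) / k + t_minor / k  # total setup time per batch
    return max(1.0, D * T / (1 - D / mu))

k, D, tmaj, tmin = 4, 0.8, 3.0, 0.5
B0 = b_min(k, D, tmaj, tmin)
print(B0, utilization(k, B0, D, tmaj, tmin))   # utilization is exactly 1 at B_min
print(utilization(k, 2 * B0, D, tmaj, tmin))   # larger batches restore stability
```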
Similarly, for a fixed batch size, we can calculate the maximum feasible number of shared machines, k_max. The value of k_max is given by
Figure 8. The effect of machine sharing and batch sizes on system capacity (μ = 1, τ_major = 5, τ_minor = 0)

Figure 9. The effect of machine sharing and setup times on system capacity (μ = 1, B = 1, τ_minor = 0)
k_max = (τ_major − τ_minor)/(τ_major − B(1/D − 1/μ)),   (20)
where τ_major − τ_minor > 0 and τ_major − B(1/D − 1/μ) > 0. The value of k_max decreases as a function of τ_major, with the limit†

lim_{τ_major→∞}(k_max) = 1.

The value of k_max also decreases with the operation machine utilization, ρ_operation, as higher utilization reduces the available capacity for setups. On the other hand, the number of allowable shared machines tends to increase as batch size increases, a result due to the reduction in the frequency of setups. In fact, for B > τ_major/(1/D − 1/μ), the value of k_max becomes unbounded (k_max = ∞). The value of k_max also increases as τ_major approaches B(1/D − 1/μ), with k_max being unbounded for τ_major ≤ B(1/D − 1/μ). In summary, increased machine sharing can have a negative impact on several system performance measures. In particular, machine sharing increases batch processing times by increasing the frequency of major setups. This, in turn, limits the available capacity for actual operation and increases the proportion of machine idle time due to setups. Consequently, the maximum feasible production rate is also reduced. This also means an increase in the minimum feasible batch size. However, as we saw in the previous section, machine sharing can have a positive impact on a number of flow-related performance measures such as part flow time and flow time variance. In the remainder of this section, we examine the degree to which these benefits are undermined by the presence of setups and the extent to which the negative effect of setups can be mitigated by increased batch sizes. For ease of exposition, let us assume that batch inter-arrival times and processing times are exponentially distributed. The average part flow time in a machine group of k machines with batches of size B can then be obtained, as previously, as that of a multi-server queuing system and is given by [3]

W(k, B) = π_{k,B}/[k(kμ/(μ(τ_major(k − 1) + τ_minor) + kB) − D/B)] + τ_major((k − 1)/k) + τ_minor/k + B/μ.   (21)
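Equality 21 can be evaluated directly; the sketch below (our own, with arbitrary values D = 0.8, τ_major = 2, τ_minor = 0) illustrates how a setup that is harmless for one dedicated machine can make sharing a second machine counterproductive.

```python
import math

def erlang_c(k, rho):
    # Erlang-C probability of queueing, as in equality 2.
    a = k * rho
    term = a**k / (math.factorial(k) * (1 - rho))
    return term / (sum(a**j / math.factorial(j) for j in range(k)) + term)

def flow_time_setup(k, B, D, t_major, t_minor, mu=1.0):
    # Average part flow time W(k, B) with setups, equality 21 (M/M/k).
    t = t_major * (k - 1) / k + t_minor / k + B / mu  # batch processing time
    rho = (D / B) * t                                  # utilization, equality 14
    if rho >= 1:
        return float('inf')                            # unstable: no finite flow time
    return erlang_c(k, rho) / (k * (1 / t - D / B)) + t

print(flow_time_setup(1, 5, 0.8, 2.0, 0.0))  # one dedicated machine
print(flow_time_setup(2, 5, 0.8, 2.0, 0.0))  # two shared machines: worse here
```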
It can be seen that the presence of setups tends to shift the effects of sharing and batching in opposite directions. Larger setup times diminish the desirability of machine sharing while favoring larger batch sizes. For example, consider the case where k is set to 1 and B is allowed to vary. The expression for average flow time can then be rewritten as

W(1, B) = (τ_minor + B/μ)/(1 − D/μ − Dτ_minor/B).   (22)

As B is initially increased from its lower bound B_min (assuming B_min > 1), average flow time is dramatically reduced. This reduction eventually levels off, and flow time starts to increase again with batch size. This behavior is graphically depicted in Figure 10. The initial decrease in flow time is due to the reduction in the frequency of setups. This is, however, gradually offset by the increase in processing times as batches become larger. Note that as B increases, average flow time becomes almost linear in B. Noting that W(1, B) is a convex function of B, the batch size that minimizes flow time can be obtained by simple differentiation of equality 22 and is given by [29]
† We should note that, for a fixed batch size and machine sharing level, there is a limit on the maximum feasible major setup time. The value of this maximum setup time can be directly obtained from the stability condition.
B* = Dτ_minor(1 + √(μ/D))/(1 − D/μ).   (23)
This result illustrates the fact that some degree of batching can be desirable even in the absence of machine sharing, with the optimal batch size being an increasing function of the setup time, τ_minor, and the machine loading D. Further discussion of the effect of batch sizes in the single machine case can be found in [29], [30] and [31].
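Equalities 22 and 23 can be checked against each other numerically; in the sketch below (ours, with τ_minor = 0.75, D = 0.8 and μ = 1 as in Figure 10), the batch size B* beats its neighbors.

```python
import math

def flow_time_single(B, D, t_minor, mu=1.0):
    # W(1, B) for a single machine, equality 22.
    return (t_minor + B / mu) / (1 - D / mu - D * t_minor / B)

def b_star(D, t_minor, mu=1.0):
    # Flow-time-minimizing batch size, equality 23.
    return D * t_minor * (1 + math.sqrt(mu / D)) / (1 - D / mu)

D, tmin = 0.8, 0.75
B = b_star(D, tmin)
print(B, flow_time_single(B, D, tmin))
print(flow_time_single(B - 0.5, D, tmin), flow_time_single(B + 0.5, D, tmin))
```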
Figure 10. The effect of batch size and setup time on average flow time (μ = 1, D = 0.8)

Now let us consider the case where B = 1 but where k is variable. The expression for flow time becomes

W(k, 1) = π_{k,1}/[k(1/(τ_major((k − 1)/k) + τ_minor/k + 1/μ) − D)] + τ_major((k − 1)/k) + τ_minor/k + 1/μ.   (24)
"Cmajor(~-~) q- "Cmikn~ -l- l-~P
It is useful to distinguish here between two scenarios. The first is where "Emajor - ( 1 / D - l/p) < 0 and the second is where Zmajo r - ( 1 / D - l/p) > 0. For the first scenario, we have shown earlier that k,nax is unbounded. In fact, in the limit case we have limk ~ oo(Wk, 1) = "Cmajor + 1__ P
and thus the optimal sharing level, k*, is also unbounded. On the other hand for
(25) "Cmajor - ( 1 / D
1/ p ) > O, kma x is finite and given by expression 20. As k approaches kmax, flow time grows without bound. The expression of flow can be shown to be monotonically increasing in k and therefore k* = 1. The behavior of flow time for both scenarios is depicted in Figures 11 and 12. Note that for the first scenario, flow time initially grows as k increases. However because of the diminishing increase in setup time, flow time eventually starts to decrease again with k. This decrease is itself of the diminishing kind with much of the reduction
Figure 11. The effect of machine sharing and setup time on average flow time (τ_major ≤ 1/D − 1/μ, τ_minor = 0, μ = 1, D = 0.8)
Figure 12. The effect of machine sharing and setup time on average flow time (τ_major > 1/D − 1/μ, τ_minor = 0, μ = 1, D = 0.8)
occurring with a relatively limited number of machines. In the second scenario, the effect of machine sharing is never sufficient to counteract the corresponding increase in setup times. Since k_max has a finite value, increases in k will eventually lead to serious deterioration in performance. This is particularly evident when τ_major is high. These results mean that for cells producing parts in batches of size one, machine sharing does not necessarily improve flow performance. In fact, when the major setup time is large, any degree of machine sharing may deteriorate performance. For smaller setup times, machine sharing can only be beneficial when the machine group is sufficiently large. Thus, in the absence of batching, machine sharing can be impractical for systems with significant setups or with a small number of cells. Machine sharing can be made more desirable and practical by allowing for larger batch sizes. The general expression of flow time for arbitrary k and B is given in equality 21. By increasing batch sizes, flow time may initially be reduced and the feasible range for machine sharing, k_max, extended. However, increases in B will eventually lead to increases in flow time. This can be seen by considering the following lower bound on flow time:
W_{min} = \frac{\pi_{min}\left(\tau_{major}\,\frac{k-1}{k} + \tau_{minor} + \frac{B}{\mu}\right)}{k\left(1 - \frac{D}{\mu}\right)} + \tau_{major}\,\frac{k-1}{k} + \tau_{minor} + \frac{B}{\mu},   (26)

where π_min is the value of π_{k,B} when τ_major = τ_minor = 0. Rewriting equality 26 as

W_{min} = \frac{\pi_{min}}{k\,(1 - D/\mu)}\left(\tau_{major}\,\frac{k-1}{k} + \tau_{minor}\right) + \tau_{major}\,\frac{k-1}{k} + \tau_{minor} + \left(\frac{\pi_{min}}{k\,(1 - D/\mu)} + 1\right)\frac{B}{\mu}   (27)
and noting that π_min is independent of B, it is easy to verify that W_min is linearly increasing in B. The value of W_min becomes a good approximation of flow time as B gets larger [3]. Although no closed form expression exists for the optimal solution, numerical methods can be applied to equality 21 to solve simultaneously and optimally for k* and B*. Because of the diminishing effect of k and B in reducing, respectively, waiting time and setup time, optimal values for k and B will, in general, be relatively small [3].

6. THE EFFECT OF SETUP BASED SCHEDULING

In the previous two sections, we have assumed that batches are processed on a first come first served (FCFS) basis. Under this scheduling policy, a machine is set up for a different batch type even though batches of the same type as the one just completed may be in the queue. A FCFS policy obviously does not take advantage of the setup time savings that may result from giving priority to parts with the smaller setup time requirement. In this section, we investigate the impact of such a policy on system performance. In particular, we examine whether such a policy can be used to mitigate the increase in setups that generally results from increased machine sharing. In order to isolate the impact of minimizing setups in batch sequencing, we study the case of a single machine that processes k different part types. A setup of length τ, τ > 0, is incurred only when the machine switches between batches of different types. We consider a sequencing rule where, once the machine is set up for a particular part type, it continues processing batches of that type until all such batches are exhausted. It is subsequently set up for the next part type. The machine is assumed to switch from one part type to the next in a cyclical order. This sequencing rule is known in the queuing literature as a cyclic and exhaustive alternating priority policy and is usually studied in the context of polling systems [2]. In the remainder of this section, we refer to this policy as a setup minimization (SM) policy. Similar policies have recently been proposed for the scheduling of manufacturing cells by, among others, Vakharia and Wemmerlöv [53] and Kekre [32]. However, these policies have generally been evaluated using simulation. Assuming exponential batch arrivals with average arrival rate λ, λ = D/B, and generally distributed part processing times with mean 1/μ and variance σ², the expression of average part flow time for the SM scheduling policy is given by [4]:
W(SM) = \frac{B}{\mu} + \frac{\lambda\,(\bar{s}^2 + \sigma_s^2) + \tau\,(k - D/\mu)}{2\,(1 - D/\mu)},   (28)

where \bar{s} and \sigma_s^2 denote the mean and variance of the batch service time. Note that in order to assess the effect of product mix variety k, we assumed that the demand is equally distributed among the different part types, so that D_i = D/k for all i = 1, 2, ..., k. Expression (28) can also be rewritten as

W(SM) = \frac{\alpha B + \gamma}{\beta},   (29)

where α = k(2μ − D), β = 2μk(μ − D), and γ = k²[τμ(μ − D/k) + Dμ²σ²/k]. Since α, β, and γ are all positive parameters independent of B, average part flow time is a linearly increasing function of B, and thus B*(SM) = 1. In other words, under the SM policy it is always optimal to produce parts in batches of size one. The stability condition for this policy is simply λB/μ < 1, or equivalently D/μ < 1. Consequently, B_min(SM) = 1, and the maximum feasible setup time, τ_max, is unbounded. These results contrast with the FCFS policy, where B*(FCFS) and B_min(FCFS) are generally greater than one and there is a limit on the maximum feasible setup. The maximum feasible throughput rate for the SM policy is given by TH_max(SM) = μ, which is always greater (for τ > 0 and k > 1) than that of the FCFS policy. In fact, in this case the maximum throughput rate is independent of τ and is unaffected by increases in part variety. These results have important implications for system operation and management. In particular, they run counter to long held beliefs regarding the inevitability of batching in the presence of setups. In fact, these results show not only that batching is unnecessary when a setup avoiding policy is in place, but that it is not even optimal. Furthermore, the SM policy improves system capacity by maintaining a maximum feasible production rate that is unaffected by increases in setup times. In practice, this capability is important for systems where sustaining high production volumes is desirable and/or where parts with highly different setup requirements are simultaneously produced. Note that with a FCFS policy, system capacity quickly deteriorates with increases in setup time. The impact of the SM policy on B*, B_min and τ_max is graphically depicted in Figures 13 and 14. Since an increase in part variety, k, increases the frequency of setups for the FCFS policy, there is a limit on the amount of allowable product mix variety.
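The linearity claim is easy to verify numerically. The sketch below codes equality 29 directly; the parameter values (μ = 1, D = 0.8, τ = 1.3, σ² = 1) are illustrative assumptions, not values prescribed by the text.

```python
def W_SM(B, k, mu=1.0, D=0.8, tau=1.3, sigma2=1.0):
    """Equality 29: average part flow time under the SM policy."""
    alpha = k * (2 * mu - D)
    beta = 2 * mu * k * (mu - D)
    gamma = k**2 * (tau * mu * (mu - D / k) + D * mu**2 * sigma2 / k)
    return (alpha * B + gamma) / beta

# Flow time is linear in B with a positive slope alpha/beta, so B*(SM) = 1,
# and the stability condition D/mu < 1 involves neither tau nor k.
print([round(W_SM(B, k=3), 2) for B in (1, 2, 3, 4)])   # constant increments
```

Increasing τ or k shifts the intercept γ/β upward but leaves the slope, and hence the optimality of B = 1, unchanged.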
This is the case even if the overall production demand, D, is kept constant. The value of the maximum feasible product variety, k_max(FCFS), is given by the following expression:

k_{max}(FCFS) = \frac{D\tau}{D\tau - B\,(1 - \rho)}.   (30)

In contrast, there is no limit on the degree of part variety when the SM policy is used [4]. The capability to accommodate large variety in the product mix is important for manufacturing systems with a highly diversified product portfolio or those that compete based on
Figure 13. The effect of setup time on average flow time (k = 2, μ = 1, D = 0.6, B = 11)
Figure 14. The effect of batch size on average flow time (k = 2, μ = 1, D = 0.8, τ = 1.3)
customized products, among others. The impact of increased part variety on flow time is illustrated in Figure 15. These somewhat counterintuitive results can, in part, be explained by the fact that as either τ or k increases, the machine spends more time processing batches after each setup, making the setups increasingly less frequent.
Figure 15. The effect of part variety on average flow time (B = 10, μ = 1, D = 0.8, τ = 2.63)

In view of these results, it is easy to find instances where the SM policy performs significantly better than the FCFS policy. For example, this will be the case when B is sufficiently small, setup time is long, or product variety is high. In fact, when either B is below B_min(FCFS), τ is greater than τ_max(FCFS), or k is larger than k_max(FCFS), the FCFS policy leads to an infinitely large flow time (see Figures 13, 14 and 15). More generally, we have the following result:

Proposition 1 [4]: W(SM) ≤ W(FCFS) if and only if (1) ρ ≥ ρ0 or (2) B ≤ B0, where
ρ0 = [(k² − k + 2) − √((k − 1)(k³ − k² + 4))]/2, and
B0 = D(k − 1)[τ(2(1 − ρ) + k(k − 1)) + kDσ²]/[ρ² − ρ(k² − k + 2) + (k² − k + 2)].

The value of ρ0 is an increasing function of k, so that the range of utilizations over which the SM policy is more desirable widens as k decreases. The value of B0 is similarly an increasing function of k, which in this case means that as k increases, the range of batch sizes over which the SM policy is more desirable widens. The value of B0 is also an increasing function of setup time and of processing time variance, so that with increases in either τ or σ² the range of batch sizes that makes the SM policy superior widens. We should note that condition (2) could equivalently have been expressed in terms of either a critical setup time parameter or a critical part variety parameter.
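The critical utilization of Proposition 1 is straightforward to evaluate. The sketch below computes ρ0(k) as transcribed above and confirms numerically that it increases with k.

```python
import math

def rho0(k):
    """Critical utilization rho_0 of Proposition 1 (SM preferable when rho >= rho_0)."""
    c = k * k - k + 2
    return (c - math.sqrt((k - 1) * (k**3 - k * k + 4))) / 2

values = [rho0(k) for k in range(2, 11)]
print([round(v, 3) for v in values])   # 0.586 at k = 2, rising toward 1
```

At k = 2 the SM policy already dominates for utilizations above roughly 0.59, regardless of batch size.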
Table 1
Flow time comparisons between the FCFS and the SM policies

            k = 2                 k = 4                 k = 6
  B    W(FCFS)   W(SM)      W(FCFS)   W(SM)      W(FCFS)   W(SM)
  1       ∞        8.9         ∞       15.4         ∞       21.9
  2       ∞       11.9         ∞       18.4         ∞       24.9
  3     87.4      14.9         ∞       21.4         ∞       27.9
  4     41.9      17.9       586.3     24.4         ∞       30.9
  5     36.8      20.9        80.6     27.4       132.8     33.9
  6     36.4      23.9        59.3     30.4        74.8     36.9
  7     37.6      26.9        53.7     33.4        62.5     39.9
  8       -         -         52.3     36.4        59.0     42.9
  9       -         -         52.7     39.4        57.6     45.9
 10     44.1      35.9        53.9     42.4        58.1     48.9
 15     57.6      50.9        64.7     57.4        67.4     63.9
 20     72.0      65.9        78.2     72.4        80.4     78.9
 30    101.4      95.9       106.8    102.4       108.7    108.9
 50    161.0     155.9       165.9    162.4       167.6    168.9

("∞": B is below B_min(FCFS); "-": value not reported for k = 2)
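The entries of Table 1 can be reproduced from the flow time expressions of this chapter. In the sketch below, the parameters μ = 1, D = 0.8, τ = 1.3 and exponential processing times (σ² = 1) are inferred from the figures of this section, and the FCFS machine is modeled as an M/G/1 queue in which the setup τ is incurred with probability (k − 1)/k per batch; treat it as a consistency check rather than the authors' code.

```python
import math

MU, D, TAU, SIGMA2 = 1.0, 0.8, 1.3, 1.0   # assumed parameters (see note above)

def W_FCFS(B, k):
    """M/G/1 flow time; setup tau incurred with probability (k - 1)/k per batch."""
    lam = D / B                                  # batch arrival rate
    p = (k - 1) / k                              # probability a setup is needed
    s = p * TAU + B / MU                         # mean batch service time
    s2 = p * TAU**2 + 2 * p * TAU * B / MU + (B / MU)**2 + B * SIGMA2  # E[S^2]
    rho = lam * s
    if rho >= 1:
        return math.inf                          # B is below B_min(FCFS)
    return lam * s2 / (2 * (1 - rho)) + s

def W_SM(B, k):
    """Equality 29."""
    alpha = k * (2 * MU - D)
    beta = 2 * MU * k * (MU - D)
    gamma = k**2 * (TAU * MU * (MU - D / k) + D * MU**2 * SIGMA2 / k)
    return (alpha * B + gamma) / beta

print(round(W_FCFS(5, 2), 1), round(W_SM(1, 2), 1))   # 36.8 and 8.9, as in Table 1
print(W_FCFS(2, 2))                                   # inf: B = 2 infeasible for k = 2
```

Under these assumptions the computed values match the tabulated ones to the reported precision.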
Numerical comparisons of average flow time between the SM and the FCFS policies are provided in Table 1. The SM policy can be seen to yield significant gains in performance over a wide range of operating conditions. It is particularly important to note that the difference in flow time at the optimal batch size of each policy can be substantial.

7. CONCLUSION

In this chapter we explored the effect of machine sharing on the performance of cellular systems. We presented analytical models that capture several practical dimensions of sharing, such as setup times, batch sizes, machine loading, demand and processing variability, and scheduling policies. These models were used to assess the impact of increased sharing on a number of system performance measures, including machine utilizations, cell production rates, part flow times, and flow time variances. A few of the resulting findings are summarized below:
• In the presence of setup times, machine sharing invariably reduces machine utilizations and cell production capacity while increasing batch processing times and the minimum required batch size. Machine sharing can, however, result in a better distribution of the workload among machines, so that the possibility of bottleneck and/or starved machines is minimized.
• The effect of machine sharing on dynamic performance measures, such as mean flow time and flow time variance, is highly dependent on system operating parameters such as setup times, demand levels, batch sizes, and system variability. Depending on the setting of these parameters and on the number of shared machines in a group, sharing may or may not be beneficial to system performance.
• With relatively small setup time penalties, machine sharing was demonstrated to have a dramatic impact on system performance. This is particularly the case for systems operating under high levels of loading and/or a high degree of demand and processing variability.
• For systems where setup times are significant, machine sharing can result in performance deterioration. This deterioration can be mitigated by choosing appropriate batch sizes. In fact, in order to maximize the value of machine sharing, optimal sharing levels should be determined simultaneously with the optimal batch size.
• Under most operating conditions, the effect of machine sharing was found to be of the diminishing kind, so that most of the benefits of sharing are realized with relatively small machine groups.
• In addition to its effect on average performance, machine sharing can have a beneficial effect on performance variance. This means that increased sharing can improve performance predictability and consistency.
• The need for batching can be eliminated by adopting a setup avoiding scheduling policy. Such a policy is found to preserve system capacity despite the presence of setups and allows for higher product mix variety.

REFERENCES

[1] Ang, C. L. and P. C. T. Willey, "A Comparative Study of the Performance of Pure and Hybrid Group Technology Manufacturing Systems Using Computer Simulation Techniques," International Journal of Production Research, 22, 2, 193-233, 1984.
[2] Benjaafar, S., "Performance Bounds for the Effectiveness of Pooling in Multi-Processing Systems," The European Journal of Operational Research, in press.
[3] Benjaafar, S., "Modeling and Analysis of Machine Sharing in Automated Manufacturing Systems," The European Journal of Operational Research, in press.
[4] Benjaafar, S. and M. Sheikhzadeh, "On the Effect of Setup Minimizing in Scheduling Policies," Working Paper, Department of Mechanical Engineering, University of Minnesota.
[5] Buzacott, J. A., "The Fundamental Principles of Flexibility in Manufacturing Systems," Proceedings of the First Conference on Flexible Manufacturing Systems, Brighton, U.K., 13-22, 1982.
[6] Buzacott, J. A. and D. Gupta, "Impact of Flexible Machines on Automated Manufacturing Systems," Proceedings of the Second ORSA/TIMS Conference on Flexible Manufacturing Systems: Operations Research Models and Applications, edited by K. E. Stecke and R. Suri, Elsevier Science Publishers B. V., Amsterdam, 1986.
[7] Buzacott, J. A. and D. Gupta, "Impact of Flexible Machines on Automated Manufacturing Systems," Annals of Operations Research, 15, 169-205, 1988.
[8] Buzacott, J. A. and J. G. Shanthikumar, Stochastic Modeling of Manufacturing Systems, Prentice Hall, New Jersey, 1993.
[9] Buzacott, J. A. and J. G. Shanthikumar, "Design of Manufacturing Systems Using Queueing Models," Queueing Systems, 12, 135-214, 1992.
[10] Bitran, G. R. and S. Dasu, "A Review of Open Queueing Network Models of Manufacturing Systems," Queueing Systems, 12, 95-132, 1992.
[11] Bobrowski, P. M. and V. A. Mabert, "Alternate Routing Strategies in Batch Manufacturing: An Evaluation," Decision Sciences, 19, 713-733, 1988.
[12] Burgess, A. G., Morgan, I. and T. E. Vollman, "Cellular Manufacturing: Its Impact on the Total Factory," International Journal of Production Research, 31, 9, 2059-2077, 1993.
[13] Calabrese, J. M., "Optimal Workload Allocation in Open Networks of Multiserver Queues," Management Science, 38, 12, 1792-1802, 1992.
[14] Dahel, N. E. and S. B. Smith, "Designing Flexibility into Cellular Manufacturing Systems," International Journal of Production Research, 31, 4, 933-945, 1993.
[15] Drolet, J. R., "Scheduling Virtual Cellular Manufacturing Systems," Ph.D.
Thesis, School of Industrial Engineering, Purdue University, West Lafayette, IN, 1989.
[16] Dixon, J. R., "Measuring Manufacturing Flexibility: An Empirical Investigation," European Journal of Operational Research, 60, 131-143, 1992.
[17] Dallery, Y. and K. Stecke, "On the Optimal Allocation of Servers and Workloads in Closed Queuing Networks," Operations Research, 38, 4, 694-703, 1990.
[18] Das, S. K. and P. Nagendra, "Investigations into the Impact of Flexibility on Manufacturing Performance," International Journal of Production Research, 31, 10, 2337-2354, 1993.
[19] Flynn, B. B. and F. R. Jacobs, "A Simulation Comparison of Group Technology with Traditional Job Shop Manufacturing," International Journal of Production Research, 24, 5, 1171-1192, 1986.
[20] Flynn, B. B. and F. R. Jacobs, "An Experimental Comparison of Cellular (Group Technology) Layout with Process Layout," Decision Sciences, 18, 562-581, 1987.
[21] Garza, O. and T. L. Smunt, "Countering the Negative Impact of Intercell Flow in Cellular Manufacturing," Journal of Operations Management, 10, 1, 92-118, 1991.
[22] Greene, T. J. and R. P. Sadowski, "A Review of Cellular Manufacturing Assumptions, Advantages and Design Techniques," Journal of Operations Management, 4, 85-97, 1984.
[23] Gupta, D. and J. A. Buzacott, "A Framework for Understanding Flexibility of Manufacturing Systems," Journal of Manufacturing Systems, 8, 89-97, 1989.
[24] Gupta, Y. P. and S. Goyal, "Flexibility of Manufacturing Systems: Concepts and Measurement," European Journal of Operational Research, 43, 119-135, 1989.
[25] Gupta, Y. P. and T. M. Somers, "The Measurement of Manufacturing Flexibility," European Journal of Operational Research, 60, 166-182, 1992.
[26] Hall, D. N. and K. Stecke, "Design Problems of Flexible Assembly Systems," Proceedings of the Second ORSA/TIMS Conference on Flexible Manufacturing Systems: Operations Research Models and Applications, edited by K. E. Stecke and R. Suri, Elsevier Science Publishers B. V., Amsterdam, 145-156, 1986.
[27] Hancock, T. M., "Effects of Alternate Routings Under Variable Lot Size Conditions," International Journal of Production Research, 27, 2, 247-259, 1989.
[28] Irani, S. A., Cavalier, T. M. and P. H. Cohen, "Virtual Manufacturing Cells: Exploiting Layout Design and Intercell Flows for the Machine Sharing Problems," International Journal of Production Research, 31, 4, 791-810, 1993.
[29] Karmarkar, U. S., "Lot Sizes, Lead Times and In-Process Inventories," Management Science, 33, 3, 409-418, 1987.
[30] Karmarkar, U. S., S. Kekre, S. Kekre and S. Freeman, "Lot Sizing and Lead-time Performance in a Manufacturing Cell," Interfaces, 15, 2, 1-9, 1985.
[31] Karmarkar, U. S. and S. Kekre, "Lotsizing in Multi-Item Multi-Machine Job Shops," IIE Transactions, 17, 3, 290-298, 1985.
[32] Kekre, S., "Performance of a Manufacturing Cell with Increased Product Mix," IIE Transactions, 19, 2, 329-339, 1987.
[32] Kumar, V., "Entropic Measures of Manufacturing Flexibility," International Journal of Production Research, 25, 7, 957-966, 1987.
[33] Lee, L. C., "A Study of System Characteristics in a Manufacturing Cell," International Journal of Production Research, 23, 6, 1101-1114, 1985.
[34] Lin, Y. and J. J. Solberg, "Effectiveness of Flexible Routing Control," The International Journal of Flexible Manufacturing Systems, 3, 189-211.
[35] Mandelbaum, M. and P. H. Brill, "Examples of Measurement of Flexibility and Adaptivity in Manufacturing Systems," Journal of the Operational Research Society, 40, 6, 603-609, 1989.
[36] McLean, C. R., Bloom, H. M. and T. H. Hopp, "The Virtual Manufacturing Cell," Proceedings of the Fourth IFAC/IFIP Conference on Information Control Problems in Manufacturing Technology, Gaithersburg, MD, 1982.
[37] Morito, S., Takano, T., Mizukawa, H. and K. Mizoguchi, "Design and Analysis of a Flexible Manufacturing System with Simulation: Effects of Flexibility on FMS Performance," Proceedings of the 1991 Winter Simulation Conference, 294-301, Phoenix, Arizona, 1991.
[38] Morris, J. S. and R. J. Tersine, "A Simulation Analysis of Factors Influencing the Attractiveness of Group Technology Cellular Layouts," Management Science, 36, 12, 1567-1578, 1990.
[39] Nandkeolyar, U. and D. P. Christy, "An Investigation of the Effect of Machine Flexibility and Number of Part Families on System Performance," International Journal of Production Research, 30, 3, 513-526, 1991.
[40] Newman, E. W., Boe, W. J. and D. R. Denzler, "Examining the Use of Dedicated and General Purpose Pallets in a Dedicated Flexible Manufacturing System," International Journal of Production Research, 29, 10, 2117-2133, 1991.
[41] Sarper, H. and T. J. Greene, "Comparison of Equivalent Pure Cellular and Functional Production Environments Using Simulation," International Journal of Computer Integrated Manufacturing, 6, 4, 221-236, 1993.
[41] Sethi, A. K. and S. P. Sethi, "Flexibility in Manufacturing: A Survey," The International Journal of Flexible Manufacturing Systems, 2, 289-328, 1990.
[42] Shafer, S. M. and J. M. Charnes, "Cellular Versus Functional Layout Under a Variety of Shop Operating Conditions," Decision Sciences, 24, 3, 665-681, 1993.
[43] Shanthikumar, J. G. and D. D. Yao, "On Server Allocation in Multiple Center Manufacturing Systems," Operations Research, 36, 2, 333-342, 1988.
[44] Simpson, J. A., Hocken, R. J. and J. S. Albus, "The Automated Manufacturing Research Facility of the National Bureau of Standards," Journal of Manufacturing Systems, 1, 1, 17-32, 1982.
[45] Slack, N., "The Flexibility of Manufacturing Systems," International Journal of Operations and Production Management, 2, 289-328, 1990. [46] Smith, D. R. and W. Whitt, "Resource Sharing for Efficiency in Traffic Systems," Bell System Technical Journal, 60, 1, 39-55, 1981.
[47] Stecke, K. E. and I. Kim, "Performance Evaluation for Systems of Pooled Machines of Unequal Sizes: Unbalancing Versus Balancing," European Journal of Operational Research, 42, 22-38, 1989.
[48] Stecke, K. E. and J. J. Solberg, "The Optimality of Unbalancing Both Workloads and Machine Group Sizes in Closed Queueing Networks of Multiserver Queues," Operations Research, 33, 4, 882-910, 1985.
[49] Stecke, K. and J. Solberg, "Loading and Control Policies for a Flexible Manufacturing System," International Journal of Production Research, 19, 5, 1981.
[50] Suresh, N. C., "Partitioning Work Centers for Group Technology: Analytical Extension and Shop-Level Simulation Investigation," Decision Sciences, 23, 267-290, 1992.
[51] Suresh, N. C., "Partitioning Work Centers for Group Technology: Insights from an Analytical Model," Decision Sciences, 22, 772-791, 1991.
[52] Suri, R. and G. Whitney, "Decision Support Requirements in Flexible Manufacturing," Journal of Manufacturing Systems, 3, 1, 61-69, 1984.
[53] Wemmerlöv, U. and A. J. Vakharia, "On the Impact of Family Scheduling Procedures," IIE Transactions, 25, 4, 102-104, 1993.
[54] Wemmerlöv, U. and N. L. Hyer, "Cellular Manufacturing in the U.S. Industry: A Survey of Users," International Journal of Production Research, 27, 9, 1511-1530, 1989.
[55] Wilhelm, W. E. and H. M. Shin, "Effectiveness of Alternate Operations in a Flexible Manufacturing System," International Journal of Production Research, 23, 1, 65-79, 1985.
[56] Yao, D. D., "Material and Information Flows in Flexible Manufacturing Systems," Material Flow, 2, 143-149, 1985.
[57] Yao, D. D. and F. F. Pei, "Flexible Parts Routing in Manufacturing Systems," IIE Transactions, 22, 1, 48-55, 1990.
[58] Zipkin, P. H., "Models for Design and Control of Stochastic, Multi-Item Batch Production Systems," Columbia Business School Research Working Paper No. 469A, Columbia University, New York, 1983.
Planning, Design, and Analysis of Cellular Manufacturing Systems
A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors)
© 1995 Elsevier Science B.V. All rights reserved.
Integration of flow analysis results with a cross clustering method

M. Barth and R. De Guio
Laboratoire de Recherche en Productique de Strasbourg
Ecole Nationale Supérieure des Arts et Industries de Strasbourg
24 boulevard de la Victoire, F-67084 Strasbourg Cedex, France. Email: [email protected]

Keywords: performance measures, production flow analysis, cellular manufacturing, seriation, cross-clustering.
1. INTRODUCTION

Manufacturing firms apply continuous improvement strategies aimed at cutting down lead times and production costs and at improving quality. These improvements are obtained by acting on the planning and piloting system, as well as on the information system of production. The organisation into manufacturing cells is a way to improve the physical system of production. According to R.W. Hall [11], "The overall effect of such an organisation is to greatly shorten lead times of material through production and provide excellent visibility and immediate feedback among the operations in the cell". The relocation of the workcenters into manufacturing cells is a complex problem which affects every department of the firm. The points of view of those concerned with the reorganisation are many, partial and sometimes divergent [2]. In order to know the various points of view and to best integrate them into the final organisation of the workshop, it is essential to entrust the relocation to a working party. The working party usually includes representatives of the engineering office, the industrialisation office, production organisation and management, and the finance department. To be efficient, the working party should comply with a conception method. A method commonly used for scheme conception management is the value engineering method. In order to set out clearly our contribution to a global scheme management application, we recall the main steps of the value engineering method:
1) Defining objectives
2) Analysis of the present system
3) Functional analysis
4) Solution proposals
5) Solution analysis

Most scientific studies concern the fourth step: [1], [6], [8], [9], [12], [13], [14], [15], [17], [18], [19], [21], [22], [23], [25] and many others. The propositions stemming from these methods reflect points of view linked with the material flows. They have to be supplemented by other propositions reflecting the points of view of maintenance, security, working conditions, aesthetics and others. Note that step 4 of the method is a creativity stage; it is natural and desirable to obtain a great number of relocation propositions. The purpose of step 5 is to elaborate a final solution from the solutions of step 4. Our contribution initialises step 5 by summing up the consensus and disagreements between the members of the working party [3], [7]. The more fruitful step 4 has been, the more necessary the summary is. A study case allows us to illustrate our point. A workshop composed of eleven workcenters, respectively denoted M01 to M11, wishes to relocate its workcenters into cells. It has set up a working party composed of four people responsible for the methods office, production management, maintenance and working conditions. During step 4, four solutions were proposed by the working party. A first solution, called I1, consists in creating families of routings following criteria linked with the methods office. The methods officer proposes the following solution:
Cell I1,1: M01, M05, M06, M10
Cell I1,2: M02, M03, M04, M08, M11
Cell I1,3: M07, M09
This solution does not satisfy the production manager, since cell I1,3 includes only two workcenters and does not justify an autonomous management. He would rather link M07 and M09 to cell I1,2, which leads to proposition I2:
Cell I2,1: M01, M05, M06, M10
Cell I2,2: M02, M03, M04, M07, M08, M09, M11
The handling supervisor wishes to minimise the handling distances.
His point of view on the layout is:
Cell I3,1: M01, M03, M05, M06, M10
Cell I3,2: M02, M04, M07, M08, M09, M11
As for the person responsible for the working conditions, the only constraint he expresses is the isolation of workcenters M07 and M09 in sound-proof premises, which leads to solution I4:
Cell I4,1: M07, M09
The question is now to initialise step 5 by synthesising the agreements and
disagreements of the various views of the working party members. In industrial cases, dedicated software is available. Thus, the methods officer makes use of routings clustering software to identify the cells, the production manager possesses flow simulation software to evaluate lead times, and the person responsible for the working conditions may use acoustic simulation software to forecast intense noise disturbance zones. It is common at the end of step 4 to have at hand tens of propositions, involving hundreds of cells. The great number of solutions justifies the use of the method proposed in this paper. The principal contribution of this paper is a method for the search of a consensus on several flow analysis approaches. The second contribution is a way to evaluate the quality of a consensus. The method is discussed for an academic as well as an industrial example.

2. SEARCH FOR A CONSENSUS
2.1. Consensus and quasi-seriation relation

Let us consider a workshop with m workcenters. The members of the reorganisation committee have carried out several flow analyses, which resulted in N propositions of clustering of the workcenters into manufacturing cells. These propositions are described in a binary matrix A (figure 1), where:
- Mi (i=1..m) are the workcenters,
- j=1..N are the propositions of clustering,
- nj is the number of cells of proposition number j,
- Ij,k is the kth cell of proposition number j.
If Mi belongs to the kth cell of proposition number j, then a(Mi;Ij,k)=1; otherwise, a(Mi;Ij,k)=0.
Figure 1: A, the assignment matrix (rows M1, ..., Mm; columns the cells I1,1, ..., I1,n1, ..., IN,1, ..., IN,nN; generic entry a(Mi;Ij,k))
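As an illustration, the assignment matrix of figure 1 can be built directly from the four propositions of the introductory study case; the Python encoding below is a sketch, with the cell contents copied from that example.

```python
workcenters = [f"M{i:02d}" for i in range(1, 12)]          # M01 .. M11
propositions = {                                           # cells Ij,k of the study case
    "I1": [["M01", "M05", "M06", "M10"],
           ["M02", "M03", "M04", "M08", "M11"],
           ["M07", "M09"]],
    "I2": [["M01", "M05", "M06", "M10"],
           ["M02", "M03", "M04", "M07", "M08", "M09", "M11"]],
    "I3": [["M01", "M03", "M05", "M06", "M10"],
           ["M02", "M04", "M07", "M08", "M09", "M11"]],
    "I4": [["M07", "M09"]],
}
cells = [(j, k) for j, cs in propositions.items() for k in range(len(cs))]
A = [[int(m in propositions[j][k]) for (j, k) in cells] for m in workcenters]
# The relation R is the set of couples (Mi; Ij,k) with a(Mi; Ij,k) = 1.
R = {(m, c) for m, row in zip(workcenters, A) for c, v in zip(cells, row) if v}
print(len(cells), len(R))   # 8 proposed cells, 35 couples in R
```

Each row of A is the binary profile of one workcenter over all proposed cells, which is exactly the input the clustering of the next section works on.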
Let us denote M the set of workcenters and I the set of all the proposed cells. Let R be the set of couples (Mi;Ij,k) such that a(Mi;Ij,k)=1. R is a relation on the sets M and I. If the N propositions are identical to each other, then it is possible, by permutation of the rows and columns of matrix A (figure 2), to obtain the remarkable aspect of figure 3.
Figure 2: Initial matrix A

Figure 3: Ideal consensus
The matrix of figure 3 is obtained by permutation of the rows and columns of the matrix of figure 2. The common propositions of the three analyses stand out on figure 3. None of the propositions contains workcenter M2. Figure 3 shows that R is a quasi-seriation relation on the sets M and I. Mathematically, a quasi-seriation relation [5], [16] Z on two sets M and I is a subset of the Cartesian product M × I such that

Z = (X1 × Y1) ∪ (X2 × Y2) ∪ ... ∪ (Xn × Yn)

where Xi and Yi, for i=1..n, are subsets of M and I respectively:
- X = {Xi; i=1..n} is a partition of M − X0 in n subsets;
- Y = {Yi; i=1..n} is a partition of I − Y0 in n subsets;
- X0 and Y0 are subsets of M and I respectively.

On figure 3, for example, X0={M2}, X1={M4,M6,M1}, X2={M5,M3}, Y0=∅, Y1={I1,1, I2,2, I3,1}, Y2={I3,2, I2,1, I1,2}. The diagonal blocks of figure 3 correspond to the Cartesian products Xi × Yi for i=1,2.

We have shown that a total convergence of points of view corresponds to a quasi-seriation relation on the sets M and I. As a quasi-seriation relation Z corresponds to a consensus of the working party, we will use the terms consensus Z and quasi-seriation relation Z interchangeably. In practical cases, the relation R on the sets M and I is seldom a quasi-seriation relation. In that case, a quasi-seriation relation "close to" R conveys the consensus of the working party. Our purpose is to find quasi-seriation relations "close to" R. Beforehand, the concept of proximity between a quasi-seriation Z and the relation R should be formalised. The quantities

g(Z,R) = card(R ∩ Z) / card(R)    (1)

and

a(Z,R) = card(R ∩ Z) / card(Z)    (2)

are similarity measures between Z and R. These quantities each express a different aspect of the likeness of Z and R. We consider that we have to define the "closest" quasi-seriation to R taking each of these aspects into account. Thus, two consensus Zi and Zj are said to be equally close to R if a(Zi,R) = a(Zj,R) and g(Zi,R) = g(Zj,R). A consensus Zi is considered as being closer to R than a consensus Zj if a(Zi,R) ≥ a(Zj,R) and g(Zi,R) > g(Zj,R), or if a(Zi,R) > a(Zj,R) and g(Zi,R) ≥ g(Zj,R). We sum up this last definition using the notation Zi > Zj. Applying the above definition to the example of figure 4, we conclude that Z3 > Z2 > Z1; however, Z3 and Z4 cannot be compared using the ">" relation. It is easy to verify that ">" defines a partial order relation on the set of quasi-seriation relations on M and I. A consensus Zi is said to be a maximal consensus on a set of consensus' G if there is no consensus Zj in G such that Zj > Zi. On figure 4, for example, Z3 and Z4 are the maximal consensus' of the set G = {Z1, Z2, Z3, Z4}.
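In other words, a consensus is "closer" to R when it Pareto-dominates in the couple (a,g). A minimal sketch of measures (1), (2) and the ">" comparison, with relations represented as Python sets of (workcenter, cell) pairs (the helper names are ours, not the paper's):

```python
def a(Z, R):
    """Measure (2): fraction of Z's pairs that also belong to R."""
    return len(R & Z) / len(Z)

def g(Z, R):
    """Measure (1): fraction of R's pairs covered by Z."""
    return len(R & Z) / len(R)

def dominates(Zi, Zj, R):
    """Zi > Zj: Zi is at least as close to R on both measures, strictly on one."""
    ai, aj = a(Zi, R), a(Zj, R)
    gi, gj = g(Zi, R), g(Zj, R)
    return (ai >= aj and gi > gj) or (ai > aj and gi >= gj)

def maximal_consensus(candidates, R):
    """Keep the candidates dominated by no other one (Pareto front in (a, g))."""
    return [Zi for Zi in candidates
            if not any(dominates(Zj, Zi, R) for Zj in candidates if Zj is not Zi)]
```

For instance, with R = {(M1,I1), (M1,I2), (M2,I1)}, the consensus {(M1,I1), (M2,I1)} dominates {(M1,I1)}: both have a = 1 while g rises from 1/3 to 2/3.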
[Figure omitted: positions of Z1, Z2, Z3, Z4 in the plane (a,g); the annotations Z = R and R ⊆ Z mark the limiting cases]
Figure 4: R agreements with Z1, Z2, Z3, Z4

In this section we have shown that a quasi-seriation relation can express the consensus of the working party. We have defined a relation denoted ">" allowing us to compare the proximity to the relation R, defined by the working party, of two quasi-seriation relations. As the relation ">" is a partial order relation, it generally does not allow us to define the quasi-seriation relation "the closest to R", but only a set of consensus' "close to" R, namely the maximal consensus'.
2.2 The method

The proposed method gives two types of information to the working party. It is made up of two steps whose characteristics are described below:
- Step 1: search for the maximal consensus' about R
- Step 2: search for the maximal consensus' about relations generated at random

Step 1:
Taking into account the tremendous number of consensus on the sets M and I, it is not possible, even with today's best computers, to list each consensus and check whether it is a maximal consensus. So, we content ourselves with the maximal consensus' of an initial set of consensus, obtained with a heuristic method. The initial set of consensus is composed of solutions of the following optimisation problem, which we call PB1.

PB1: Find a quasi-seriation relation Z on the sets M and I maximising the criterion:

F(Z, β, R) = (a(Z,R) − β)·card(Z)

where:
- R is a non-empty subset of M × I;
- card(Z) is the cardinal number of Z;
- β ∈ [0..1] is a real number.
The solution of PB1 with β = 0 is obtained with the algorithm presented in [15]. Solutions of PB1 with β ∈ ]0..1] are obtained with the algorithm presented in [8], [19]. The relevance of PB1 solutions as eligible maximal consensus' of the subsets of M × I is shown in section 2.3. The operating process is:
- collect the working party propositions in the form of a binary matrix A; the relation R is then defined;
- solve PB1 for β = 0.05i with i = 0, 1, 2, ..., 20; the set G of consensus is hence obtained;
- compute the quantities a(Z,R) and g(Z,R) for each element Z of G;
- retain the maximal consensus' of G;
- represent the maximal consensus' on a plot similar to figure 5.
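Assuming a PB1 solver is available (the heuristics of [8], [15], [19] are not reproduced here; `solve_pb1` below is a placeholder callable to be supplied), the step-1 loop can be sketched as:

```python
def step1(A_pairs, solve_pb1, betas=None):
    """Step 1: build R from matrix A given as (workcenter, cell) pairs, solve PB1
    over a grid of beta values, and keep the maximal consensus' with their (a, g).

    solve_pb1(R, beta) must return a quasi-seriation relation as a set of pairs.
    """
    if betas is None:
        betas = [0.05 * i for i in range(21)]      # beta = 0, 0.05, ..., 1.0
    R = set(A_pairs)
    candidates = [solve_pb1(R, b) for b in betas]

    # measures (1) and (2), and the ">" partial order on the couple (a, g)
    def a(Z): return len(R & Z) / len(Z) if Z else 1.0
    def g(Z): return len(R & Z) / len(R)
    def dominates(Zi, Zj):
        return ((a(Zi) >= a(Zj) and g(Zi) > g(Zj)) or
                (a(Zi) > a(Zj) and g(Zi) >= g(Zj)))

    maximal = []
    for Z in candidates:
        if any(dominates(Z2, Z) for Z2 in candidates):
            continue                               # dominated: not maximal
        if Z not in maximal:                       # several betas can repeat Z
            maximal.append(Z)
    return [(Z, a(Z), g(Z)) for Z in maximal]
```

With a dummy solver returning a larger block for larger β, `step1` keeps only the dominating consensus and reports its couple (a, g).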
The choice of the values of β stems from the objective properties of the solutions to PB1 on one hand, and from our experience on the other. The first of these properties (P1) is that a solution Z1 to PB1 verifies a(Z1,R) > β. Hence the chosen values of β allow us to reach the solutions to PB1 which have high values of a(Z1,R). The second property (P2) is that the solutions to PB1 for β = 0 contain the maximal consensus Z0 which verifies g(Z0,R) = 1. Furthermore, we note that the number of different solutions in the set G is finite. Experience shows that resolving PB1 for a greater number of values of β would not increase the cardinal number of G.
Proof of P1

Let us first demonstrate theorem 1:

Theorem 1
A solution Z1 to PB1 with β ∈ [0,1[ verifies a(Z1,R) > β and card(Z1) > 0.

Proof of theorem 1:
As R is a non-empty subset of M × I, there exists a consensus Z0 such that

card(Z0) = card(R ∩ Z0) = 1    (3)

So,

a(Z0,R) = card(Z0 ∩ R) / card(Z0) = 1    (4)

and

F(Z0,β,R) = 1 − β > 0    (5)

Let Z1 be a quasi-seriation relation solution of PB1; we then have:

F(Z1,β,R) = (a(Z1,R) − β)·card(Z1) ≥ F(Z0,β,R)    (6)

As card(Z1) is a positive number, (6) and (5) lead to the conclusions that a(Z1,R) > β and card(Z1) > 0. (QED proof of theorem 1)

Let us now show that a solution Z1 of PB1 with β = 1 verifies a(Z1,R) = β = 1. Indeed, let Z0 be defined as above (3). Then

F(Z0,1,R) = 0    (7)

Suppose that Z1 is a solution of PB1 for β = 1. Thus:

F(Z1,1,R) = (a(Z1,R) − 1)·card(Z1) ≥ F(Z0,1,R)    (8)

Either card(Z1) = 0 and a(Z1,R) = 1 by definition, or card(Z1) > 0. In the last case, (7) and (8) imply that

a(Z1,R) ≥ 1    (9)

However, by definition,

a(Z1,R) ≤ 1    (10)

Thus we have a(Z1,R) = β = 1. ◊

Theorem 1 and the property we have just established imply the validity of property P1.
Proof of P2

We consider the set S1 of quasi-seriations such that g(Z,R) = 1. Let S1 = {Z / g(Z,R) = 1} and let us prove the assertion: Z ∈ S1 ⇔ Z is a solution of PB1 for β = 0. The consensus Z0 = M × I belongs to S1, so S1 is a non-empty set. The function F(Z,β,R) for an arbitrary Z and β = 0 is given by:

F(Z,0,R) = g(Z,R)·card(R)    (11)

By definition g(Z,R) ≤ 1 and card(R) is fixed; thus F(Z,0,R) is maximised by each quasi-seriation of S1. Suppose that Z is a solution of PB1 for β = 0. Then,

F(Z,0,R) = g(Z,R)·card(R) ≥ F(Z0,0,R) = card(R)    (12)

Thus

g(Z,R) ≥ 1    (13)

By definition g(Z,R) ≤ 1; hence g(Z,R) = 1 and Z ∈ S1, which proves the assertion. ◊
Step 2

The results of this step make up a reference set of couples (a,g) used to validate the consensus obtained in step 1. The operating process is as follows:
- take a random sample of 100 matrices with the same number of rows, columns and one values as the matrix A; Rj, j = 1, ..., 100 are the relations obtained;
- for each relation Rj, solve PB1 for several values of β (β = 0.05k with k = 0, 1, 2, ..., 20); Zi (i = 1, ..., 2100) are the quasi-seriations obtained;
- retain the maximal consensus' of the Zi;
- represent the maximal consensus' on a plot similar to figure 10.
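Drawing the 100 reference matrices of step 2 amounts to scattering a fixed number of one values over matrices of the same shape as A; a minimal sketch:

```python
import random

def random_matrix(n_rows, n_cols, n_ones, rng=random):
    """Random binary matrix with exactly n_ones one values, same shape as A."""
    cells = [(i, j) for i in range(n_rows) for j in range(n_cols)]
    ones = set(rng.sample(cells, n_ones))          # positions of the 1 entries
    return [[1 if (i, j) in ones else 0 for j in range(n_cols)]
            for i in range(n_rows)]

# e.g. 100 reference matrices shaped like the 11 x 8 matrix A with card(R) = 35
sample = [random_matrix(11, 8, 35) for _ in range(100)]
```

Each reference matrix then plays the role of a relation Rj to which PB1 is applied.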
2.3 Solutions of PB1 and maximal consensus'

The use of the solutions to PB1 to find maximal consensus' is justified by the following: the maximal consensus' of the solutions of PB1 are maximal consensus' on the whole set of quasi-seriations on M × I. In this section we shall demonstrate this last statement. We must first define the curve (a(Z,R), g(Z,R)) for the solutions of PB1 and then discuss the relationship between the set of solutions of PB1 and the set of maximal consensus'. The quasi-seriations Z which maximise the function F(Z,β,R) construct the curve (a(Z,R), g(Z,R)) on the graph of figure 5. As can be seen, several consensus Z for which g(Z,R) = 1 can maximise the function F(Z,β,R), and likewise for a(Z,R) = 1.
[Figure omitted: plot of the couples (a,g) for solutions Z0 to Z7 of PB1; points with g(Z,R) = 1 lie in zone 1, points with a(Z,R) = 1 in zone 2, the remainder in zone 3]
Figure 5: solutions of PB1

To study this relationship we divide the plane (a,g) into three zones:
1) Zone 1: Z such that g(Z,R) = 1
2) Zone 2: Z such that a(Z,R) = 1
3) Zone 3: Z such that a(Z,R) ≠ 1 and g(Z,R) ≠ 1
Zone 1: g(Z,R) = 1
This zone has been studied in detail in point 2.2, where we have shown that the solutions to PB1 of this zone contain the maximal consensus such that g(Z,R) = 1.
Zone 2: a(Z,R) = 1
In this zone, as in zone 1, there can be many solutions of PB1 which are not maximal consensus'. The maximal consensus such that a(Z,R) = 1 is a solution of PB1. The following theorems validate and make these statements more precise. Let us define S2 as the set of all quasi-seriation subsets of M × I such that a(Z,R) = 1.

Theorem 2: Let Zmax be a solution of PB1 for βmax ∈ ]0..1] such that a(Zmax,R) = 1, and Z a solution of PB1 for β ≥ βmax. Then a(Z,R) = 1 and Zmax ≥ Z.
Proof
As Zmax is a solution of PB1 for βmax such that a(Zmax,R) = 1, we have:

F(Zmax, βmax, R) ≥ F(Z, βmax, R)    (14)

Taking into account the definitions of a and g (see (1) and (2)):

(1 − βmax)·card(Zmax) ≥ (a(Z,R) − βmax)·card(Z)    (15)

(g(Zmax,R) − g(Z,R))·card(R) ≥ βmax·(card(Zmax) − card(Z))    (16)

Z is a solution of PB1 for β ≥ βmax, thus:

F(Zmax, β, R) ≤ F(Z, β, R)    (17)

Hence, from the definitions of a and g (see (1) and (2)):

(1 − β)·card(Zmax) ≤ (a(Z,R) − β)·card(Z)    (18)

(g(Zmax,R) − g(Z,R))·card(R) ≤ β·(card(Zmax) − card(Z))    (19)

From (16), (19) and from the fact that βmax and β are positive, we deduce that

card(Zmax) ≥ card(Z)    (20)

From (20) and (18) we deduce that

(1 − β)·card(Z) ≤ (a(Z,R) − β)·card(Z)    (21)

In the case where card(Z) = 0, (18) implies that card(Zmax) = 0; thus Zmax = Z. Meanwhile, we generally have card(Z) > 0. This and (21) imply that a(Z,R) ≥ 1. Since, by definition, a(Z,R) ≤ 1, we conclude that a(Z,R) = 1, which demonstrates the first part of theorem 2. Moreover, since R is non-empty, we can deduce from (20) and (16) that

g(Zmax,R) ≥ g(Z,R)    (22)

Since we also have a(Zmax,R) = a(Z,R) = 1, then Zmax ≥ Z, which demonstrates the second part of this theorem. ◊

Theorem 3
If βmax ∈ ]0..1[ is the smallest value of β for which Zmax is a solution of PB1 with a(Zmax,R) = 1, then Zmax is a maximal consensus.

Proof:
Let Z be a quasi-seriation such that a(Z,R) = 1 and such that it is not a solution of a PB1. As Zmax is a solution of PB1 for βmax such that a(Zmax,R) = 1, we have (14), (15) and (16). Since a(Z,R) = 1, (15) becomes

(1 − βmax)·card(Zmax) ≥ (1 − βmax)·card(Z)    (23)

Thus (20) is still verified. Moreover, as R is non-empty, we can deduce from (20) and (16) that g(Zmax,R) ≥ g(Z,R), which demonstrates that the maximal consensus of S2 is a solution of a PB1. This result, associated with that of theorem 2, implies that Zmax is a maximal consensus. ◊

Zone 3: a(Z,R) ≠ 1 and g(Z,R) ≠ 1
We now consider Z1, a solution of PB1 such that a(Z1,R) ≠ 1 and g(Z1,R) ≠ 1. This is equivalent to studying the solutions of PB1 for β ∈ ]βmin, βmax[, with βmax defined as in theorem 3. The next theorem demonstrates the relationship between the set of solutions of PB1 for β ∈ ]βmin, βmax[ and the set Smax of maximal consensus' of M × I. We consider whether, given Z1 a solution of PB1 for β ∈ ]βmin, βmax[, there can exist another quasi-seriation Z2 such that Z2 > Z1.

Theorem 4
A solution Z1 of PB1 for β ∈ ]βmin, βmax[ is a maximal consensus.

Proof:
Let Z1 be a solution of PB1 for β ∈ ]βmin, βmax[ and let Z2 be a quasi-seriation relation.

i) Case where Z2 is not a solution of PB1 for β. Thus

F(Z1,β,R) > F(Z2,β,R)    (24)

Then (1) and (2) imply:

(a(Z1,R) − β)·card(Z1) > (a(Z2,R) − β)·card(Z2)    (25)

and

g(Z1,R)·card(R) − β·card(Z1) > g(Z2,R)·card(R) − β·card(Z2)    (26)

In order to have Z2 > Z1, we must have in particular:

a(Z2,R) > a(Z1,R)    (27)

From theorem 1 we get

a(Z1,R) > β    (28)

We also have

card(Z2) ≠ 0    (29)
Indeed, by definition,

a(Z2,R) = card(R ∩ Z2) / card(Z2)    (30)

thus

card(Z2) = 0 ⇒ a(Z2,R) = 0    (31)

Now (27) and (28) imply that

a(Z2,R) > β > 0    (32)

Thus (25) and (27) give us

card(Z1) / card(Z2) > (a(Z2,R) − β) / (a(Z1,R) − β) > 1    (33)

Hence

card(Z1) > card(Z2)    (34)

Then (26) implies that:

(g(Z1,R) − g(Z2,R))·card(R) > β·(card(Z1) − card(Z2)) > 0    (35)

Thus

g(Z1,R) > g(Z2,R)    (36)

In this case, we deduce from (27) and (36) that Z1 and Z2 cannot be compared. Hence there does not exist a quasi-seriation relation Z2, not a solution of PB1 for β, such that Z2 > Z1.

ii) Case where Z1 and Z2 are two solutions of PB1 for β ∈ ]βmin, βmax[ such that the co-ordinates of Z1 and Z2 differ in the plane (a,g). Then

F(Z1,β,R) = F(Z2,β,R)    (37)
So we have:

(a(Z1,R) − β)·card(Z1) = (a(Z2,R) − β)·card(Z2)    (38)

g(Z1,R)·card(R) − β·card(Z1) = g(Z2,R)·card(R) − β·card(Z2)    (39)

In order to have Z2 > Z1, we must have (27). If

a(Z2,R) = a(Z1,R)    (40)

then (38) implies that

card(Z1) = card(Z2)    (41)

Since R is a non-empty set, from (39) we have

g(Z1,R) = g(Z2,R)    (42)

Thus (40) and (42) show that the co-ordinates of Z1 and Z2 are the same, which contradicts the initial hypothesis. If

a(Z2,R) > a(Z1,R)    (43)

then, as card(Z2) > 0 (see theorem 1), (38) and (43) imply

card(Z1) / card(Z2) = (a(Z2,R) − β) / (a(Z1,R) − β) > 1    (44)

Thus

card(Z1) > card(Z2)    (45)

From (39) and (45) we have

(g(Z1,R) − g(Z2,R))·card(R) = β·(card(Z1) − card(Z2)) > 0    (46)

Hence

g(Z1,R) > g(Z2,R)    (47)

Hence (43) and (47) show that Z1 and Z2 are not comparable, and there exists no solution Z2 of PB1 for β ∈ ]βmin, βmax[ which dominates Z1. Thus any consensus solution of PB1 that belongs to zone 3 is a maximal consensus. ◊

We have proved that the maximal consensus' of the solutions of PB1 are maximal consensus' on the whole set of quasi-seriations on M × I.
3. APPLICATIONS

3.1 Application to the textbook case

Let us return to the example in the introduction. The solutions proposed by the working party are:
1. Cell I1,1: M01, M05, M06, M10
2. Cell I1,2: M02, M03, M04, M08, M11
3. Cell I1,3: M07, M09
4. Cell I2,1: M01, M05, M06, M10
5. Cell I2,2: M02, M03, M04, M07, M08, M09, M11
6. Cell I3,1: M01, M03, M05, M06, M10
7. Cell I3,2: M02, M04, M07, M08, M09, M11
8. Cell I4,1: M07, M09
Let us apply step 1 of the method presented in point 2.2. The binary matrix of figure 6 represents the workcenters-to-cells assignment relation R. The matrix includes eleven rows corresponding to the eleven workcenters M01 to M11 and eight columns representing the four solutions in 8 cells. We have then resolved PB1 for 21 values of β and calculated the consensus couples (a,g). Only three consensus are different; furthermore, they are maximal. We call them Z1, Z2 and Z3. Figures 7, 8 and 9 specify the consensus Z1,
Z2 and Z3, as well as the values of the couples (a,g).

[Figure omitted: binary matrix of the relation R, rows M01 to M11, columns I1,1, I1,2, I1,3, I2,1, I2,2, I3,1, I3,2, I4,1; card(R) = 35]
Figure 6: the relation R

[Figure omitted: block matrix of Z1 over R; a(Z1,R) = 28/28 = 1.00, g(Z1,R) = 28/(28+7) = 0.80]
Figure 7: relation Z1

[Figure omitted: block matrix of Z2 over R; a(Z2,R) = 30/(30+1) = 0.97, g(Z2,R) = 30/(30+5) = 0.86]
Figure 8: relation Z2

[Figure omitted: block matrix of Z3 over R; a(Z3,R) = 34/(34+13) = 0.72, g(Z3,R) = 34/(34+1) = 0.97]
Figure 9: relation Z3
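As a cross-check, the couple (a,g) of Z1 announced in figure 7 can be recomputed from the cell contents listed in section 3.1, pairing the groups G'1 = {M01,M05,M06,M10}, G'2 = {M02,M04,M08,M11} and G'3 = {M07,M09} with the cells they map to; a small sketch (the cell contents are those reconstructed above):

```python
# The eight cells proposed by the working party define the relation R as pairs.
cells = {
    "I1,1": ["M01", "M05", "M06", "M10"],
    "I1,2": ["M02", "M03", "M04", "M08", "M11"],
    "I1,3": ["M07", "M09"],
    "I2,1": ["M01", "M05", "M06", "M10"],
    "I2,2": ["M02", "M03", "M04", "M07", "M08", "M09", "M11"],
    "I3,1": ["M01", "M03", "M05", "M06", "M10"],
    "I3,2": ["M02", "M04", "M07", "M08", "M09", "M11"],
    "I4,1": ["M07", "M09"],
}
R = {(m, c) for c, ms in cells.items() for m in ms}

# Z1 pairs each workcenter group with its cells (quasi-seriation blocks).
blocks = [
    (["M01", "M05", "M06", "M10"], ["I1,1", "I2,1", "I3,1"]),  # G'1
    (["M02", "M04", "M08", "M11"], ["I1,2", "I2,2", "I3,2"]),  # G'2
    (["M07", "M09"],               ["I1,3", "I4,1"]),          # G'3
]
Z1 = {(m, c) for ms, cs in blocks for m in ms for c in cs}

a = len(R & Z1) / len(Z1)   # measure (2)
g = len(R & Z1) / len(R)    # measure (1)
```

The computation reproduces the figure-7 values a(Z1,R) = 1.00 and g(Z1,R) = 28/35 = 0.80.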
We can now proceed with step 2. Figure 10 presents the couples (a,g) obtained from the resolution of PB1 for 100 randomly generated matrices and 21 values of β. Each matrix possesses 11 rows, 8 columns and 35 one values (cardR = 35). A statistical study [20] has shown that the borders of the 2100 points of the randomly generated matrices contain 99% of the points of the maximal consensus' of the matrices with the same number of rows and columns and a number of one values equal to cardR. These borders are drawn on figure 10.
[Figure omitted: plot in the plane (a,g); small dots mark the randomized matrices, labelled points mark the relations Z1, Z2 and Z3, and the 99% border of the random points is drawn]
Figure 10: graph of consensus'

We can complete the former results with a few remarks. The position of Z1, Z2 and Z3 on figure 10 relative to the borders of the randomly generated matrices' points allows us to state that the probability of each consensus being a matter of chance is less than 1%. Each of these consensus is a candidate for a detailed analysis. The analysis of Z3 (figure 9) reveals that the workcenters within the group G1 = {M01,M05,M06,M10} are systematically in the same cell. On this point, there is indeed a consensus between all members of the working party. The same consensus shows that workcenters of G1 and of G2 = {M02,M03,M04,M07,M08,M09,M11} are never, with the exception of M03, included in the same cell of a working party solution. Hence, there is a consensus within the working party on the separation of the groups of workcenters G1 and G2. The co-ordinates of Z3 on the graph of maximal consensus' (figure 10) foretell this result: indeed, g(Z3,R) = 0.97 indicates that the workcenters in a group of Z3 are almost never associated with those of another group.

Solutions Z2 and Z1 are close to one another on the graph of consensus'. The analysis of the matrices of figures 7 and 8 shows that the consensus Z1 and Z2 are nearly identical. This finding generalises: two consensus that are close on the graph of consensus' are generally similar. We shall therefore limit our analysis to that of Z1. As a(Z1,R) = 1, we know that the Z1 groups of workcenters are very often gathered within the same cell by the working party members, so this solution will probably reveal consensus' of the working party. The detailed analysis of solution Z1 confirms this presumption. Indeed, the workcenters of the groups G'1 = {M01,M05,M06,M10}, G'2 = {M02,M04,M08,M11} and G'3 = {M07,M09} are systematically gathered within the solutions elaborated by the working party. There is very much a consensus of the working party on this point. Furthermore, the workcenters of groups G'1 and G'2 are never clustered together in a working party solution; likewise for the workcenters of groups G'1 and G'3. The value g(Z1,R) = 0.80 lets us anticipate this last kind of result for consensus Z1.
3.2 Industrial application

In this study, we were asked to evaluate the advisability of reorganising workcenters into manufacturing cells. The workshop studied includes 244 workcenters producing 200 items. The feasibility study was carried out using the method of project management [4]. The material flow analysis was performed with the help of SAFIR, a specialised software for process planning clustering [23]. The number of propositions of the reorganisation committee was about twenty. The relation R matrix possesses 244 rows and 1046 columns; the cardinal number of relation R is 4880. The curve C1 links the maximal consensus' obtained from the propositions of the working party in step 1. The border containing 99% of the points of the randomly generated matrices is given by C2.
[Figure omitted: plot in the plane (a,g); the points Z1 to Z7 are the maximal consensus' of step 1 and the curve C2 is the 99% border of the randomly generated matrices]
Figure 11: Consensus for W2

Step 1 could provide us with up to 21 maximal consensus'. However, we note on figure 11 that only seven couples (a,g) (Z1 to Z7) are obtained from this process: different values of β for PB1 can give rise to the same maximal consensus. Practical trials have shown that an increase in the number of values of β used does not, in general, increase the number of maximal consensus' obtained. The fact that the number of maximal consensus' is limited is an interesting property in practice, since it means that the working party can quickly and efficiently analyse and compare all the possible solutions. Each proposed consensus is obviously useful for elaborating the final decision. Solutions Z1 to Z7 are far away from the border; it is therefore unlikely that they are due to chance. The quantity g(Zi,R) for i < 7 is greater than 0.8. This suggests that the solutions Z1 to Z6 show up groups of workcenters generally dissociated in the working party solutions, and thus consensus'. A consensus of a different nature is revealed by solution Z7, for which a(Z7,R) = 1. The analysis of consensus Z3 and Z7 allowed us to quickly and definitively classify 80% of the workcenters into families with the working party's agreement. The working party was thus able to focus on the remaining 20%.
4. CONCLUSION

For many years, several authors have proposed methods to recognise manufacturing cells from the analysis of the machine-part incidence matrix. In fact, the problem of reorganising a workshop into cells is very complex, because many technical constraints and social and economic points of view must be considered in industrial applications. Depending on the assumptions used in the methods, the results vary, and it is difficult to deal with them. Here, we propose a method which permits us to point out the similarities between the different points of view. The method is based on a cross-clustering technique. Using the a parameter and the overlapping rate g, the working party can easily identify the best grouping of workcenters into cells, reflecting its various practical aspects.

This study raises, however, a number of points that have not been resolved. From a mathematical point of view, we have shown that the set of maximal consensus' is included in the solutions of the quasi-seriation problem (PB1). It remains for us to precisely define the relations between these two sets in order to be able to evaluate, for example, the exact number of maximal consensus'. Statistical trials that we have carried out for the randomly generated curves show that 2100 matrices are sufficient to define the boundaries of these curves at a level of 99%. These binary matrices are subject to the number of rows, columns and percentage of 1 values being fixed. We are in the process of formulating and putting into practice a rigorous method which will allow us to minimise the number of random matrices used. From a practical point of view, problems concerning the robustness of solutions may occur if the distance from the random curve to the curve of consensus' formulated by the working party is great. Therefore, we find it necessary to concentrate on this point and create a measure of the robustness of the reorganisation committee's solutions.

Another practical aspect which could be improved is the analysis of the consensus' considered as being interesting from the maximal consensus curve. We are working on the systematisation of these matrix analyses and on the automation of these simple and repetitive tasks. No firm is ready to embark on a relocation scheme, which is costly in study hours, without strong presumptions about the results. The method we have presented enables us to answer the following question: is it reasonable to embark on a cell relocation study? Indeed, if a firm has production management software at its disposal, it is inexpensive to extract the manufacturing routings, classify them according to various points of view with the help of a software package, and then analyse the solutions obtained with the method presented in this paper.
REFERENCES

1. BALLAKUR A., STEUDEL H. J., "A within-cell utilisation based heuristic for designing cellular manufacturing systems", Int. J. Prod. Res., 1987, Vol. 25, No. 5, 639-665.
2. BARTH M., MUTEL B., "Data for management layout reorganisation: a systemic approach", 8th International Conference on CAD/CAM, Robotics and Factories of the Future, Metz, France, August 17-19, 1992, pp. 670-680.
3. BARTH M., DE GUIO R., MUTEL B., "An Help for Solving Dilemmas Encountered in Flow Analysis", IEEE Computer Soc. Press, ISBN 0-8186-4030-8, 1993, 46-51.
4. BARTH M., "Methodologic contribution to workshop reorganisation", PhD dissertation, University of Nancy 1, F-54000 Nancy, France, December 1991.
5. BEDECARRAX C., "Quadri-décomposition en analyse relationnelle et applications à la sériation", IBM Paris Scientific Center Technical Report F117, 1987.
6. BURBIDGE J. L., "Production flow analysis", The Production Engineer, 1975, 742-752.
7. DE GUIO R., BARTH M., OSTROSI E., MUTEL B., "Workshop reorganisation and computer aided support for PFA", Proc. 4th International Congress of Industrial Engineering, IUSPIM/University of Aix-Marseille 3, France, December 1993, tome 2, 269-275.
8. DE GUIO R., "Contribution to workshop breakup", PhD dissertation, University Louis Pasteur, F-67084 Strasbourg, France, 1991.
9. DE GUIO R., MUTEL B., "A general approach of the part workcenter grouping problem for the design of cellular manufacturing", in Advances in Factories of the Future, CIM and Robotics, Elsevier, ISBN 0-444-89856-5, 1992.
10. DE WITTE J., "The use of similarity in production flow analysis", Int. J. Prod. Res., Vol. 18, No. 4, 1980, 503-514.
11. HALL R. W., "Attaining manufacturing excellence", Dow Jones-Irwin, 1987, p. 128, ISBN 0-87094-925-X.
12. KAPARTHI S., SURESH N. C., "Workcenter-component cell formation in group technology: a neural network approach", International Journal of Production Research, 1992, Vol. 30, No. 6, 1353-1367.
13. KUSIAK A., CHUNG Y., "GT/ART: using neural networks to form workcenter cells", Manufacturing Review, Vol. 4, December 1991.
14. KUSIAK A., "EXGT-S: A knowledge base system for group technology", Int. J. Prod. Res., Vol. 26, No. 5, 1988, 887-904.
15. KUSIAK A., CHOW W. S., "Efficient solving of the group technology problem", Journal of Manufacturing Systems, Vol. 6, No. 2, 117-124.
16. MARCOTORCHINO F., "A unified approach of the block seriation problem", Journal of Applied Stochastic Models and Data Analysis, Vol. 3, No. 2, J. Wiley, 1987.
17. Mc AULEY, "Workcenter grouping for efficient production", The Production Engineer, 1972, 53-57.
18. MOON Y. B., "Forming part-workcenter families for cellular manufacturing: a neural-network approach", International Journal of Advanced Manufacturing Technology, 1990, 5:278-291.
19. MUTEL B., BOUZID L., DE GUIO R., "Application of conceptual learning techniques to generalized group technology", Applied Artificial Intelligence, 6, 1992, 443-458.
20. O'REGAN N., "Seriation and quasi-seriation", Master's dissertation in mathematics, University of Strasbourg I, France, 1994.
21. RAJAGOPALAN R., "An ideal seed non hierarchical clustering algorithm for cellular manufacturing", Int. J. Prod. Res., 1986, Vol. 24, No. 2, 451-464.
22. RAJAGOPALAN R., "An ideal seed non hierarchical clustering algorithm for cellular manufacturing", Int. J. Prod. Res., Vol. 24, No. 2, 1986, 451-464.
23. RAJAGOPALAN R., "Design of cellular production systems: a graph theoretic approach", Int. J. Prod. Res., Vol. 13, No. 6, 1975, 567-579.
24. SAFIR is a product of the Laboratoire de Recherche en Productique de Strasbourg - ENSAIS, 24 bld. de la Victoire, F-67084 Strasbourg Cedex, France.
25. VANELLI A., KUMAR K. R., "A method for finding minimal bottleneck cells for grouping part-workcenter families", Int. J. Prod. Res., 1986, Vol. 24, 387-401.
PART THREE
ARTIFICIAL INTELLIGENCE AND COMPUTER TOOLS
Planning, Design, and Analysis of Cellular Manufacturing Systems
A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors)
© 1995 Elsevier Science B.V. All rights reserved.
Adaptive Clustering Algorithm for Group Technology: An Application of the Fuzzy ART Neural Network

Soheyla Kamal
5530 Heather Ln, Orefield, PA 18069
FACT (Fuzzy ART with Add Clustering Technique) is a new clustering algorithm based on neural network techniques and the fuzzy logic concept. In this paper, the structure and application of this algorithm with respect to the group technology (GT) problem are presented. The characteristics and abilities of the algorithm are shown through several examples. To evaluate the quality of the clustering results, two sets of performance measures are considered. The first set evaluates the performance of the clustering results independently of the group technology application. The second set includes three group-technology-dependent measures. These measures are used for evaluating the performance of the FACT algorithm with respect to several GT clustering algorithms. A comparison of the results of the FACT algorithm with several other GT clustering algorithms published in the literature has shown that FACT's results dominate those of the other algorithms.
1.0 Introduction

Clustering based algorithms are the most common methods for solving the group technology problem in the cellular manufacturing environment. Production flow analysis is the technique that provides the required information for these clustering algorithms [1]. Ideally, each part family will map to a unique machine cell, and the entire family need never leave the cell in order to complete all necessary processing. Practically, this may be either impossible or computationally infeasible to achieve. In most cases, the actual goal changes to satisfying objectives such as minimizing the number of inter-cell moves, minimizing the number of cells, maximizing the utilization of machines, minimizing the duplication of machines in different cells, maximizing the percentage of operations of a part processed within a single cell, etc. Cluster analysis approaches group objects (parts or machines) into homogeneous clusters (groups) based on object features. The existing clustering approaches to the group technology problem can be classified as (1) matrix based methods [2, 3, 4, 5, 6, 7, 8]; (2) mathematical programming algorithms [9, 10, 11, 12, 13]; (3) graph theory based methods [14, 15, 16]; (4) pattern recognition techniques [17, 18, 19, 20, 21]; (5) fuzzy logic approaches [22, 23]; (6) expert system based methods [1]; (7) neural network based methods [24, 25, 26, 27, 28, 29]. For a complete review of the literature, interested readers are referred to Kamal [30]. All the above methods, with the exception of neural network based methods, are serial algorithms requiring significant processing time. Moreover, these methods also require the storage and manipulation of large matrices and virtually always focus on binary attributes. Even though neural networks are, at present, usually simulated via serial algorithms, they still require significantly less storage and processing time than conventional approaches. The next section presents the information required by a GT clustering algorithm. In section 3, the fuzzy ART neural network, which is used in the present algorithm, is introduced. The structure of the FACT algorithm is presented in section 4. The performance measures used to evaluate the goodness of the clustering results are discussed in section 5. An example from the literature illustrates the technique in section 6. In section 7, the performance of the FACT algorithm is compared with several GT clustering algorithms. Finally, conclusions are presented.
2.0 Clustering Based Approaches to Group Technology

The information required by a GT clustering algorithm is usually provided in the form of a part-machine incidence matrix A. If the information is represented in binary form, element [Aij] indicates whether part i requires machine j (Aij=1) or not (Aij=0). Alternatively, the transpose of the matrix, the machine-part matrix, gives element [Aji], which is 1 if machine j is required by part i, and 0 otherwise. Table 1 presents an incidence matrix based on the parts' process routes.
Table 1: Clustering parts based on their process routes
[Table omitted: binary part-machine incidence matrix; rows are parts, columns are machines, and entry (i,j) is 1 when part i requires machine j]
Information related to production volume, demand for a part on each machine, and processing time of a part on each machine can also be considered in clustering parts and machines. To consider this information, the entries of the incidence matrix should be continuous values. Table 2 presents a continuous incidence matrix; its entries are the processing time of each part on each machine.
Table 2- Clustering parts based on their process routes and process times

             Machines →
  Parts ↓    1     2     3    ...
     1       0.5   0.2   0    ...
     2       0     4     0.1  ...
     3       7     0.3   7    ...
     4       0     0     0.3  ...
     5       0.2   9     0    ...
     6       8     0.1   6    ...
3.0 Neural Networks and the GT Problem

Neural networks are excellent at some of the things that biological systems do, such as pattern classification [31]. Unsupervised neural networks are applied to pattern classification and clustering problems, and the group technology problem can be viewed as a clustering problem. One of the well-known families of unsupervised neural networks is the ART (adaptive resonance theory) family [32, 33, 34]. ART neural networks have been used for several clustering problems [35]. The architecture of our clustering algorithm is based on fuzzy ART, which is the latest development in this family of neural networks.
3.1 Fuzzy ART

Fuzzy ART is an unsupervised category learning and pattern recognition network [36]. It incorporates computations from fuzzy set theory [37] into the adaptive resonance theory (ART) based neural network. Fuzzy ART is capable of rapid, stable clustering of analog or binary input patterns. The network consists of two layers, the input (F1) and the output (F2) layers. The number of possible categories (output nodes) can be chosen arbitrarily large. At first, each category is said to be uncommitted; a category becomes committed after being selected to code an input pattern. Each input I is presented as an M-dimensional vector I = (I1, I2, ..., IM), where each component Ii is in the interval [0, 1]. One weight vector Wj = (Wj1, Wj2, ..., WjM) is used to represent each output category j. Initially Wj1 = Wj2 = ... = WjM = 1 for all j. To categorize input patterns, the output nodes receive net input in the form of a choice function Tj. The following choice function is used:
Tj = |I ∧ Wj| / (α + |Wj|)    (1)
where ∧ is the fuzzy MIN operator (Zadeh 1965), defined as

(X ∧ Y)i = min(xi, yi)    (2)
and the norm |·| is defined by

|X| = Σ(i=1..M) xi    (3)
The category (output node) with the highest value of Tj is nominated to claim the incoming pattern, where

TJ = max{Tj : j = 1, ..., N}    (4)

For the category to accept the nomination, the match function should exceed the vigilance parameter; i.e.,

|I ∧ WJ| / |I| ≥ ρ    (5)
In the fast learning mode, if the first nominated category does not pass the similarity test, an uncommitted node is committed to the input pattern. The weight vector of the winner category J is updated as follows:

WJ(new) = β(I ∧ WJ(old)) + (1 − β)WJ(old)    (6)
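Equations (1)-(6) can be pulled together into a single presentation step. The sketch below is our own minimal reading of fuzzy ART in Python with NumPy; the names (present_pattern, alpha, beta, rho, committed) are ours, and the search order is simplified relative to a full implementation.

```python
import numpy as np

def fuzzy_min(x, y):
    """Eq. (2): component-wise fuzzy MIN operator."""
    return np.minimum(x, y)

def norm(x):
    """Eq. (3): L1 norm, the sum of the components."""
    return float(np.sum(x))

def present_pattern(I, W, committed, alpha=0.01, beta=1.0, rho=0.5):
    """Present input I to a fuzzy ART layer with weight matrix W (one row
    per output node). Returns the index of the node that codes I."""
    # Eq. (1): choice function for every output node
    T = np.array([norm(fuzzy_min(I, Wj)) / (alpha + norm(Wj)) for Wj in W])
    # visit nodes in order of decreasing choice value (Eq. 4)
    for j in np.argsort(-T, kind="stable"):
        # Eq. (5): vigilance (match) test; an uncommitted node (all-ones
        # weights) always passes, so it absorbs any unmatched input
        if norm(fuzzy_min(I, W[j])) / norm(I) >= rho:
            # Eq. (6): update the winner's weights
            W[j] = beta * fuzzy_min(I, W[j]) + (1 - beta) * W[j]
            committed[j] = True
            return int(j)
    raise RuntimeError("no output node available")
```

With beta = 1.0 this runs in the fast learning mode described above; lowering beta gives the slow recode behaviour discussed later in the paper.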
Fuzzy ART has three independent parameters:

1) The choice parameter α > 0, which Carpenter et al. suggest be close to zero, affects the search procedure. The choice parameter controls the choosing of a category whose weight vector Wj is the largest coded subset of the input vector I (if such a category exists). The following example shows this property. Consider a two-dimensional input pattern I = (.8, 1). Assume there are two categories W1 = (.1, .2) and W2 = (.4, .8) whose weight vectors are subsets of the input pattern. If we consider α = 0, the values of the choice function for the two categories will be equal:

T1 = (.1 + .2)/(.3) = 1
T2 = (.4 + .8)/(1.2) = 1

Here there is a tie between categories 1 and 2. Since T1 is visited first, it will be chosen even though it is not the best choice. Considering a small value of α such as α = .01 changes this condition:

T1 = (.1 + .2)/(.01 + .3) = .9677
T2 = (.4 + .8)/(.01 + 1.2) = .9917

Here the second category, which is more similar to the input, will be chosen.
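The arithmetic of this example is easy to check directly. The tiny choice function below is our own restatement of Eq. (1), with |X| the sum of the components and ∧ the component-wise minimum:

```python
# Our own numeric check of the choice-parameter example above.
def choice(I, W, alpha):
    return sum(min(i, w) for i, w in zip(I, W)) / (alpha + sum(W))

I  = (0.8, 1.0)
W1 = (0.1, 0.2)   # coded subset, far from I
W2 = (0.4, 0.8)   # coded subset, closer to I

# alpha = 0: both categories tie at T = 1, so whichever is visited first wins
assert choice(I, W1, 0.0) == choice(I, W2, 0.0) == 1.0

# a small alpha breaks the tie in favour of the larger subset W2
assert round(choice(I, W1, 0.01), 4) == 0.9677
assert round(choice(I, W2, 0.01), 4) == 0.9917
```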
2) The learning parameter β ∈ [0, 1], which defines the degree to which the weight vector Wj is updated (recoded) with respect to an input vector claimed by node J.

3) The vigilance parameter ρ ∈ [0, 1], which defines the required level of similarity of patterns within clusters.

In the fast learning mode, Carpenter et al. (1991) suggest β = 1. Fuzzy ART learns in fast commit-slow recode mode, in which β = 1 the first time an uncommitted node is committed (fast learning/commitment) and β < 1 (slow recode) when training a node that is already committed. For a complete review of the fuzzy ART neural network and its drawbacks, interested readers are referred to Kamal [30].
4.0 The FACT Algorithm

FACT is a powerful general purpose clustering algorithm which can be used for applications other than group technology. With respect to the group technology problem, the FACT algorithm has the following characteristics:

1- Can accept continuous inputs such as weight, length, production volume, and processing time of a part on each machine;
2- Creates part families and machine cells simultaneously;
3- Is not problem dependent and is applicable to ill-structured problems with undesirable conditions (e.g. machines shared by several parts);
4- Has potential for parallel implementation, and its design and computational demand do not render it inefficient for problems of realistic size;
5- Determines the part family of a new part without reclustering all parts;
6- Creates new part families when a part does not fit into any of the existing part families;
7- Does not require determination of the number of part families a priori;
8- Can assign a part to a family without complete information about the part.
The FACT algorithm has three major components. The first is the "Add" method. The second is a method for clustering parts and machines simultaneously. The third is a technique for clustering new parts and/or machines without reclustering all parts/machines. In the following sections we discuss these components.
4.1 The "Add" Method

This method uses fuzzy ART to generate a hierarchy of alternative clusterings from which the best can be chosen. In this method, inputs are progressively merged until the fewest number of cells that can be formed, or that is desired, is reached. The algorithm for this method follows:

0. Set k = 0.
1. For vigilance ρk and learning parameter βk, group the inputs by fuzzy ART. Call the resulting number of groups nk and the elements of group i {eji}.
2. Each cluster obtained in step 1 may have more than one member. Form nk new patterns (Ei, i = 1, ..., nk) by adding and scaling the members of each cluster: Ei = vector sum(eji). Scale the inputs.
3. If either nk ≤ N* or nk = 1, stop. Otherwise, set k = k+1, select vigilance ρk < ρk−1 and/or learning parameter βk < βk−1, and go to 1. (Notice that if we go back to step 1, we use a new fuzzy ART network in which Wj1(0) = ... = WjM(0) = 1.)

N* = desired number of clusters.

It should be noticed that N* is not a requirement of the algorithm; it is an option for when a user definitely wants N* clusters. Otherwise, when the algorithm stops at step 3 with nk = 1, the user can choose the best solution from among all the clusterings produced. The Add method operates in slow commit-slow recode mode. In slow commit mode, if the first nominated output node does not pass the similarity test
all the committed output nodes should be tested before committing an uncommitted node, and βinit is slightly less than one the first time a node is committed. In the slow recode mode, β < 1 for a node which is already committed. In fuzzy ART, the number of clusters is determined by the values that the user chooses for the parameters ρ and β, and by the input matrix. The Add method uses fuzzy ART at each iteration. Therefore, by changing the values of ρ and β at each iteration, the number of clusters obtained can be controlled. Producing a desired number of clusters is based on trial and error. In solving a group technology problem two cases can occur. I) The user is searching for a good solution with a specific number of clusters (for example, four part families and four machine cells). II) The user is not concerned with a specific number of clusters and is looking for the solution which performs best with respect to the performance measures. The procedure for solving the first case with the Add method is as follows:

1) Choose values for the parameters ρ and β and solve the problem. If this choice of ρ and β did not produce the desired number of clusters, change the values of the parameters and try again. Usually, several combinations of ρ and β can produce a specific number of clusters. Every time the user changes the values of ρ and β and reaches the desired number of clusters, it is possible that the members of a cluster differ from the members of the cluster in a previous solution. With the Add method it is possible that more than one iteration is required to reach the desired number of clusters. We recommend that for the first iteration the user choose high values of ρ and β; this lets the network become familiarized with all the inputs equally well. Later, decrease the values of ρ and β in order to reduce the number of clusters.
We have recognized that if at step "i" the number of clusters produced is equal to or smaller than the desired number of clusters, we may rerun the procedure and stop at step "i−1". Then we may increase ρ to help the network explore other combinations of clusters. (Increasing β does not have this property.) Increasing the value of ρ at this point is not for changing the number of clusters; this change allows the network to rearrange the members of the clusters to produce a solution with better performance.

2) Compare the performance of the solutions with respect to the performance measures. A desirable solution minimizes the similarity between clusters, the number of shared machines, and the intercellular movements, and maximizes Grouping Efficiency¹ (GE).

3) Stop searching for a better solution when the performance measures do not improve by more than a percentage δ from one solution to the next, where δ is defined by the user. (The solution has converged.)
For the second case the procedure is similar to the first, except that we do not have a specific desired number of clusters. We should note that when we reach a solution for the first time, there is no way to say how good this solution is unless it is compared to other solutions. The Add method produces a new solution in each iteration; the best solution with respect to the performance measures can then be chosen. The procedure for the second case can be summarized as follows.

1) Apply the clustering algorithm more than once, changing the values of the parameters each time to produce various clustering solutions.
¹Grouping efficiency (GE) measures the quality of the clusters:

GE = q·n1 + (1 − q)·n2

where

n1 = (number of "1" entries in the diagonal blocks) / (total number of elements in the diagonal blocks)
n2 = (number of "0" entries in the off-diagonal blocks) / (total number of elements in the off-diagonal blocks)

and q is a weighting factor (0 < q < 1). Usually, q = 0.5 is considered in the literature [14].
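The footnote's formula can be computed directly. The function and the small 4×4 matrix below are our own illustration of the arithmetic, not data from the paper:

```python
import numpy as np

def grouping_efficiency(A, row_block, col_block, q=0.5):
    """GE = q*n1 + (1-q)*n2 for a binary matrix A whose rows and columns
    are assigned to blocks by row_block and col_block."""
    rows = np.asarray(row_block)[:, None]
    cols = np.asarray(col_block)[None, :]
    diag = rows == cols                         # True inside diagonal blocks
    n1 = A[diag].sum() / diag.sum()             # share of 1s inside the blocks
    n2 = (A[~diag] == 0).sum() / (~diag).sum()  # share of 0s outside them
    return q * n1 + (1 - q) * n2

A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 1, 1, 1]])
print(grouping_efficiency(A, [0, 0, 1, 1], [0, 0, 1, 1]))   # → 0.9375
```

Here the diagonal blocks are fully dense (n1 = 1) and one exceptional "1" sits off-diagonal (n2 = 7/8), giving GE = 0.5·1 + 0.5·0.875 = 0.9375.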
2) Compare the quality of the solutions with respect to the performance measures. (In section 5 we describe which performance measures are used.)

Even though in these procedures the clustering results depend on the choice of parameters, which are chosen arbitrarily by the user, the process of clustering is very fast: usually the network reaches a solution in two to three cycles, it is rare that the network requires more than ten cycles to generate a solution, and this is not dependent on the size of the problem. Therefore, even if we need to repeat the procedure several times, it will not require a great amount of time.
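The outer loop of the Add method (steps 0-3 in section 4.1) can be sketched as follows. Here cluster stands in for one fuzzy ART pass — any routine returning a label per input row will do — and the schedules for ρk and βk are left to the caller; all names are ours:

```python
import numpy as np

def add_method(X, cluster, rhos, betas, n_star=1):
    """X: inputs as rows, scaled to [0, 1]. cluster(X, rho=.., beta=..)
    returns an integer label per row. Returns the hierarchy of labelings."""
    history = []
    for rho, beta in zip(rhos, betas):
        labels = cluster(X, rho=rho, beta=beta)        # step 1: fuzzy ART pass
        history.append(labels)
        groups = sorted(set(labels))
        if len(groups) <= n_star or len(groups) == 1:  # step 3: stopping rule
            break
        # step 2: merge each cluster's members by vector addition, then
        # rescale by the member count so components stay in [0, 1]
        X = np.stack([X[labels == g].sum(axis=0) / np.sum(labels == g)
                      for g in groups])
    return history
```

Each entry of the returned history is one level of the clustering hierarchy, from which the best level can be picked with the performance measures of section 5.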
4.1.1 An Example

In this section, we illustrate the steps for clustering an example by the "Add" method. King's [38] example is chosen for this purpose. Its incidence matrix is shown in Table 3. This incidence matrix is the first set of inputs to the fuzzy ART neural network. The example has 14 input vectors (machines), and each input vector has 24 attributes (parts). In Table 3 the attributes are binary, but after the second step the input attributes are continuous; the example therefore also shows the ability of the FACT algorithm to work with continuous inputs. The learning and vigilance parameters are chosen as shown. Table 4 shows the first clustering results: machines are grouped into eight machine cells. Table 5 shows the input vectors used in the second round of the Add method. Each new input vector results from merging the members of a cluster. For example, the first input vector of Table 5 results from the vector addition of the first and twelfth input vectors of Table 3 (in the first round, machines 1 and 12 are clustered together, as shown in Table 4). Each component of an input vector of the fuzzy ART network should be in the interval [0, 1]; therefore, after adding the members of each cluster, we scale the resulting vector by dividing the value of each component by the number of members of that cluster.
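As a sketch of this merge-and-scale step, suppose two machines with hypothetical (made-up) six-part routing vectors are clustered together; their sum is divided by the cluster size of 2 so every component returns to [0, 1]:

```python
import numpy as np

# hypothetical binary routing vectors for two machines in one cluster
m_a = np.array([1, 0, 1, 1, 0, 0])
m_b = np.array([1, 0, 0, 1, 0, 1])

merged = (m_a + m_b) / 2   # vector sum, scaled by the member count
# components: 1, 0, 0.5, 1, 0, 0.5 — a fractional value now records
# that only half the cluster's machines serve that part
```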
Step 1:

Table 3- King's example, original input set of the "Add" method (14 machines × 24 parts; β = .5, ρ = .5)
Table 4- Members of each machine cell after the first step

Machine Cells    Machines
Cell #1          1, 12
Cell #2          2, 10
Cell #3          7
Cell #4          14
Cell #5          11, 3
Cell #6          13
Cell #7          6, 8, 9
Cell #8          4, 5
Step 2:

Table 5- Second input set of the "Add" method (8 merged clusters × 24 parts; β = .5, ρ = .5, .3)
The second round of clustering reduces the number of machine cells to 6. Table 6 shows the members of each machine cell. In the third round (step) the number of machine cells is reduced to 4. Table 7 shows the input set of the third round. Here we stop the process, since our goal is to produce the same solution to the King problem that is published in the literature.
Table 6- Members of each machine cell after the second step

Machine Cells    Machines
Cell #1          1, 12, 13
Cell #2          2, 3, 10, 11
Cell #3          7
Cell #4          14
Cell #5          6, 8, 9
Cell #6          4, 5
Step 3:

Table 7- Third input set of the "Add" method (6 merged clusters × 24 parts; β = .5, ρ = .2)
After producing the desired number of machine cells, the FACT algorithm is able to determine the part families based on the information stored in the weight vectors of the fuzzy ART neural network; section 4.2 describes this procedure. Table 8 shows the members of each machine cell and part family for the solution produced by the FACT algorithm for the King problem.

Table 8- The final solution to the King problem

Machine Cells    Machines         Parts
Cell #1          1, 12, 13        6, 7, 8, 18
Cell #2          2, 3, 10, 11     3, 4, 21, 24
Cell #3          6, 8, 9, 14      5, 9, 10, 11, 12, 13, 14, 15, 16, 22
Cell #4          4, 5, 7          1, 2, 17, 19, 20, 23
4.2 Simultaneous Clustering

In a cellular manufacturing environment, the goals of group technology are to join similar parts into families and to group machines with a common set of users (parts) into cells. The FACT algorithm is designed for simultaneous part and machine clustering. The fuzzy ART neural network, which is the clustering unit of the FACT algorithm, is unable to cluster parts and machines at the same time: if the inputs to fuzzy ART are information related to parts, the output categories will be part families; and if the inputs are information related to machines, the output categories will be machine cells. For simultaneous clustering of parts and machines, the fuzzy ART parameters should be adjusted such that the required information is stored in the system. Our investigations conclude that, with the parameters adjusted, the weights generated by fuzzy ART can carry very useful information about the components (attributes) of the input vectors. If an input vector represents a part, the weight vector of the corresponding part family can provide the required information about the machines used by the members of that part family. The required parameter adjustments for simultaneous clustering are discussed in the next section.
4.2.1 Adjusting the Parameters

In fast commitment and slow recoding mode, fuzzy ART considers β = 1 for the first input (which commits a new node); this action reduces some of the weight vector's attributes to zero. These weight attributes become zero because the corresponding attributes of the input vector were zero; therefore, they become unable to carry all the information about the other inputs that will be committed to the corresponding node. Reducing the weight attributes to zero loses the information required for simultaneous clustering of parts and machines. To solve this problem, we reduced the learning rate for the first commitment (from βinit = 1 to βinit < 1). Table 9 shows the effect of reducing βinit on the clustering of the 24 parts of King's example.
Table 9- Grouping 24 input parts in King's example with reduced initial β's

Case    α      βinit          β      ρ      Cycles    Committed nodes
1       0.1    1              0.3    0.3    3         5
2       0.1    0.9 or 0.99    0.3    0.3    8         8
3       0.1    0.5            0.3    0.3    9         11
The parameters βinit and β are the same (learning) parameter: βinit is used the first time a node becomes committed, and β is used at all other times. In the above three cases, all the parameters are the same except βinit. "Cycles" means the number of times the network has to see the input set until it stabilizes. As shown, for βinit = 1, 5 nodes are committed and the network stabilizes in 3 cycles. The number of cycles and committed nodes increases as βinit decreases. Our investigation shows that if we decrease βinit to slightly less than one, such that the number of clusters stays the same as when βinit = 1, we will have the information required to determine the machine cells and part families at the same time. One suggestion is to first find the best possible number of clusters (with respect to the evaluation measures), and then reduce βinit slightly and cluster the inputs again, such that the number of clusters stays the same. Here, increasing the value of the parameter α may help keep the number of clusters constant. The reason for increasing α is that as βinit becomes less than one, the values of the attributes in the weight vector decrease less than before (see the learning rule), and the choice function Tj loses its sensitivity for choosing the best possible solution; by increasing the value of α, we increase the sensitivity of the choice function. Table 10 shows this condition for King's problem. In the second case, α is larger than in the first case and βinit is smaller, so the number of nodes is kept constant. In the third case, α is larger than in the first case, βinit has the same value as in the first case, and the number of nodes is reduced. The first two cases in this example produce inferior solutions, since the sensitivity of the choice function is poor. In the third case, the parameters α and βinit are high enough to prevent the system from committing unnecessary nodes.
Table 10- Grouping 24 parts in King's example with changing α and βinit simultaneously

Case    α        βinit           β      ρ      Committed nodes
1       0.001    0.99 or 0.999   0.3    0.3    8
2       0.1      0.9 or 0.99     0.3    0.3    8
3       0.1      0.999           0.3    0.3    5
In the process of categorizing the input vectors with βinit < 1, the weight attributes are not driven to zero on commitment, so the weight vectors retain information about every attribute of the inputs assigned to a node.
4.2.2 Simultaneous Clustering Algorithm

We propose the algorithm below, which illustrates how the FACT algorithm can "simultaneously" form both machine cells and part families quickly and effectively. It assumes that the inputs are processing routes for parts; thus, the group of parts assigned to an output node J represents a part family. (The same procedure can be used to produce the machine cells.)

1. Group the parts in the fast commitment-slow recode mode of fuzzy ART (βinit = 1, β < 1).
2. If the number of clusters is satisfactory (the "Add" method of section 4.1 may be used), save the value of the parameter ρ and go to step 3. Otherwise, change the value of ρ and repeat steps 1 and 2.
3. Group the parts for βinit close to but not equal to 1 (e.g. βinit = .999) and β < 1 (use the same value of β as in step 1) such that the number of groups formed equals the number formed in step 2 (use the same value of ρ as in step 2).
4. For each attribute (machine) j, find the cluster k with the maximum weight Wkj among the clusters, and assign machine j to that cluster. If more than one cluster attains the maximum, break the tie by choosing the cluster with the smallest index k.

When reliable performance measures are available, our experiments showed that we can increase the speed of the process by omitting the first two steps of this algorithm and clustering the input patterns with βinit very close to one (such as βinit = .999).

Table 11 shows the final values of the weight vectors for a part and machine clustering example. Parts are grouped in 3 clusters, and there are 10 machines to be clustered with respect to the information available in the weight vectors. Each attribute of a weight vector is related to a machine and represents the percentage of use of that machine in the cluster.

Table 11- The weight vectors of a trained network

Clusters ↓                          Machines →
      1       2       3       4       5       6       7       8       9       10
1     0.0007  0.0007  0.0007  1.0000  1.0000  0.0007  1.0000  0.0007  0.0007  0.0007
2     0.0007  1.0000  1.0000  0.0007  0.0007  0.0007  0.0007  0.0007  0.0007  1.0000
3     1.0000  0.0007  0.0007  0.0007  0.0007  0.9990  0.0007  0.9990  0.9990  0.0007
As described in step 4 of the simultaneous clustering algorithm, each machine should be assigned to the cluster which uses it the most. For machine j = 1, the maximum weight is attained by cluster 3 (W31 = 1.000); therefore, machine 1 is assigned to cluster 3. Table 12 shows the members of each cluster (machine cell).
Table 12- Members of each cluster

Clusters      Machines
Cluster #1    4, 5, 7
Cluster #2    2, 3, 10
Cluster #3    1, 6, 8, 9
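Step 4 of the algorithm reduces to an argmax over each column of the weight matrix. Applying it to the Table 11 weights (with the near-one entries of cluster 3 reproduced approximately) recovers Table 12:

```python
import numpy as np

# weight vectors of Table 11: rows = part clusters, columns = machines
W = np.array([
    [.0007, .0007, .0007, 1.,    1.,    .0007, 1.,    .0007, .0007, .0007],
    [.0007, 1.,    1.,    .0007, .0007, .0007, .0007, .0007, .0007, 1.   ],
    [1.,    .0007, .0007, .0007, .0007, .999,  .0007, .999,  .999,  .0007],
])

# each machine j goes to the cluster k with the largest weight W[k, j];
# np.argmax breaks ties by the smallest index, exactly as step 4 requires
cells = {k + 1: [] for k in range(W.shape[0])}
for j in range(W.shape[1]):
    cells[int(np.argmax(W[:, j])) + 1].append(j + 1)

print(cells)   # {1: [4, 5, 7], 2: [2, 3, 10], 3: [1, 6, 8, 9]}
```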
The knowledge embedded in the weights, together with the characteristics of the parameters, enables us to cluster parts and machines simultaneously. A study of the weight vectors provides several pieces of information related to parts and machines. For example, in a problem where parts are the input vectors and the input attributes are binary, if an attribute (machine) of a weight vector is equal to one, then all of the members (parts) of that cluster use that machine. In the above example, machines 4, 5, and 7 are used by every part in cluster 1.
4.3 Clustering of New Parts

In some cases, a part or machine may be added to the system after the network has been trained. The FACT algorithm is able to determine the proper cluster for this new pattern without the need to recluster all the parts and machines. For the group technology application we decided to keep the neural network in the learning (plastic) mode. Therefore, as a new input arrives, the network searches for a proper category and, if it finds one, adjusts its weights and learns the characteristics of the new input. In the case where none of the existing categories is similar enough to the new input, the network assigns a new category to this input if any uncommitted output node is still available in the system; otherwise, the new input is stored in a residue area. The decision about whether to keep the system in learning mode is application dependent. In this application, we do not anticipate too many new inputs. In addition, when new patterns enter the system, some of the older patterns are usually retired; it is therefore beneficial that the system learns about new patterns and gradually forgets the older ones. For cases where the input set changes drastically, we recommend reclustering the entire set. Reclustering the patterns with our algorithm is easy and fast, and retraining the network may result in a better solution set.
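A hedged sketch of this classify-without-reclustering behaviour, reusing the choice and match tests of section 3.1, is shown below. The names are ours, and the weight update is omitted for brevity — only the placement decision is modelled:

```python
import numpy as np

def place_new_part(I, W, committed, alpha=0.01, rho=0.5):
    """Place a new input I against trained weights W (one row per node).
    Returns the index of an existing family, a freshly committed node,
    or None when the input must go to the residue area."""
    # Eq. (1): choice values for the committed search order
    T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in W]
    for j in np.argsort(T)[::-1]:
        # Eq. (5): vigilance test against committed nodes only
        if committed[j] and np.minimum(I, W[j]).sum() / I.sum() >= rho:
            return int(j)            # joins an existing family
    for j, c in enumerate(committed):
        if not c:
            return j                 # commits a fresh (uncommitted) node
    return None                      # residue area
```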
4.4 Ungroupable Inputs

In reality, it is possible that some input patterns (parts or machines) are not suitable for grouping with other patterns. The FACT algorithm is able to prevent the forceful grouping of every pattern by limiting the number of output nodes. In the manufacturing cell formation problem, if the user decides that no more than a limited number of cells is suitable for the plant, the maximum number of output nodes (categories) can be fixed. Then, by adjusting the parameters (ρ, β), similar patterns can be collected in the cells while ungroupable patterns accumulate in an area called the "residue area". At the end of the process, the program provides a report on the contents of the residue area.
5.0 Performance Measures

To evaluate the performance of the clustering results, we are interested in (a) indicating the dissimilarity between the clusters and the density of the data within each cluster, and (b) measuring the uniformity of the cells, the volume of movement between the machine cells, and the number of machines that are required in more than one machine cell. Therefore, we consider two groups of performance measures. The first group evaluates the performance of the clustering results independent of the group technology application. For this group we have developed two measures: the first, the "Controlled Cluster Separation Measure", measures similarity within the clusters and distance between the clusters based on Euclidean distance rules; the second, the "Fuzzy Similarity Measure", measures the same factors with respect to a fuzzy similarity measure and Euclidean rules. In the second group, we consider three group technology dependent measures: one counts the number of machines shared between the clusters, another counts the number of intercellular movements, and the last evaluates the grouping efficiency of the solution [14]. How to calculate these measures is outside the scope of this paper; interested readers are referred to Kamal [30]. In the next two sections we describe how these measures should be used to evaluate the performance of a solution.
5.1 General Purpose Measures

The Controlled Cluster Separation Measure (CR) and the Fuzzy Similarity Measure (IR) are general purpose measures; they can be used to evaluate any clustering algorithm. These measures are similar in how they measure the distance between the clusters. They differ in the method of calculating similarity within each cluster, which for the Fuzzy Similarity Measure is based on the fuzzy subsethood theorem [39], the same theorem used by the fuzzy ART neural network for evaluating the similarity of input patterns. These performance measures (CR and IR) measure the similarity within each cluster and between the clusters. During the process of finding the best clustering solution, decreasing values of CR and IR mean that the members of the clusters are becoming more similar and/or the distance between the clusters is increasing, so we are moving towards better solutions; increasing values of CR and IR mean that we are moving away from a good solution. Every time we calculate CR or IR, we measure the performance of two cluster sets: one for the part families and the other for the machine cells. The numbers of clusters in these two sets are equal, but the members of one set are parts and the members of the other are machines. It is possible that two, three, or any number of solutions exist for a problem where all of the solutions have the same number of clusters (number of part families and machine cells) but the members of each part family and/or machine cell differ. CR and IR can help determine which of these solutions is better: the solution that produces the lowest values of CR and IR is the best. When applying the FACT method, we obtain a solution in each iteration; CR and IR can compare these solutions whether or not their numbers of clusters are equal.
5.2 Group Technology Dependent Measures

The objective of group technology in a cellular manufacturing environment is to group parts into part families and machines into machine cells such that the movement of parts between machine cells (intercellular movements) and the usage of machines outside their machine cell (shared machines) are minimized (with respect to the number of groups). If parts with similar processing requirements are processed in one machine cell, the numbers of intercellular movements and shared machines will be zero, giving the best solution. Chandrasekharan and Rajagopalan [14] suggested the grouping efficiency measure (GE), which measures the quality of clusters. A GE value of 1 means that the matrix has a perfect block diagonal form, while zero means the opposite.
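Grouping efficiency is commonly defined as GE = q·η1 + (1−q)·η2, where η1 is the fraction of ones inside the diagonal blocks, η2 is the fraction of zeros outside them, and q is a weight (often 0.5). The sketch below follows that standard definition and is an assumption about the exact variant used here; note the paper reports GE as a percentage.

```python
def grouping_efficiency(matrix, part_family, machine_cell, q=0.5):
    """Grouping efficiency GE = q*eta1 + (1-q)*eta2.

    matrix[i][j] = 1 if part j visits machine i.
    part_family[j] / machine_cell[i] give the cluster index of
    part j / machine i.
    """
    in_ones = in_total = out_zeros = out_total = 0
    for i, row in enumerate(matrix):
        for j, v in enumerate(row):
            if machine_cell[i] == part_family[j]:   # inside a diagonal block
                in_total += 1
                in_ones += v
            else:                                   # off-diagonal region
                out_total += 1
                out_zeros += 1 - v
    eta1 = in_ones / in_total if in_total else 1.0
    eta2 = out_zeros / out_total if out_total else 1.0
    return q * eta1 + (1 - q) * eta2

# A perfect two-cell block diagonal matrix scores GE = 1.0:
m = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
print(grouping_efficiency(m, [0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
```

Any one inside a block that is missing, or any one that falls outside the blocks, pulls GE below 1.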
6.0 Computational Results

In this section, the largest problem found in the literature (40 machines by 100 parts) is solved by the FACT algorithm, and we demonstrate how the performance measures help us choose the best solution. The example was introduced by Chandrasekharan and Rajagopalan [40]. The 100-part, 40-machine incidence matrix is shown in Table 13, and the block diagonal form of Chandrasekharan's solution is shown in Table 14. By applying various values of the network parameters, we obtained different solutions. According to the five performance measures, the ten-cluster solutions were consistently better than solutions of any other size. Among these ten-cluster solutions, we chose two to present here. The shape of this problem suggests that there is probably only one best solution and that both we and the original authors have found it; this solution is presented in Table 14, and the steps that led to it, together with the five performance measures, are reported in Table 15. Another ten-cluster solution is shown in Table 16, and the steps that led to it are reported in Table 17.
Table 13. The incidence matrix for Chandrasekharan's example (40 machines by 100 parts).
Table 14. The block diagonal form for Chandrasekharan's example (shared machines = 36, intercellular movements = 37).
Table 15. Performance of the first solution for Chandrasekharan's example.

Step   Input parameters   No. of clusters   CR      IR     GE      Shared machines   Intercellular movements
1      0.3, 0.5           11                12.29   2.33   95.45   48                55
2      0.3, 0.3           10                10.63   1.72   95.09   36                37
3      0.3, 0.2           9                 11.14   2.34   91.04   34                37
Table 16. Our second solution to Chandrasekharan's example (shared machines = 39, intercellular movements = 40).

Cell   Machines                 Parts
1      1, 3, 7, 32              4, 5, 9, 24, 33, 39, 49, 57, 58, 65, 66, 81
2      2, 10, 16, 21, 31        12, 13, 54, 61, 64, 73, 77, 78, 90
3      4, 9, 20                 3, 10, 19, 20, 36, 48, 50, 100
4      5, 8, 22, 23, 37, 39     6, 17, 26, 27, 28, 46, 55, 69, 70, 76, 82, 83, 88, 89, 93, 98, 99
5      6, 12, 26, 38, 40        1, 2, 14, 15, 29, 30, 38, 40, 43, 44, 45, 59, 60, 62, 63, 95
6      11, 13                   7, 11, 18, 37, 42, 56, 67, 79, 80, 97
7      14, 17, 35               21, 22, 52, 75, 86, 94
8      15, 18, 33, 34, 36       23, 32, 41, 51, 74
9      19, 25, 28, 30           31, 71, 72, 84, 85, 91, 92
10     24, 27, 29               8, 16, 25, 34, 35, 47, 53, 68, 87, 96
Table 17. The steps for the second solution of Chandrasekharan's example.

Step   Input parameters   No. of clusters   CR      IR     GE      Shared machines   Intercellular movements
1      0.5, 0.5           13                16.61   3.10   95.60   62                111
2      0.5, 0.2           10                11.15   1.81   94.80   39                40
3      0.5                9                 11.14   2.34   91.04   34                37
4      0.5                8                 11.11   2.69   83.95   35                44
5      0.5, 0.00          5                 11.17   3.85   82.15   29                44
In Table 17, we have included steps 3, 4, and 5 to show that the performance measures CR and IR increase (deteriorate) when the number of clusters decreases beyond step 2. This indicates that the best measure of similarity among the members of the clusters is achieved at step 2, with 10 clusters. GE cannot help us here, because the value of GE for a solution with 10 clusters cannot be compared with the value of GE for a solution that has more or fewer than 10 clusters. Contrasting the values of all five measures for step 2 in Tables 15 and 17 indicates that the former 10-cluster solution is superior to the latter. However, the final choice of the number of clusters depends on the application.
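This selection rule can be sketched as follows. The two candidate records are the step-2 rows of Tables 15 and 17; the tie-breaking order (CR first, then IR) is an assumption, since the paper does not specify how ties would be broken.

```python
# Among candidate solutions with the SAME number of clusters,
# prefer the one with the lowest CR and IR (GE is not comparable
# across different cluster counts).
solutions = [
    {"source": "Table 17, step 2", "clusters": 10, "CR": 11.15, "IR": 1.81},
    {"source": "Table 15, step 2", "clusters": 10, "CR": 10.63, "IR": 1.72},
]

same_size = [s for s in solutions if s["clusters"] == 10]
best = min(same_size, key=lambda s: (s["CR"], s["IR"]))
print(best["source"])  # Table 15, step 2
```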
7.0 Performance Evaluation

To evaluate the performance of the FACT algorithm, we examined 14 examples from the literature. Our clustering results for these examples are contrasted with the best clusterings found in the literature, and the results of this evaluation are summarized in Table 18. Four measures are used to evaluate the FACT algorithm's performance: 1) controlled cluster separation (CR), 2) fuzzy similarity (IR), 3) grouping efficiency (GE), and 4) the number of intercellular movements (ICM). A solution is better when: a) it has the minimum values of CR and IR, or b) it has the maximum value of grouping efficiency (it is closer to the perfect block diagonal form), or c) its intercellular movements are minimum. For a specific problem, as the number of clusters increases, the value of GE increases; therefore, we cannot compare the values of GE for cases with an unequal number of clusters. It is possible that for some problems all of these conditions may not be satisfied at the same time. In this situation, it is up to the user to decide which performance measure evaluates the most important requirements of the problem. Table 18 shows that the solutions produced by the FACT algorithm are either better than or as good as the solutions produced with other algorithms on most measures.
Table 18. Performance of the FACT algorithm for selected examples.
Ex. No.   Size (m x n)   Ref.   Reference algorithm                   FACT
                                NC   ICM   GE      CR      IR         NC   ICM   GE      CR      IR
1         (5x7)          [41]   2    0     91.18   1.56    0.36       2    0     91.18   1.56    0.36
7         (14x24)        [38]   4    2     83.90   5.77    1.52       4    2     83.90   5.77    1.52
9         (16x30)        [44]   4    72    85.90   6.89    1.07       4    21    85.90   6.88    1.06
12        (30x50)        [47]   4    0     68.35   10.93   4.77       4    0     68.33   10.93   4.77
13        (30x50)        [47]   3    62    57.40   15.94   6.70       3    40    59.04   14.59   6.20
m: number of machines; n: number of parts; NC: number of clusters; ICM: intercellular movements; GE: grouping efficiency; CR: controlled cluster separation measure; IR: fuzzy similarity measure.
1: For a specific problem, as the number of clusters increases, the value of GE increases; we cannot compare the values of GE for cases with an unequal number of clusters.
2: In this example the number of part families is K, and the number of machine cells is K+1.
3: In this example the number of machine cells is L, and the number of part families is L+1.
In addition to the 14 examples from the literature, we included 18 other problems generated by computer. Each problem contained 500 parts and 40 machines. In each example, we first assumed a certain best clustering of the machines and parts (no off-diagonal component). Then the rows and columns of the incidence matrices were scrambled by computer, and the resulting matrices were inputted to our algorithm.
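This test-problem generation might be sketched as follows. The assignment of machines and parts to cells is hypothetical, since the paper does not give the cell sizes of the generated problems.

```python
import random

def scrambled_block_diagonal(n_machines=40, n_parts=500, n_cells=10, seed=0):
    """Build a perfect block diagonal incidence matrix (no off-diagonal
    component), then scramble its rows and columns, as described above.
    The round-robin cell assignment is an illustrative assumption."""
    rng = random.Random(seed)
    cell_of_machine = [i % n_cells for i in range(n_machines)]
    family_of_part = [j % n_cells for j in range(n_parts)]
    matrix = [[1 if cell_of_machine[i] == family_of_part[j] else 0
               for j in range(n_parts)] for i in range(n_machines)]
    rng.shuffle(matrix)                # scramble rows (machines)
    cols = list(range(n_parts))
    rng.shuffle(cols)                  # scramble columns (parts)
    return [[row[j] for j in cols] for row in matrix]

m = scrambled_block_diagonal()
print(len(m), len(m[0]))  # 40 500
```

Scrambling preserves the underlying block structure, so an algorithm that recovers it can be checked against the known clustering.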
Our algorithm was able to find the solutions in one or two cycles without prior knowledge of the solutions. The time to solve each problem was about 15 to 20 seconds on an IBM RISC 6000 mainframe.
8.0 Conclusion

The clustering problem of group technology (GT) can be considered a pattern recognition problem. The process route of each part is a distinct pattern, and our task is to distinguish similar patterns and group them together. Neural networks are known as a reliable tool for pattern recognition. The goal is to convert a large volume of data into useful categorized information. This categorized information can solve the GT problem and help us in: a) better sequencing and scheduling of products by applying group scheduling techniques [48]; b) controlling material handling requirements by identifying the bottleneck areas in manufacturing cells; and c) satisfying the demand for each product by considering production volume in the determination of part families and machine cells. Here we introduced the FACT algorithm, which is able to accomplish this task. The algorithm can accept binary and continuous features of a part as attributes of the input data. The clustering process is very fast, and processing time does not increase significantly for large problems or complex conditions. Ill-structured problems with undesirable conditions (e.g., machines shared by several parts) are solvable. In a situation where all of the parts and machines are already clustered, a new part or machine entering the system can be clustered without reclustering all of the previous parts and machines. The algorithm can also create new part families (machine cells) when a part (machine) does not fit into any of the existing part families (machine cells). In manufacturing plants it is not possible to rearrange machine cells each time a new part is introduced to the system, so it is very important to be able to determine the proper part family and machine cell for a new part without the need to recluster all the existing parts and machines. In reality, it is also possible that only a section of the plant is suitable to be transformed into manufacturing cells.
The FACT algorithm has a mechanism to separate the ungroupable parts or machines and store them in a residue area. At the end of the process, the algorithm notifies the user of the number and description of these ungroupable items.
Several examples were used to show how this clustering algorithm functions. The performance of the FACT algorithm was compared with several GT clustering algorithms published in the literature; this comparison has shown that FACT's results are either better than or as good as the results of the other algorithms on most performance measures. The FACT algorithm is a general purpose clustering algorithm and can be applied to other group technology applications such as design information retrieval, services, sales, and purchasing. Our future work includes investigation of the other clustering problems that can be solved by the FACT algorithm.
REFERENCES

1. A. Kusiak. Intelligent Manufacturing Systems, Prentice Hall, New Jersey, (1990).
2. J. R. King. "Machine-component group formation in production flow analysis: An approach using a rank order clustering algorithm," International Journal of Production Research, vol. 18, no. 2, pp. 213-232, (1980).
3. H. M. Chan, and D. A. Milner. "Direct clustering algorithm for group formation in cellular manufacturing," Journal of Manufacturing Systems, vol. 1, no. 1, pp. 65-74, (1982).
4. P. H. Waghodekar, and S. Sahu. "Machine component cell formation in group technology: MACE," International Journal of Production Research, vol. 22, pp. 937-948, (1984).
5. A. Kusiak. "The part families problem in flexible manufacturing systems," Annals of Operations Research, vol. 3, pp. 279-300, (1985).
6. H. Seifoddini, and P. M. Wolfe. "Application of the similarity coefficient method in group technology," IIE Transactions, vol. 18, no. 13, pp. 271-277, (1986).
7. M. P. Chandrasekharan, and R. Rajagopalan. "MODROC: An extension of rank order clustering for group technology," International Journal of Production Research, vol. 24, no. 5, pp. 1221-1233, (1986a).
8. J. C. Wei, and G. M. Kern. "Commonality analysis: A linear cell clustering algorithm for group technology," International Journal of Production Research, vol. 27, no. 12, pp. 2053-2062, (1989).
9. A. Kusiak, A. Vannelli, and K. R. Kumar. "Clustering analysis: Models and algorithms," Control and Cybernetics, vol. 15, no. 2, pp. 139-154, (1986).
10. A. Kusiak. "The generalized group technology concept," International Journal of Production Research, vol. 25, pp. 561-569, (1987a).
11. A. Kusiak. "The generalized P-median problem," Working paper #20/87, Department of Mechanical and Industrial Engineering, University of Manitoba, Winnipeg, Canada, (1987b).
12. A. Ballakur, and H. J. Steudel. "A within-cell utilization based heuristic for designing cellular manufacturing systems," International Journal of Production Research, vol. 25, no. 5, pp. 639-665, (1987).
13. R. G. Askin, and K. S. Chiu. "A graph partitioning procedure for machine assignment and cell formation in group technology," International Journal of Production Research, vol. 28, no. 8, pp. 1555-1572, (1990).
14. M. P. Chandrasekharan, and R. Rajagopalan. "An ideal seed non-hierarchical clustering algorithm for cellular manufacturing," International Journal of Production Research, vol. 24, no. 2, pp. 451-464, (1986b).
15. A. Vannelli, and K. Kumar. "A method for finding minimal bottle-neck cells for grouping part-machine families," International Journal of Production Research, vol. 24, no. 2, pp. 387-400, (1986).
16. R. Rajagopalan, and J. L. Batra. "Design of cellular production systems: A graph theoretic approach," International Journal of Production Research, vol. 13, pp. 567-, (1975).
17. J. Peklenik, and J. Grum. "Investigation of the computer-aided classification of parts," Annals of the CIRP, vol. 29, pp. 319-323, (1980).
18. J. Peklenik, and J. Grum. "Computer-aided design of the part spectrum data base and its application to design and production," Annals of the CIRP, vol. 31, pp. 313-317, (1982).
19. J. Peklenik, J. Grum, and B. Logar. "An integrated approach to CAD/CAPP/CAM and group technology by pattern recognition," 16th CIRP International Seminar on Manufacturing Systems, Tokyo, Japan, July 13-14, (1984).
20. B. Mutel, H. Garcia, and J. M. Proth. "Automatic classification of production data," 18th CIRP Manufacturing Systems Seminar, Stuttgart, Germany, June 5-6, (1986).
21. B. Logar, and J. Peklenik. "Computer-aided selection of reference parts for GT-part families," 19th CIRP Manufacturing Systems Seminar, Pennsylvania State University, University Park, U.S.A., June 30-July 1, (1987).
22. J. Li, Z. Ding, and W. Lei. "Fuzzy cluster analysis and fuzzy recognition methods for formation of part families," 14th North American Manufacturing Research Conference, (1986).
23. D. Ben-Arieh, and E. Triantaphyllou. "Quantifying data for group technology with weighted fuzzy features," International Journal of Production Research, vol. 30, no. 6, pp. 1285-1299, (1992).
24. Y. B. Moon, and S. C. Chi. "Generalized part family formation using neural network techniques," Journal of Manufacturing Systems, vol. 11, no. 3, pp. 149-159, (1992).
25. T. P. Caudell, D. G. Smith, G. C. Johnson, and D. C. Wunsch II. "An application of neural networks to group technology," SPIE vol. 1469, Applications of Artificial Neural Networks II, pp. 612-621, (1991).
26. Y. Kao, and Y. B. Moon. "A unified group technology implementation using the backpropagation learning rule of neural networks," Computers and Industrial Engineering, vol. 20, no. 4, pp. 425-437, (1991).
27. C. O. Malave, and S. Ramachandran. "Neural network-based design of cellular manufacturing systems," Journal of Intelligent Manufacturing, vol. 2, pp. 305-314, (1991).
28. C. Dagli, and R. Huggahalli. "Neural network approach to group technology," in Knowledge-Based Systems and Neural Networks: Techniques and Applications, Elsevier, New York, pp. 213-228, (1991).
29. S. Kaparthi, and N. C. Suresh. "Machine-component cell formation in group technology: A neural network approach," International Journal of Production Research, vol. 30, no. 6, pp. 1353-1367, (1992).
30. S. Kamal. Adaptive Clustering of Parts and Machines in a Cellular Manufacturing Environment: An Application of the Fuzzy ART Neural Network, Ph.D. thesis, Lehigh University, Bethlehem, (1993).
31. P. D. Wasserman. Neural Computing: Theory and Practice, Van Nostrand Reinhold, New York, (1989).
32. G. A. Carpenter, and S. Grossberg. "A massively parallel architecture for a self-organizing neural pattern recognition machine," Computer Vision, Graphics, and Image Processing, vol. 37, pp. 54-115, (1987a).
33. G. A. Carpenter, and S. Grossberg. "ART 2: Self-organization of stable category recognition codes for analog input patterns," Applied Optics, vol. 26, pp. 4919-4930, (1987b).
34. G. A. Carpenter, and S. Grossberg. "ART 3 hierarchical search: Chemical transmitters in self-organizing pattern recognition architectures," in Proc. Int. Joint Conf. on Neural Networks, vol. 2, pp. 30-33, (1990).
35. L. I. Burke. "Clustering characterization of adaptive resonance," Neural Networks, vol. 4, pp. 485-491, (1991).
36. G. A. Carpenter, S. Grossberg, and D. B. Rosen. "Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance system," Neural Networks, vol. 4, pp. 759-771, (1991).
37. L. Zadeh. "Fuzzy sets," Information and Control, vol. 8, pp. 338-353, (1965).
38. J. R. King. "Machine-component group formation in group technology," Proceedings of the Fifth International Conference on Production Research, pp. 193-198, (1979).
39. B. Kosko. Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence, Prentice-Hall, New Jersey, (1992).
40. M. P. Chandrasekharan, and R. Rajagopalan. "ZODIAC: An algorithm for concurrent formation of part-families and machine-cells," International Journal of Production Research, vol. 25, no. 6, pp. 835-850, (1987).
41. J. R. King, and V. Nakornchai. "Machine-component group formation in group technology: Review and extension," International Journal of Production Research, vol. 20, pp. 117-133, (1982).
42. J. McAuley. "Machine grouping for efficient production," The Production Engineer, February, pp. 53-57, (1972).
43. M. P. Groover. Automation, Production Systems, and Computer-Integrated Manufacturing, Prentice-Hall, New Jersey, (1987).
44. G. Srinivasan, T. T. Narendran, and B. Mahadevan. "An assignment model for the part-families problem in group technology," International Journal of Production Research, vol. 28, no. 1, pp. 145-152, (1990).
45. J. L. Burbidge. The Introduction of Group Technology, John Wiley and Sons, New York: Halsted Press, (1975).
46. R. K. Kumar, and A. Vannelli. "Strategic subcontracting for efficient disaggregated manufacturing," International Journal of Production Research, vol. 25, no. 12, pp. 1715-1728, (1987).
47. L. E. Stanfel. "Machine clustering for economic production," Engineering Costs and Production Economics, vol. 9, pp. 73-81, (1985).
48. I. Ham, K. Hitomi, and T. Yoshida. Group Technology: Applications to Production Management, Kluwer Academic Publishers, Boston, (1985).
Planning, Design, and Analysis of Cellular Manufacturing Systems
A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors)
© 1995 Elsevier Science B.V. All rights reserved.
Intelligent Cost Estimation of Die-Castings Through Application of Group Technology

Dharmaraj Veeramani
Department of Industrial Engineering, University of Wisconsin-Madison, 1513 University Avenue, Madison, WI 53706, USA.
This paper describes ongoing work with the die-casting industry on the application of Group Technology in developing a computer-integrated system that will assist cost estimators in developing quotes for die-cast parts in a consistent, accurate, and timely manner. The paper also defines the framework for an intelligent cost-estimating system capable of estimating the cost of die-cast parts on the basis of CAD models. This system integrates geometric reasoning about the CAD model with decision logic that encapsulates the dependencies between cost-driving design specifications and manufacturing cost. The availability of such computer-integrated cost-estimation systems will provide die-casting companies with a formidable weapon for competing in today's global marketplace, which is characterized by increasing time-based competition.

1. INTRODUCTION
1.1. The Importance of Cost-Estimation in the Die-Castings Industry
Die-castings companies operate in a highly competitive environment in which a "capture rate" of 2% to 5% is common (the capture rate is defined as the percentage of bids submitted to customers that materialize as actual job orders). Since many die-cast parts are manufactured in large quantities (windshield wiper housings, for instance), each order can correspond to a significant amount of revenue for a die-castings company. Much of the success in winning job orders hinges on the ability of a die-castings company to provide the best (lowest) cost-estimate quickly. The quality and response time of cost-estimation are therefore major concerns in any die-castings company because, very often, a few cents' difference in the price quoted per part or a few days' delay in providing a cost-estimate to a customer can cost the company a job order. Since the quoted price is usually on a per-piece basis, even small errors in cost estimation are magnified by the high volume of production and can significantly impact the income of the company. Hence, from a profitability perspective, it is important that these cost estimates are also made accurately and consistently.

1.2. Overview of the Manufacture of Die-Cast Parts
Die-cast parts are typically manufactured in large quantities ranging from tens of thousands to millions of parts per year. A typical die-cast part will undergo six production stages:
• die-casting
• trimming
• surface finishing
• machining
• inspection
• shipping and packaging
Two types of die-casting machines are commonly used, namely hot-chamber and cold-chamber. Hot-chamber machines are used for alloys based on zinc, tin, and lead that have relatively low melting points, whereas cold-chamber machines are used for alloys based on aluminum, magnesium, and copper that have relatively high melting points. After the part is removed from the die-casting machine, the runner system and flash need to be trimmed. This is commonly done using a trim die and sometimes by hand. Different types of surface finishing processes are employed in the die-casting industry to remove sharp edges and burrs from the parts. One commonly used method is to place the parts in a vibratory container along with specially shaped abrasive pellets. Die-cast parts are often machined to generate shape features to desired dimensions and tolerances. Milling, turning, boring, drilling, tapping, and reaming are the processes most commonly used to machine the parts. Since die-cast parts are typically near-net-shape, the amount of metal to be removed by machining is usually quite small. In a typical die-casting company, it is common to find 80% of the production volume being made up of 20% of the part types. It is also common to find that 80% of the production volume requires machining, since customers are increasingly demanding that die-casting companies deliver a finished part that can go directly into assembly. Typically, for the high volume components that require machining, die-casting companies dedicate a machining center or specialized machine with multiple machining spindles and heads. Some die-cast parts also require other processes such as heat treating and plating. However, the processes that contribute most to the cost are the die-casting process and machining process.
1.3. Present Cost-Estimation Procedure
Currently, much of the cost-estimation is done manually on the basis of blueprints of designs provided by customers. The different types of costs that are considered include:
(1) material costs
(2) tooling costs
(3) processing costs (die-casting process)
(4) post-processing costs (heat treat, machining, cleaning, etc.)
Estimation of material costs primarily entails the calculation of the volume of the part. Currently, this is a highly time-consuming task in which the cost-estimator partitions the blueprint into smaller regular shapes and manually (with the help of a calculator) estimates the volume of each section. The total volume of the part is then obtained by summing all the sub-volumes. The material cost is subsequently estimated by multiplying the volume by a cost factor corresponding to the material type of the die-casting. Sometimes the customer will provide a sample part; in such cases, the volume of the material is estimated directly by weighing the sample part. Estimation of tooling costs (usually ranging from $30,000 to $300,000) is much more complex. Usually, the expertise of a die-designer within the company is sought in estimating the tooling costs. The die-casting die may be made as a single-cavity die, a multiple-cavity die, a combination die, or a unit die. The number of cavities in a die depends on the production capabilities of the chosen die-casting machine and the shape characteristics and production volume of the part. The cost of die manufacturing is determined by factors such as the complexity of the cavity (for example, the presence of bosses and ribs), the die material, special operations such as EDM, and the size of the part. The cost of the die is therefore generated on the basis of:
• the die block (the size)
• the die design (location of runners, gates, sprues, and cooling lines)
• the cavity (the number and complexity of features in the cavity)
• the presence of features requiring EDM.
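The volume-summation procedure for material cost described above can be sketched as follows. The sub-volume dimensions and the per-unit-volume cost factor are made-up numbers for illustration, not industry figures.

```python
import math

# Partition the part into simple regular shapes, sum the sub-volumes,
# and multiply by a cost factor for the alloy (all values hypothetical).
sub_volumes_cm3 = [
    12.0 * 8.0 * 0.5,           # rectangular base plate
    math.pi * 1.0**2 * 3.0,     # cylindrical boss (radius 1 cm, height 3 cm)
    2 * (4.0 * 0.4 * 2.5),      # two ribs
]

total_volume_cm3 = sum(sub_volumes_cm3)
cost_per_cm3 = 0.012            # hypothetical $/cm^3 for a zinc alloy
material_cost = total_volume_cm3 * cost_per_cm3

print(round(material_cost, 4))
```

Per-piece errors in such a calculation are multiplied by the production volume, which is why the paper stresses accuracy and consistency.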
The cost of the trimming die also needs to be estimated. Since the capture rate of orders is often very low, and customers usually demand a quick response on the cost-estimate, the time or motivation to do a detailed die-design is usually not present. Hence, tooling-cost estimation is usually based on experience, and die construction cost is therefore difficult to estimate accurately. In some cases, the customers themselves agree to provide the necessary tooling. Even in such cases, die-castings companies need to estimate a cost associated with die maintenance and include it in their cost estimate. This requires estimation (again, based on experience) of the nature of die maintenance and the time between maintenance periods (i.e., die life expressed in terms of the number of shots). Estimation of processing costs depends on a number of factors, including type of process, type of machine, number of cavities, number of inserts, cycle time, useful life of tooling, etc. The cycle time and production rate depend on factors such as the thickest section in the part, the shot size, and the number of cavities in the die. The shot rate for a part can vary from a hundred to several thousand shots per hour. Each machine group carries its own overhead rate, which includes the operating cost, maintenance cost, and equipment depreciation cost. The choice of machine and process is often dictated by customer specifications. For instance, in some cases, porosity is not allowed due to safety reasons or machining requirements. To remove the porosity in a part, vacuum die-casting is used to remove all the trapped air in a die before metal injection. This enables the metal injection pressure to be lower and allows the selection of a machine with lower compacting pressure. As a result, a longer service life of the dies is possible. Processing requirements such as vacuum die-casting can therefore lead to an increase in cycle time, die life, and processing cost.
Currently, most of the decisions related to estimation of processing cost are made purely on the basis of expert knowledge of the process and the equipment. In addition, these decisions are subjective and can yield different responses from different "experts". Post-processing costs for heat-treating and cleaning are estimated based on the type of process used. Estimation of machining costs is more complex and requires identification of the necessary machining operations, equipment selection, cutting tool selection, machining parameter selection, cycle time estimation, etc. Many of these decisions are time-consuming. For instance, the cost estimator needs to determine what type of machining equipment to use for machining the die-cast part. For a high-volume part, specialized machine tools are typically purchased and dedicated to the machining of the part. The cost of the equipment purchase, therefore, needs to be considered as part of the cost estimate. Decisions such as the number of stations/workheads on these specialized machine tools, the fixturing requirements, the distribution of operations among the various stations/workheads, and the cutting-tool requirements are important but are typically made in a subjective manner, purely on the basis of the estimator's experience. Due to the limited amount of time available for preparing a quote, the cost estimator is typically unable to work with the machine tool builder to develop an accurate estimate of the custom-built machinery. Another factor that contributes to the cost of the part is the amount of scrap. The scrap percentage is difficult to estimate before actual production begins; scrap percentages of around 5% of the production volume are not uncommon. Scrap is typically produced during the setup, testing, and "fine-tuning" of the die. The scrap estimate is also determined by the quality requirements for the part.
Usually, the estimated scrap percentage increases as the surface finish and dimensional tolerance requirements become more stringent. Quality inspection cost is a combination of statistical process control cost, part inspection cost, and gauging cost. This cost is influenced by the time it takes to verify the dimensions on the part; on average, it takes at least half an hour to check a moderately complex part with 50 dimensions. Shipping and packaging cost is a function of the part's size, weight, and pad cost.
To summarize, the main contributors to the cost of a die-cast part are material cost, tooling cost, processing cost, and post-processing (machining) cost. The large number of factors involved in cost estimation makes it very difficult to generate quotes for die-cast parts in an accurate and consistent manner. The manual cost estimation commonly practiced in die-castings companies is time-consuming and potentially error-prone, and the quality of such cost estimates depends heavily on the expertise of the cost estimator. While in the past the profits and losses from over-estimated and under-estimated jobs often canceled each other out and allowed these companies to survive, the growing customer demand for lower cost, quicker response, lower production volumes, and flexibility will require companies to improve their cost-estimation procedures. Since good models for the die-casting process and the interactions between design requirements and manufacturing costs do not exist at present, complete automation of the entire cost estimation process does not appear feasible in the immediate future. However, the current cost estimation process can be significantly improved if estimators are provided with a tool that facilitates quicker generation of quotes and validation of cost estimates.
1.4. Organization of this Paper
Section 2 of this paper describes the application of Group Technology principles in developing a tool that will assist the cost estimator in developing a quote. Section 3 describes the framework for developing an intelligent cost-estimating system that is capable of generating a cost estimate of a die-cast part through geometric and knowledge-based reasoning over its CAD model. The paper concludes with a summary.

2. DEVELOPMENT OF A GROUP TECHNOLOGY BASED SYSTEM TO ASSIST COST ESTIMATION OF DIE-CASTINGS
2.1. Applicability of GT for Cost Estimation
Group Technology (GT) can be a very effective tool in developing cost estimates. One of the most important and practical benefits of applying GT is the ability to identify and retrieve similar parts. This ability can be used effectively in generating the cost estimate for a new part in the following manner. First, a database of the actual costs of parts under production, or of parts manufactured in the past, is created. Cost estimation of a new part then starts by developing a GT code to describe the part and searching the database for parts with similar characteristics. The estimator can then use the costs associated with the similar parts in two ways. First, the cost estimate for the new part can be generated by modifying the quote sheet of a similar part. Second, the estimator can validate the cost estimate for the new part against the costs of similar parts stored in the database. This approach is more effective than estimating every new part from scratch: it reduces cost estimation time, makes cost estimation consistent, and improves the quality of the estimates. One requirement for implementing this approach is the development of a coding and classification (C&C) system for die-cast parts. The objective of a typical C&C system is to classify parts by their features and to code these features so that parts having similar code numbers possess similar features. Although there are many ways to classify parts (such as by main shape, function/application, or manufacturing process requirements), the key to developing an effective C&C system for cost estimation of die-castings is to group parts with similar cost-driving characteristics. Many different types of C&C systems have been developed around the world.
However, these systems typically consider simple shape features only and are not suitable for capturing the cost driving characteristics that are important in die-casting such as cavity complexity, machining complexity, and die design.
Hence, a C&C system that addresses the unique needs of the die-casting industry has been developed with the goal of applying it to improve the cost estimation process. This approach is similar in spirit to variant process planning. The system does not generate a cost estimate for a part; it acts as an assistant to the estimator in developing a cost estimate for a new job.
2.2. Issues in the Design of a Coding and Classification System for Die-Cast Parts

Three key issues were addressed in developing the coding and classification system:
- What is the appropriate code structure?
- What information should the code represent?
- What is the desired resolution of the coding scheme?

2.2.1. Code Structure

The code structure chosen for the C&C system is a hybrid that combines a monocode (hierarchical) structure and a polycode (chain-type) structure. In a polycode structure, each part attribute is assigned to a fixed position in the code, and the meaning of each digit is independent of any other digit. In contrast, in a monocode structure, the interpretation of a digit depends on the values of the preceding digits in the code. The hybrid structure provides two advantages, namely, a shorter code and easy decoding of the independent digits by computer. In addition, the chosen code structure enables retrieval of parts on the basis of varying degrees of similarity (by specifying some, but not all, of the code digits during the search-and-retrieve procedure).

2.2.2. Code Information

It is unlikely that any C&C system based solely on design (especially shape) features will be effective in identifying families that are suitable for cost estimation. In practice, there is considerable interaction between the manufacturing and design attributes that affect cost. Hence, the C&C system developed encapsulates both design and manufacturing attributes that affect cost. Through examination of several die-cast parts, the following eight general characteristics that drive the cost of a die-cast part were identified: material, volume/size, basic shape complexity, cavity complexity, machining complexity, porosity specification, surface finish specification, and secondary operations (i.e., inserts or assembly).

2.2.3. Resolution of the Coding Scheme

One of the key issues that had to be addressed in designing the C&C system was the resolution of the coding scheme. If the resolution were very high (that is, if the code were very detailed), then code construction for parts would be cumbersome. In addition, retrieval of "similar" parts would be difficult, since each part might end up with its own unique code. On the other hand, if the resolution were low, one would face the risk of grouping "dissimilar" parts under the same code; by the same token, the coding scheme would retrieve a large number of parts for each code number. Fortunately, in estimating the cost of a die-cast part, the estimator is more interested in aggregate descriptions of the part's characteristics, such as the complexity of the die cavity and machining operations, than in the specific numbers and types of features. While specific part characteristics, such as ribs, impact the cost of the part, their presence does not have to be explicitly represented in the part's code. Hence, in the C&C system described herein, the part's code uses only one digit to represent the overall complexity of the cavity. However, determining the value assigned to this digit requires consideration (a weighted sum) of the specific features that contribute to the complexity of the cavity. It is important to reiterate that the purpose of the system is to assist the generation of the cost estimate by a human by retrieving parts with similar cost-driving characteristics, and not to generate the cost estimate automatically. Hence, it is sufficient for the system's resolution to enable recognition of similarities in cost-driving features in an aggregate manner and within a desired range of values.
2.3. Description of the C&C System for Die-Cast Parts

A seven-digit code that encapsulates the key cost-driving characteristics of die-cast parts has been developed. The code takes into account a variety of factors, including part geometry and size, material type, complexity of the die cavity, post-processing requirements, and part function or application. This coding scheme is summarized in Table 1. A more detailed representation of the seven-digit code is shown in Table 2. The identification of the third and fourth digit values requires the use of additional tables (see description below).
Table 1
Summary of the C&C System for Die-Cast Parts

Digit | Design and Manufacturing Attribute | Characteristic | Related Major Cost Component
1     | General Geometric Shape and Basic Mold Information | Volume or Size | Die Construction Cost; Material Cost; Processing Cost; Packaging and Shipping Cost
2     | General Geometric Shape and Basic Mold Information | Basic Shape and Type of Parting Line | Processing Cost; Die Construction Cost
3     | Internal Features and Material Information | Basic Cavity Complexity and Material Type | Die Construction Cost; Material Cost; Die Maintenance Cost
4     | Machining Characteristics and Secondary Operations | Machining Complexity and Secondary Operation | Machining Cost; Tooling Cost; Processing Cost
5     | Standard and Requirement | Surface Finish and Porosity Standard | Die Construction Cost; Surface Finishing Cost; Quality Inspection Cost
6 & 7 | Basic Part Function | Function or Application | Packaging and Shipping Cost; Quality Inspection Cost
The first digit divides parts of different sizes into five categories. The user is responsible for calculating the projected volume of the part. The projected volume is defined as the volume of the smallest imaginary rectangular box (projected envelope) that completely encloses the part. The second digit is shape oriented. Three fundamental shapes are considered, namely rotational, non-rotational, and slender (flat). Each shape is further divided into those that can be molded using a planar parting surface between the two die halves and those that require a non-planar parting surface. The user is responsible for identifying the fundamental shape of the part through visual inspection and for determining the location and shape of the parting line based on the blueprint and personal experience.
Table 2
GT Main Code for Die-Cast Parts

Digits 1, 2: General Code; Digits 3, 4, 5: Supplementary Code; Digits 6, 7: Additional Code

GENERAL
Digit 1 (Projected Volume, in cu. in.): 0 to 20 = 0; greater than 20 to 40 = 1; greater than 40 to 60 = 2; greater than 60 to 80 = 3; greater than 80 = 4

Digit 2 (Main Shape and Parting Line):
Rotational: planar in half = 0; planar not in half = 1; non-planar = 2
Non-Rotational: planar in half = 3; planar not in half = 4; non-planar = 5
Slender (Flat): planar in half = 6; planar not in half = 7; non-planar = 8

SUPPLEMENTARY
Digit 3 (Basic Cavity Complexity Rating and Material):
Zn: Simple = 0; Moderately Complex = 1; Complex = 2; Very Complex = 3
Al: Simple = 4; Moderately Complex = 5; Complex = 6; Very Complex = 7

Digit 4 (Machining Complexity Rating and Insert):
Insert: Simple = 0; Moderately Complex = 1; Complex = 2; Very Complex = 3
No insert: Simple = 4; Moderately Complex = 5; Complex = 6; Very Complex = 7

Digit 5 (Surface Appearance and Porosity):
Porosity allowed: utility grade = 0; commercial grade = 1; superior grade = 2
Porosity not allowed: utility grade = 3; commercial grade = 4; superior grade = 5

ADDITIONAL
Digits 6, 7: part function or application (e.g., housing, screw, shaft, switch, wheel)
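The mapping from part attributes to individual code digits in Table 2 can be sketched as follows; the function and argument names are ours, but the digit assignments follow the table, and the final line reproduces three digits of the Figure 1 sample part.

```python
# Sketch of three of the seven main-code digits, following Table 2.
# Helper names and string keys are our own conventions.

def digit1_projected_volume(v):
    """Five size categories by projected volume in cu. in."""
    for limit, digit in [(20, 0), (40, 1), (60, 2), (80, 3)]:
        if v <= limit:
            return digit
    return 4

def digit2_shape(shape, parting):
    """shape: 'rotational' | 'non-rotational' | 'slender';
    parting: 'planar-in-half' | 'planar-not-in-half' | 'non-planar'."""
    base = {"rotational": 0, "non-rotational": 3, "slender": 6}[shape]
    offset = {"planar-in-half": 0, "planar-not-in-half": 1, "non-planar": 2}[parting]
    return base + offset

def digit5_finish(grade, porosity_allowed):
    """grade: 'utility' | 'commercial' | 'superior'."""
    base = 0 if porosity_allowed else 3
    return base + {"utility": 0, "commercial": 1, "superior": 2}[grade]

# The Figure 1 sample part: 74.25 cu. in., non-rotational, non-planar
# parting line, commercial grade, porosity not allowed.
print(digit1_projected_volume(74.25),
      digit2_shape("non-rotational", "non-planar"),
      digit5_finish("commercial", porosity_allowed=False))   # 3 5 4
```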
The third digit contains information that classifies the Basic Cavity Complexity (BCC) and the type of casting material. The BCC rating attempts to capture and classify the elements that contribute to the overall die construction cost. Some of these cost-driving elements include cores, EDM-related features, bosses, ribs, threads, and tolerances. The weights used in the BCC rating were developed through a survey of "expert" cost estimators. To find the value of the third digit, the user completes the basic complexity rating table and selects the score with the corresponding material and level of complexity (Table 3).

Table 3
Basic Cavity Complexity Table (Basic Cavity Complexity Rating / Cavity Sub-code)

Digit | Weight | Item | x0 | x1 | x2 | x3
1  | 1.1  | Cast-in Thread | None | Straight | Pipe Tap | ACME
2  | 0.8  | Number of Dimensions | Between 0-50 | Between 50-100 | Between 100-150 | Greater than 150
3  | 0.8  | Cast Tolerance | N/A | Easy to hold | Difficult to hold | --
4  | 0.75 | Number of EDM-Related Features | None | Fewer than 7 | Between 7-15 | Greater than 15
5  | 0.7  | Number of Ribs (Heat Sinks, Fins, or Strengthening Ribs) | None | Fewer than 6 | Between 6-10 | Greater than 10
6  | 0.7  | Number of Parting Planes | 1 | 2 | 3 | Greater than 3
7  | 0.6  | Type of Groove/Slot | None | Plain | Contour | Plain and Contour
8  | 0.5  | Number of Brackets/Legs | None | 1 | 2 | Greater than 3
9  | 0.5  | Total Number of Cores in the Cavity | None | Fewer than 4 | Between 4 to 15 | Greater than 15
10 | 0.4  | Cast-in Insert | None | Yes | -- | --
11 | 0.4  | Number of Slide Cores | None | 1 | 2 | 3 or more
12 | 0.25 | Number of Solid Bosses | None | 1 to 4 | 5 to 6 | More than 6
13 | 0.25 | L/W Ratio of the Smallest Boss | None | -- | -- | --

Score (SC)    | Complexity Rating
0 < SC <= 6   | Simple
6 < SC <= 11  | Moderately Complex
11 < SC <= 16 | Complex
16 < SC <= 23 | Very Complex
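The scoring scheme of Table 3 (a weighted sum of 0-3 item ratings mapped through the score thresholds) can be sketched as below; the two-item call in the test of the weighted sum is illustrative only, while the threshold mapping follows the table.

```python
# Sketch of the Basic Cavity Complexity rating: weighted sum of item
# ratings (x0-x3, i.e. 0-3), classified by the thresholds of Table 3.

def bcc_score(ratings, weights):
    """Each item is rated 0-3; the score is the weighted sum of ratings."""
    return sum(w * x for w, x in zip(weights, ratings))

def bcc_class(score):
    """Map a BCC score to its complexity rating per Table 3."""
    if score <= 6:
        return "Simple"
    if score <= 11:
        return "Moderately Complex"
    if score <= 16:
        return "Complex"
    return "Very Complex"

# The Figure 1 sample part scores 12.6, which Table 3 rates Complex.
print(bcc_class(12.6))   # Complex
```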
The fourth digit covers the machining complexity and secondary operations (i.e., insertion or assembly). The Basic Machining Complexity (BMC) rating is designed to provide a relative measure of the machining complexity of different parts and is a function of the volume of production, the number of machining axes, tolerance, the number of features that need to be machined, and the type of machining operation (Table 4).

Table 4
Basic Machining Complexity Table (Basic Machining Complexity Rating / Machining Sub-code)

Digit | Weight | Item | x0 | x1 | x2 | x3
1 | --   | Closest Tolerance | -- | -- | Between 0.01 to 0.001 | Tighter than 0.001
2 | 0.8  | Number of Features Needed to Machine | None | 5 or less | Between 5 to 13 | 13 or more
3 | 0.75 | Number of Planes Needed to Machine | None | -- | -- | 3 or more
4 | 0.6  | Number of Operations/Processes Involved in Machining | -- | -- | -- | --

Score (SC)     | Complexity Rating
0 < SC <= 2.3  | Simple
2.3 < SC <= 4.7 | Moderately Complex
4.7 < SC <= 7.0 | Complex
7.0 < SC <= 9.5 | Very Complex
The fifth digit is used to characterize the surface finish and porosity requirements of the part. Utility grade implies that no surface finishing operations are needed. Commercial grade implies that vibratory or barrel finishing is needed. Superior grade requires additional work and special treatment (such as plating). The sixth and seventh digits of the coding scheme classify the functions and applications of the part. The main reason for including this information in the main code is that it facilitates retrieval of similar parts. It may sometimes be necessary to retrieve parts on the basis of specific features, such as brackets or ribs. The main seven-digit code described above does not contain enough detail to facilitate retrieval of parts on the basis of such detailed features. Hence, two auxiliary or sub-coding systems have been developed to retain information on specific characteristics that contribute to cavity complexity and machining complexity. The cavity sub-code consists of 13 digits, with each digit representing a part characteristic relevant to cavity complexity. Similarly, the machining sub-code consists of 4 digits that represent the characteristics contributing to machining complexity. The sub-codes for each part are created from the information provided by the user during main code construction; no additional effort on the part of the user is required. For instance, if the cost estimator is interested in examining all aluminum parts having more than 3 brackets or legs, the availability of the sub-codes enables the system to retrieve parts having this characteristic from the database.

2.4. Implementation of a Computer Aided Tool for Code Construction and Part Retrieval

A computer program and a database system have been developed on an IBM PC to aid users in code construction and retrieval of parts with similar cost-driving characteristics. To construct the code for a part, the user responds to a series of questions.
At the end of the question and answer session, the computer will generate a GT code for the part. The program also allows the user to specify a code and retrieve parts with varying degrees of similarity to the input part. The user can also retrieve parts based on specific digit values or specific features. An example of code construction is shown in Figure 1. Figure 2 illustrates the retrieval of similar parts.
Sample part A

General:
Projected volume = L x H x W = 4.5 x 6 x 2.75 = 74.25 cu. in. => Digit 1 = 3
Non-rotational part, non-planar parting line => Digit 2 = 5

Supplementary:
Material: aluminum. From Table 3, Basic Cavity Complexity score = 12.6 => rated Complex => Digit 3 = 6
From Table 4, Machining Complexity score = 5.95 => rated Complex; insert present => Digit 4 = 2
Porosity not allowed due to machining operation; special surface finish not needed (commercial grade sufficient) => Digit 5 = 4

Additional:
This part is a housing => Digit 6 = 3, Digit 7 = 3

For this part, the Main Code is 3562433, the Cavity Sub-Code is 0223320220132, and the Machining Sub-Code is 3211.
Figure 1. An example of code construction. The cost model underlying the system was validated by comparing the relationship between the complexity scores and the actual costs. For instance, data on die construction cost was compared with the basic cavity complexity scores for a number of parts. This analysis revealed a high degree of correlation (0.85) between the basic complexity score and the die construction cost (Figure 3). Although the BCC will not provide an exact cost for die construction, it nevertheless provides the estimator with a basis for comparing and verifying the die construction cost estimate for a new part against existing similar parts.
Figure 2. An example of retrieval of similar parts.
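The search-and-retrieve behavior illustrated in Figure 2 can be sketched with wildcard matching over the seven-digit main code; the database entries below are hypothetical.

```python
# Sketch of code-based retrieval: '?' in the query matches any digit,
# so leaving digits unspecified retrieves parts of varying similarity.
# The database contents are hypothetical.

def matches(query, code):
    """True if every query digit equals the code digit or is a '?' wildcard."""
    return len(query) == len(code) and all(
        q in ("?", c) for q, c in zip(query, code))

database = {
    "3562433": "sample housing",
    "3562533": "similar housing, superior finish",
    "1062433": "small housing",
}

# Exact retrieval, then a looser search that ignores digits 1 and 5:
print([p for c, p in database.items() if matches("3562433", c)])
print([p for c, p in database.items() if matches("?562?33", c)])
```

The same mechanism extends to the cavity and machining sub-codes, so that retrieval by a specific feature digit (e.g., number of brackets) is also possible.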
Figure 3. Scatter plot of Basic Cavity Complexity Score versus Die Cost (die costs ranging from $20,000 to $80,000).
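The validation step behind Figure 3 (correlating complexity scores with actual die costs) can be sketched with a standard Pearson correlation; the data points below are invented for illustration and are not the study's data.

```python
# Sketch of score-versus-cost validation via Pearson correlation.
# The scores and costs are hypothetical illustration values.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

scores = [4.2, 8.1, 12.6, 17.3]        # BCC scores (hypothetical)
costs = [22000, 38000, 55000, 78000]   # die construction costs, $ (hypothetical)
print(round(pearson(scores, costs), 3))
```

A correlation close to 1 (the study reports 0.85 on real data) supports using the score as a sanity check on a new die-cost estimate.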
2.5. Strengths and Weaknesses of the GT-Based Computer-Aided Tool

The system described above has several strengths. It is capable of retrieving parts based on different degrees of similarity or on the basis of special features (by using the sub-coding schemes). It is relatively simple and easy to use. The C&C system encapsulates the most important cost-driving characteristics of die-cast parts. The system also has a number of limitations. First, it is incapable of generating a cost estimate; it is a tool designed to assist the cost estimator rather than to replace him. Second, the system is incapable of reasoning with geometric (CAD) models. For instance, it cannot provide an estimate of the volume of the part; the user needs to estimate the volume, which is itself a time-consuming procedure. Developing the code for a part requires the user to answer a number of questions to provide information that could potentially be extracted directly from a CAD model. Third, the system addresses the important, but not all, cost elements associated with a die-cast part. A number of commercial cost-estimating software packages are available. However, these packages are, in essence, spreadsheet/database packages that still require the user to do a great deal of cost-related design analysis for each design. Perhaps the only other significant effort in cost-estimation for die-castings is that of the research group at the University of Massachusetts [1, 2]. They have developed a Group Technology based coding scheme to capture the relative cost implications of various design features. However, their approach deals with the complexity of a design only at an aggregate level and does not capture the impact of specific features (for instance, cavity details) on cost.
The limitations of the GT-based system described above and of other commercially available systems have motivated efforts towards the development of an intelligent, computer-integrated cost-estimation system that is capable of integrating geometric reasoning over CAD models with the decision logic and expert knowledge about the interactions between design specifications, processes, and costs. This generative cost-estimation system is described in the next section.

3. AN INTELLIGENT COST ESTIMATION SYSTEM FOR DIE-CAST PARTS
3.1. Cost Estimation with CAD models

There is a growing trend towards solid-modeling-based CAD and time-based competition. Indeed, CAD models are on their way to becoming the design blueprints of the future. Currently, die-castings companies do not have any cost-estimation tools that take advantage of the strengths of CAD models to compete in an environment where response time is becoming increasingly important. Die-castings companies that cannot respond quickly to customer requests for cost estimates, or are unable to use CAD models effectively to generate consistent and accurate cost estimates by integrating company-specific expert knowledge with CAD information, may be unable to compete and survive in the future. Therefore, there is a need for a computer-integrated cost-estimation system that is capable of analyzing a CAD model from the perspective of material, tooling, processing, and post-processing costs, and of generating a cost estimate accurately, consistently, and quickly by integrating geometric reasoning over the CAD model with expert knowledge about the interactions between design specifications, processes, and costs. The section below describes such a system, named the Intelligent Cost Estimator (ICE), under development at the University of Wisconsin-Madison.

3.2. Structure of the Intelligent Cost Estimator (ICE)

The structure of the intelligent cost-estimation system is shown in Figure 4. Under this paradigm, the CAD model of the design provided by the customer serves as the input to the cost-estimation system. Module 1 extracts cost-driving shape features and other design specifications (such as material, surface finish, and tolerance information). This information is then fed to the cost estimation modules.
Figure 4. Structure of the Intelligent Cost Estimator (ICE). The CAD model feeds Module 1 (Feature Recognizer/Analyzer), whose output drives Module 2 (Tooling Cost: die design, die fabrication), Module 3 (Processing Cost: equipment selection, number of cavities, die life estimation, cycle time estimation), Module 4 (Material Cost: part material cost, insert cost), and Module 5 (Post-Processing Cost: heat-treat cost, cleaning cost, machining cost); together these modules produce the cost estimate.
The tooling cost estimator (Module 2) consists of two sub-modules: (1) for die design, and (2) for die fabrication. The die design module evaluates the CAD model for identifying key characteristics of the die (such as parting surface, number of slides and cores, etc.). This enables the die fabrication module to identify the operations needed to manufacture the die and thereby to estimate the cost of the tooling. The die design module is also responsible for designing the trimming die that is necessary to remove unwanted material such as flash. These modules are all linked to knowledge-bases that encapsulate the expertise of die-makers. These modules also have interfaces to Module 3 to obtain information such as number of cavities and equipment choice. Estimation of processing costs is performed by Module 3. This module contains submodules for equipment selection, determination of the number of cavities, die life estimation, cycle time estimation, etc. These modules integrate process-specific experience-based knowledge. Module 4 estimates the material cost based on the volume of the part (this can be readily obtained from the solid CAD model) and the cost of inserts (such as bearings) in the die castings. Module 5 is dedicated to estimation of post-processing costs. This module includes submodules for heat-treat cost estimation, cleaning cost estimation, machining cost estimation, etc. The machining cost estimation module is capable of selecting machines, operations, cutting tools, and machining parameters, and is capable of estimating the overall cycle time.
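The way the five modules combine into one estimate can be sketched as a simple pipeline; the module bodies below are placeholders with invented cost figures, intended only to show how the tooling and per-part contributions are assembled, not how ICE computes them.

```python
# Structural sketch of the five-module ICE pipeline (Figure 4).
# All returned cost figures are invented placeholders.

def feature_recognizer(cad_model):            # Module 1
    """Extract cost-driving features and specs from the CAD model."""
    return {"volume": cad_model["volume"], "material": cad_model["material"]}

def tooling_cost(features):                   # Module 2: die design + fabrication
    return 45000.0                            # placeholder die cost

def processing_cost(features):                # Module 3: equipment, cavities, die life, cycle time
    return 0.42                               # placeholder per-part cost

def material_cost(features):                  # Module 4: part material + inserts
    return features["volume"] * 0.35          # placeholder cost factor

def post_processing_cost(features):           # Module 5: heat-treat, cleaning, machining
    return 0.18                               # placeholder per-part cost

def estimate(cad_model, production_volume):
    """Tooling is a one-time cost; the rest accrue per part produced."""
    f = feature_recognizer(cad_model)
    per_part = processing_cost(f) + material_cost(f) + post_processing_cost(f)
    return tooling_cost(f) + per_part * production_volume

print(estimate({"volume": 24.25, "material": "Al"}, production_volume=10000))
```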
3.3. Stages in the Development of ICE

Development of an intelligent cost estimating system is not a trivial task. The ICE project has focused its efforts on finding answers to a number of key questions that are critical to the development of such a system.
1. What are the cost-driving form features in a die-cast part?
2. What are the cost models that accurately capture the impact of each type of form feature on the various types of costs (tooling, processing, material, and post-processing)?
3. How can these form features be automatically recognized in a CAD model?
4. (Tooling cost issues) How can the die be designed automatically from the CAD model? How can the die fabrication cost be subsequently estimated?
5. (Processing cost issues) How can decisions such as equipment selection, number of cavities, estimated life of the die (i.e., number of shots), and cycle time be made from the CAD model?
6. (Post-processing cost issues) How can processes, equipment, cutting tools, and machining parameters for post-processing operations be selected? How can cycle time be estimated?

The following stages outline the methodology being used in the development of ICE.
1. Development of a taxonomy of the various geometric and non-geometric aspects of design specifications of die-cast parts that impact cost.
2. Development of a ranking and classification of cost-driving shape features.
3. Development of a set of algorithms for recognizing these shape features in a CAD model.
4. Creation of knowledge-bases that encapsulate the decision logic and expert knowledge used in estimating tooling, processing, material, and post-processing costs.
5. Development of cost models that reflect the impact of design specifications on various types of costs.
3.3.1. Development of a taxonomy of cost-driving design specifications

A taxonomy is being developed through examination of the parts currently being made by a die-castings company to identify all the design specifications that impact the cost of the parts. One of the key characteristics of die-cast parts is that localized features (such as ribs) or feature interactions have a significantly greater influence on cost and manufacturability than do overall design specifications (such as the length of the part). Knowledge about the cost and manufacturability implications of various shape features and other design specifications is being gathered through discussions with experts within the company. This knowledge will assist not only in the cost estimation effort but also in evaluating a product design in terms of its manufacturability. By identifying difficult-to-manufacture shape features, the die-casting company has an opportunity to recommend design changes that can eventually result in cost savings to the customer.

3.3.2. Development of a ranking and classification of shape features

Having identified the features that impact cost, the next step is to rank them in order of importance. In addition, similar features can be classified into families. This will be useful in the development of cost models. A preliminary ranking can be obtained through discussions with experts. This ranking can subsequently be validated and refined by comparison with the actual costs incurred in manufacturing various parts with those features.
3.3.3. Development of algorithms for shape feature recognition
Considerable research has already been done on feature recognition in CAD models using a variety of methods, such as syntactic pattern recognition [3], logic programming [4], graph theory [5], and convex decomposition [6]. However, most extant approaches to feature recognition can deal only with simple faceted and circular features. These algorithms are of limited utility for designs of die-cast parts, where curved (often sculptured) surfaces are more common than flat surfaces. Therefore, a set of feature recognition algorithms (some of which build on prior work) is being developed.
3.3.4. Creation of Expert Knowledge-Bases for Cost-Estimation
The die-casting process is highly complex, and hence it is very difficult to develop a process model for it. However, experts in the industry do not think in terms of equations and process models while estimating costs; yet they are reasonably good at estimating process behavior (for instance, cycle time, metal flow patterns, etc.). It is therefore prudent to gather this expert knowledge and encapsulate it in a knowledge-based system that can support the proposed computer-integrated cost estimation system (until accurate process models are developed). Indeed, much of this expert knowledge is experience-based; hence, additional knowledge can be accrued from a detailed study of the historical cost data of parts being manufactured in the company. As indicated earlier, these knowledge-bases will also assist in the intelligent evaluation of part designs from the perspective of manufacturability. Knowledge-bases for estimation of tooling, processing, material, and post-processing costs are being developed. In developing the knowledge base for tooling costs, considerable time is being spent on understanding the die design process and on relating it to the die fabrication process and associated costs. Similarly, in developing the knowledge base for processing costs, the rules that experts use to select equipment, number of cavities, etc., are being documented. This does not imply, however, that the knowledge-bases will be created blindly on the basis of expert input. Concurrent with knowledge extraction from experts will be an effort to rationalize and, where possible, model the logic of these decision-making processes. For instance, the estimation of die life is currently based purely on subjective judgment. The goal is to integrate the experience-based knowledge of operators with process knowledge and geometric reasoning, so as to develop a more reliable model for estimating the life of a given die.
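As an illustration of how such experience-based rules might be encapsulated, a die-life estimator could be sketched as below; the rules and all numeric values are invented placeholders, not rules elicited from the experts.

```python
# Hedged sketch of a rule-based die-life estimate (shots before major
# maintenance). Every rule and figure here is an invented placeholder.

def estimated_die_life(material, thinnest_wall_mm, vacuum_assisted):
    """Combine simple expert-style rules into a die-life estimate in shots."""
    base = {"Zn": 500_000, "Al": 120_000}[material]   # placeholder baselines
    if thinnest_wall_mm < 1.0:
        base = round(base * 0.7)    # thin walls stress the die (placeholder rule)
    if vacuum_assisted:
        base = round(base * 1.2)    # lower injection pressure extends life (see Sec. 1)
    return base

print(estimated_die_life("Al", thinnest_wall_mm=0.8, vacuum_assisted=True))   # 100800
```

The longer-term goal stated above is to replace such subjective multipliers with models grounded in process knowledge and geometric reasoning.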
3.3.5. Development of Cost Models
A set of preliminary cost-models is being developed to represent the cost implications of various design specifications. Cost models for each type of cost are being developed on the basis of expert advice and by studying real cost data. These cost models will not only form the basis for cost-estimation but will also facilitate the identification of opportunities for cost reduction through design modification. Methods by which such feedback can be provided are also being considered. The cost estimation models and procedures incorporate a combination of variant and generative approaches. When the cost for a certain part needs to be estimated, the system will first evaluate the part features in order to identify a family of parts in the database having similar cost-driving features. Once a family has been identified, the generative approach will be employed to construct the detailed cost estimate by following the cost models associated with the part family. This hybrid approach therefore enables the system to take advantage of the company's experience in manufacturing similar parts while retaining the flexibility and strength of being able to generate the cost estimate from part-specific characteristics. These cost models are intended to be dynamic in nature. In other words, since it will be difficult to define precise cause-effect relationships between cost-driving part characteristics and manufacturing cost, the initial goal is to develop preliminary cost models that are representative and consistent with real cost data. These cost models will then be refined continually by using the insight and data gained from the manufacture of new parts. A neural network based approach to continuous improvement of the cost models is being considered.
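As an illustration only, the hybrid variant/generative flow described above can be sketched as follows. The part families, feature names, cost models, and rates below are all hypothetical placeholders, not data from the project:

```python
# Illustrative sketch of a hybrid variant/generative cost estimate.
# All family names, feature names, and cost rates are hypothetical.

def jaccard(a, b):
    """Similarity between two sets of cost-driving features."""
    return len(a & b) / len(a | b)

# Variant step: each known part family is summarized by its typical
# cost-driving features and a family-specific generative cost model.
FAMILIES = {
    "thin_wall_housings": {
        "features": {"thin_wall", "ribs", "cored_hole"},
        # Generative step: cost built up from part-specific characteristics.
        "model": lambda p: 120.0 + 15.0 * p["num_cored_features"] + 0.8 * p["volume_cm3"],
    },
    "thick_brackets": {
        "features": {"boss", "thick_wall", "flat_face"},
        "model": lambda p: 80.0 + 1.2 * p["volume_cm3"],
    },
}

def estimate_cost(part):
    # Variant step: find the family with the most similar feature set.
    family = max(FAMILIES, key=lambda f: jaccard(part["features"], FAMILIES[f]["features"]))
    # Generative step: apply that family's cost model to this part.
    return family, FAMILIES[family]["model"](part)

part = {"features": {"thin_wall", "ribs"}, "num_cored_features": 2, "volume_cm3": 50.0}
family, cost = estimate_cost(part)
```

The variant step here is a nearest-family lookup by feature similarity; the generative step builds the estimate from the matched family's cost model, mirroring the two-stage procedure described in the text.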
4. SUMMARY

Cost-estimation plays a crucial role in the die-castings industry. Its importance will continue to grow in the future as customers become increasingly sensitive to response-time, cost, flexibility, and quality. Die-castings companies, therefore, need to be provided with tools that will enable them to respond quickly to customers with accurate cost-estimates. In particular, these companies need computerized cost-estimation tools that can generate price quotes on the basis of CAD models, which are destined to become the design blueprints of the future. This is precisely the focus of this research project. The proposed computer-integrated intelligent cost-estimation system would not only eliminate the errors and time-delays that result from extant manual cost-estimation practices, but would also provide companies with the ability to generate consistent and accurate quotes. In addition, the cost-estimation system can also be utilized to gain insight into opportunities for cost reduction through design modification. These strengths of the proposed computer-integrated intelligent cost-estimation system will be crucial for die-castings companies to succeed in the cost- and time-based competitive business environment of the future.

5. REFERENCES
[1] Poli, C., Kuo, S.M., and Sunderland, J.E., "Keeping a Lid on Mold Processing Costs," Machine Design, October 26, 1989.
[2] Poli, C. and Fredette, L., "Trimming the Cost of Die Castings," Machine Design, March 8, 1990.
[3] Staley, S.M., Henderson, M.R., and Anderson, D.C., "Using Syntactic Pattern Recognition to Extract Feature Information From a Solid Geometric Data Base," Computers in Mechanical Engineering, pp. 61-66, September 1983.
[4] Henderson, M.R. and Anderson, D.C., "Computer Recognition and Extraction of Form Features: A CAD/CAM Link," Computers in Industry, Vol. 5, pp. 329-339, 1984.
[5] Joshi, S. and Chang, T.C., "Graph-based Heuristics for Recognition of Machined Features From a 3D Solid Model," Computer-Aided Design, Vol. 20, No. 2, pp. 58-66, 1988.
[6] Kim, Y.S., "Recognition of Form Features Using Convex Decomposition," Computer-Aided Design, Vol. 24, No. 9, pp. 461-476, 1992.
Planning, Design, and Analysis of Cellular Manufacturing Systems A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors) © 1995 Elsevier Science B.V. All rights reserved.
Production flow analysis using STORM

S. A. Irani and R. Ramakrishnan

Department of Mechanical Engineering, University of Minnesota, 125 Mechanical Engineering, 111 Church Street SE, Minneapolis, MN 55455, USA

ABSTRACT

Production Flow Analysis (PFA) is a systematic manual method for cell formation and layout design. However, even after nearly thirty years since its appearance in the literature, there is no commercially available computer software for it. This is surprising, especially when there is at least one analytical method available in the literature that would solve each phase as accurately as would a human analyst. This chapter demonstrates a step-by-step implementation of the first three phases in PFA (Factory Flow Analysis, Group Analysis and Line Analysis) using standard algorithms available in the STORM package. A sample data set from the literature was used for illustrative purposes. Data collection sheets, analysis sheets and typical results expected from each step are presented. Companies interested in implementing cells will find that these analytical methods effectively complement their manual analyses.

1. INTRODUCTION

A company producing a range of assembled products operates in a medium-to-low volume and high variety environment. It faces the problem of coordinating the manufacture and assembly of batches of a large number and variety of parts. The task of coordination is complicated if (a) the parts have a variety of operation sequences and (b) the choice of shop layout is poor. The specific manufacturing requirements for a part generate a unique operation sequence for it, which translates into a particular route through the machines on the shopfloor. Due to the variety of parts, their operation sequences will also exhibit considerable differences in routing. Hence, if these sequences are merged, a complex material flow network results on the shopfloor with no single dominant flow path.
The complexity of this material flow network will increase if the shopfloor has a pure process (or functional) layout. In this type of layout, the manufacturing equipment is classified by function (or type). Machines with identical functions are located in the same section. Such a layout requires that each batch of parts
pass through a different section for each operation, often backtracking to previously visited sections. This results in poor machine utilization due to frequent machine setups as dissimilar parts are loaded successively on a machine. Work-in-process levels are high since batches of parts reside in queues at most machines. Additional delays are experienced in transporting these batches between different sections for successive operations. Hence, the average throughput time of a part increases. Part production and product assembly schedules experience considerable revisions in their projected completion dates due to inefficient work progressing.

2. PURPOSE OF CELL FORMATION

Given the above scenario in batch manufacturing, the purpose of cell formation is to simplify material flow control on the shopfloor. This is achieved by dividing the large number of parts into families which use similar combinations (or sequences) of machines. Simultaneously, groups of machines (or cells) are created by a physical decentralization of the functional layout. This is usually done by matching the operation sequences of the parts to identify families using the same set of machines for their operations. A cell is usually dedicated to a single part family, and each part family must preferably be produced completely within its cell. Ideally, each cell should operate independently of the other cells to deliver batches of parts belonging to a particular family to meet an assembly or delivery schedule. Within a cell, production scheduling, operator-machine assignments, tool and fixture supply, machine maintenance, raw material ordering, finished stock storage, inspection, etc. are the responsibility of the cell foreman. In some cases, parts from one cell may be allowed to travel to and from another cell for further processing prior to assembly or delivery.
However, this would violate the fundamental assumption of cell design that the cells should have no intercell flows.

3. QUESTIONS TO ANSWER DURING CELL DESIGN

In order to successfully design and implement a Cellular Manufacturing System for a factory, several questions need to be answered at different levels of decomposition of the factory. Much depends on the extent to which the company plans to reorganize the existing factory into part family-based or product-based cells. The issues to be addressed have been expressed as a set of questions at the factory, shop and cell levels below:
For the entire Factory:
* How many shops in the factory are suitable for conversion into cells?
* Can the complete part population be broken down into families? If so, how many part families exist? How many parts do not conform to any of the existing families?
* If the parts are derived from several products, are these products produced in the same period? Or will the compositions of the part families change over time due to product mix and volume changes?
For each Shop in the Factory:
* How much of each shop can be converted into cells?
* How many cells can be formed? Do they experience intercell flows? Which parts, machines and cells are involved in these flows?
* Are the cells formed for each family independent? If machine utilization is important, will some cells need to be merged to prevent duplication of equipment across several cells?
* Is the part mix and demand for each part fixed? Would a change in the part mix and/or demand volumes significantly disrupt the compositions of the existing part families and cells?
* If the cells will experience fluctuating intercell flows over time, can (a) those parts whose family assignments are subject to change be identified, and (b) those machines whose distribution among the cells is fluctuating be identified? If so, should only a partial conversion to a cellular layout with an efficient handling system be designed to control the intercell flows?
* What is the optimal layout for the shop?
For each Cell in a Shop:
* Which part family is assigned to the cell?
* Which machine types are placed in the cell?
* What are the total capacity requirements for each machine type?
* How many machines of each type can be assigned to the cell?
* What intercell flows exist because of exception operations? If so, which cells, machines, and parts are involved?
* What intercell flows exist because of machine overload? If so, which cells, machines or parts are involved?
* Which external resources must be used by the cell? What are the locations of these resources with respect to the cell?
* Which other cells have common machine types to which the parts can be rerouted in case of breakdowns, worker absenteeism, maintenance stoppages or machine overload due to demand changes?
* What is the optimal layout for the cell?

4. PRODUCTION FLOW ANALYSIS

Production Flow Analysis (PFA) [BURB93] is an excellent technique developed by Prof. J. L. Burbidge for machine grouping, part family formation, cell layout and shop layout design that is suitable for industrial implementation. When applied to a single factory, it consists of four stages, each stage achieving material flow reduction for a progressively smaller portion of the factory. In Factory Flow Analysis (FFA) (Figure 1), if parts are observed to backtrack between any of the shops, such as the Machine Shop, Forge, Foundry, Press or Assembly Shop, these flows are eliminated by a minor redeployment of equipment. FFA may often be redundant for a factory which essentially consists of a single machine or fabrication shop. In Group Analysis (GA) (Figure 2), the flow in each of the shops identified by FFA is analyzed. GA analyzes the flow interactions among the facilities and operation sequences of the parts to identify manufacturing cells. Loads are calculated for each part family to obtain the equipment requirements for each cell. Each cell usually contains all the equipment necessary to satisfy the complete manufacturing requirements of its part family. Due to equipment sharing problems, some intercell material flow may exist. In Line Analysis (LA) (Figure 3), a layout is designed for the machines assigned to each cell. It considers the operation frequencies and sequences of the parts and develops a cell configuration which allows efficient transport. The layout shape must also encourage multi-machine tending by some operators. In Tooling Analysis (TA) (Figure 4), the principles of GA and LA are integrated with Classification & Coding data on the shape, size, material, tooling, fixturing, etc. attributes of the parts. TA helps to schedule the cell by identifying sub-families of parts with similar operation sequences, tooling and setups.
It seeks to sequence parts on each machine tool and to schedule all the machines in the cell to exploit setup similarities or dependencies on the machines in order to achieve short throughput times for the parts. An additional step is missing in PFA: Shop Layout Analysis (SLA). When multiple interdependent cells are created, the layout of the shop has to be planned to minimize flow times for the intercell flows caused by any unavoidable machine sharing and exception operations.
4.1. A major limitation of production flow analysis

A major limitation of PFA as a tool for the design of cells is the persistent use of manual and visual methods for solving the individual problems related to machine grouping, part family formation, machine distribution among the cells subject to capacity constraints, cell layout design and shop layout design:

(i) choosing the number of groups that one wishes to form and determining the size of each group a priori (this is an open problem in the Cluster Analysis literature)
(ii) identifying the groups from the machine-module matrix (this is essentially a variation of the machine-part matrix clustering problem that has been addressed in [ARVI93] and [CHEN93])
(iii) merging modules subject to load balancing and cell size constraints
(iv) splitting modules formed around S machines which contain parts from different families
(v) deciding which parts to subcontract or reallocate to other cells, etc.

The computer is utilized only for storage, retrieval and sorting of data. Due to the absence of an analytical methodology, considerable reliance is placed on the judgement of the analyst ([LAW80], [RAY84], [SCHO73]). This may be the reason that, since the introduction of the method in the 1950's, only 36 factories have reportedly implemented PFA for cell design ([BURB92], [BURB93]).
4.2. Computer models for production flow analysis
The first reported applications of quantitative methods to PFA are McAuley's [McAU72], Carrie's [CARR73] and Rajagopalan's [RAJA75]. A modern development is COALA, a full-fledged computer implementation of a suite of computer models for PFA that has been developed jointly by the INRIA Lorraine in France and the Systems Research Center at the University of Maryland [PROT91]. If one studies the structure of Production Flow Analysis, it will be observed that all four stages possess the structure of analytical problems in the literature for which efficient solution techniques have been developed. Briefly, some of the relevant problems which are analogous to each of the four stages are as follows:
Factory Flow Analysis: Linear Placement Problem (a one-dimensional version of the Quadratic Assignment Problem (QAP)) [CHEN93] (this is useful for arranging the shops in the factory in a unidirectional sequence of dominant forward flows), Circuit Detection in Directed Graphs (this is useful for detecting material flow paths which involve returning to the same shop for subsequent operations), Minimum Equivalent Digraph (this is useful for eliminating flows between certain pairs of departments already included in other dominant flow paths), Strong Components in a Digraph (this helps to identify a subset of shops which are highly interconnected by material flows and could be merged into a single shop), String Clustering (this is useful for identifying PRN's which connect the same subset of shops in a similar sequence), etc. These algorithms are described in any standard text on computer algorithms [BAAS88].
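One of the graph computations mentioned above, finding strong components in the shop-level flow digraph, can be sketched as follows. The shop names and flows are invented for illustration and are not the chapter's data:

```python
# Sketch: strong components of a shop-level material flow digraph.
# Shops in the same strong component are linked by mutual (backtracking)
# flows and are candidates for merging into a single shop.
from collections import defaultdict

flows = [  # (from_shop, to_shop) pairs extracted from part routes
    ("Foundry", "Machine Shop"), ("Forge", "Machine Shop"),
    ("Machine Shop", "Press"), ("Press", "Machine Shop"),  # backtracking flow
    ("Machine Shop", "Assembly"), ("Press", "Assembly"),
]

graph = defaultdict(set)
for a, b in flows:
    graph[a].add(b)

def strong_components(graph):
    """Kosaraju's algorithm: DFS finish order on G, then DFS on G reversed."""
    order, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for u in nodes:
        if u not in seen:
            dfs1(u)
    rev = defaultdict(set)
    for a in list(graph):
        for b in graph[a]:
            rev[b].add(a)
    comps, assigned = [], set()
    for u in reversed(order):
        if u not in assigned:
            comp, stack = set(), [u]
            while stack:
                x = stack.pop()
                if x in assigned:
                    continue
                assigned.add(x)
                comp.add(x)
                stack.extend(rev[x] - assigned)
            comps.append(comp)
    return comps

comps = strong_components(graph)
# The Machine Shop and Press form one strong component because parts
# backtrack between them; the other shops are singletons.
```

Circuit detection falls out of the same result: any strong component with more than one shop contains a material flow circuit.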
Group Analysis: Linear Placement Problem [CHEN93] (this is useful for reordering the initial machine-part matrix by permuting the rows and columns to yield a Block Diagonal Form).

Line Analysis: Quadratic Assignment Problem ([BUFF64], [KUSI90]) (this is useful for obtaining the layout of machines within a cell, regardless of whether or not the cell shape is known or the machines have footprints with equal areas and/or identical shapes).

Shop Layout Analysis: VLSI cell placement algorithms [SHAH91] (these are useful for placing the cells on the shopfloor, regardless of whether or not the cells have equal areas and/or identical shapes).

Tooling Analysis: Traveling Salesman Problem [CHEN93] (this is useful for sequencing a family of parts on the primary machine(s) in a cell to minimize setup changeover times for consecutive parts), Cluster Analysis (this is useful for grouping parts into setup-based, tooling-based or geometry-based families to minimize setup changeover delays).

5. ANALYTICAL MODELS FOR PRODUCTION FLOW ANALYSIS

Our objective in writing this chapter is to encourage the use of Operations Research models, and of the computer software that implements them, for rapid deployment of PFA in companies. Some of the advantages of being able to implement PFA in a company using a computer package such as COALA are: (a) large data sets can be analyzed; (b) what-if analyses can be conducted quickly to adapt the results to specific situations in each company; (c) the standardization of the results will allow easier dissemination of the results among interested companies; (d) some sense of confidence can be associated with the computer-generated results; and (e) the analyses can be repeated in several other companies without requiring specially trained personnel. We have identified a set of easy-to-use models that have already been implemented in educational software such as the STORM package [EMMO92]. STORM provides a suite of modules for Linear Programming, Graph and Network Theory, Inventory Control, Facility Layout, Material Requirements Planning, Production Scheduling, etc. Even though the complete cell design problem cannot be solved by these modules in STORM, they are sufficient for performing a quick-and-dirty feasibility study for the implementation of PFA in a small or medium-sized company. The use of the different relevant modules in STORM to partially computerize and analytically solve the first three stages of PFA is described in this chapter.
5.1. Factory flow analysis with STORM

In FFA, the detailed routes for each part are converted to Process Route Numbers (PRN's) to capture the movement of each part between the different shops in the factory. Figure 5(a) lists the major shops in a factory described in [BURB93A]. Figure 5(b) lists the different PRN's and the number of parts that follow each. A Pareto analysis of this data is shown in Figures 5(c) and 5(d). Figure 5(d) clearly shows that PRN's 16, 8, 32, 35, 30 and 20 have the dominant flow volumes. Using the FROM/TO chart in Figure 5(e) generated from this data, the unidirectional sequence of shops shown in Figure 5(f) was generated using the Facility Layout module of STORM. The same figure shows how the PRN's with small numbers of parts can be mapped onto this factory layout to detect parts which backtrack to shops located earlier in the factory-level production flow sequence. These results obtained using STORM clearly demonstrate the feasibility of rapid and effective computer implementation of FFA in practice.
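The FFA bookkeeping described above, building a FROM/TO chart from PRN routes and flagging backtracking against a unidirectional shop sequence, can be sketched as follows. The PRN routes, part counts, and shop sequence below are invented for illustration and are not the Figure 5 data set:

```python
# Sketch of FFA bookkeeping: FROM/TO chart + backtrack detection.
# PRN routes and part counts are hypothetical, not the Figure 5 data.
from collections import Counter

# PRN -> (route through shops, number of parts following it)
prns = {
    16: (["Foundry", "Machine Shop", "Assembly"], 120),
    8:  (["Forge", "Machine Shop", "Press", "Assembly"], 90),
    32: (["Machine Shop", "Press", "Machine Shop", "Assembly"], 15),  # backtracks
}

# FROM/TO chart: total parts moving between each ordered pair of shops.
from_to = Counter()
for route, n_parts in prns.values():
    for a, b in zip(route, route[1:]):
        from_to[(a, b)] += n_parts

# An assumed dominant forward sequence (the kind of result STORM's
# Facility Layout module would produce for Figure 5(f)).
sequence = ["Foundry", "Forge", "Machine Shop", "Press", "Assembly"]
rank = {shop: i for i, shop in enumerate(sequence)}

# A PRN backtracks if any move goes to a shop earlier in the sequence.
backtracking = [prn for prn, (route, _) in prns.items()
                if any(rank[a] > rank[b] for a, b in zip(route, route[1:]))]
```

Here PRN 32 is flagged because its route returns to the Machine Shop after visiting the Press, exactly the kind of flow FFA seeks to eliminate by redeploying equipment.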
5.2. Group analysis with STORM
The foundation of Group Analysis is the generation of machine groups and matching part families from a 0-1 machine-part matrix. This is achieved by reordering the initial matrix by generating new machine and part permutations which would match the corresponding clusters in both dimensions in a Block Diagonal Form [CHEN93]. The permutations for the final matrix can be generated by solving the appropriate Linear Placement Problem (LPP) using the Facility Layout module in STORM. The student version of STORM (manual included) retails for about $80 and the Facility Layout module can handle 50 machines. This version of STORM could be used to solve significantly large machine-part matrices by exploiting the concept of the "module" proposed by Burbidge. The module is essentially a small machine-part matrix identified with the remaining parts in the machine usage list of a key machine type. Using the sorting techniques adopted by Burbidge to create the modules, the usually large number of parts will always be reduced to a smaller number of modules, a number which can never exceed the number of machine types in the shop. Hence, a 50 X 50 machine-module matrix representing a medium-sized machine shop could easily be solved using STORM to identify potential machine groups and part families. In fact, instead of purchasing the professional version of STORM which costs about $1000 and can handle 150 machines, a company could easily use in-house programmers to implement this particular module. We propose the following steps for quick implementation of GA by a company using STORM:
Step 1: If an actual industry implementation of GA were planned for a shop, the following data would be required:
Machines: The information that must be collected includes Machine Identification Number, Description of machine type, Current location in the shop, Number available in the shop which are available for relocation into the cells, Purchase Cost, Special environmental and utility requirements (dust, vibration, fumes, power, compressed air, etc.), Processing capabilities (type of process performed, size ranges of work pieces accommodated, surface finish, tolerances, materials processed, shapes generated, production rate, available ranges for process parameters, level of operator skill required), Capacity data (scrap rate, average setup time, machine hour rate, failure rate), Other machine types with similar processing capabilities, Availability of other support equipment for relocation into the cells, Machines to be replaced or purchased, Restrictions on the inclusion of certain machines in the cells due to ventilation needs, supply of electricity and/or compressed air, vibrations caused by operation of certain machines, furnace heat, etc., Footprint dimensions for each machine, Weight, Cost of removal and reinstallation of the machine in another location, Current shop layout showing locations of the available machines.
Parts:
Part Name, ID Number, Functions, Assembly where used, Number required per assembly, Route sheet and operation sequence which shows details such as machines used, tools, fixtures and process parameters, Design drawing (key dimensions, shape features, hardness, tolerances, surface finish, etc.), Material composition, Raw material shape, Production batch quantity, Annual demand volume, Inspection requirements. In practice, all of the information listed earlier would be required for proper implementation of GA. For our case study, we have used a data set from the literature [VAKH90] which provides the minimum information required for initiating GA, viz. list of parts, setup and operation times on the particular machine type used for each operation on each part, batch quantity for each part, list of machine types, and number of machines of each type available.

Step 2:
Using the data obtained in Step 1, develop the route sheets for the parts. The results from this step should appear as in Figure 6. For each operation on a part, the machine type used and the total setup plus operation time on it for the complete batch are shown as a single number.

Step 3:
Create the initial machine-part matrix as shown in Figure 7 using the results from Step 2. Using the operation sequence for each part, enter 1 at the intersection of row I and column J if part I uses machine J; otherwise, enter 0, regardless of the number of times a particular machine type appears in the sequence.

Step 4:
Compute the similarity coefficients for machines. These are computed as below:

S^M_KL = N^P_KL / (N^P_K + N^P_L - N^P_KL)

where

S^M_KL is the similarity coefficient for machines K and L,
N^P_KL is the number of parts processed by both machines K and L,
N^P_K is the number of parts processed by machine K,
N^P_L is the number of parts processed by machine L.

Example:

The similarity coefficient for Machines 1 and 4 is calculated as 4/(7+8-4) = 0.36. The same computation is to be repeated for all pairs of machines. The complete results are shown in Figure 8(a).

Step 5:
Compute the similarity coefficients for parts. These are computed as below:

S^P_KL = N^M_KL / (N^M_K + N^M_L - N^M_KL)

where

S^P_KL is the similarity coefficient for parts K and L,
N^M_KL is the number of machines used by both parts K and L,
N^M_K is the number of machines used by part K,
N^M_L is the number of machines used by part L.

Example:

The similarity coefficient for Parts 4 and 12 is calculated as 1/(4+3-1) = 0.17. The same computation is to be repeated for all pairs of parts. The complete results are shown in Figure 8(b).
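Steps 3-5 can be sketched in a few lines of code. The 0-1 matrix below is a small invented example, not the Figure 7 data; here rows are machines and columns are parts (transpose if your matrix is laid out the other way):

```python
# Sketch of Steps 3-5 on a small illustrative 0-1 machine-part matrix.
# matrix[k][i] = 1 if part i uses machine k (not the Figure 7 data).
matrix = [
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
]
n_machines, n_parts = len(matrix), len(matrix[0])

def machine_similarity(k, l):
    """S^M_KL = N^P_KL / (N^P_K + N^P_L - N^P_KL): parts shared by both
    machines over parts used by either (a Jaccard coefficient on rows)."""
    both = sum(matrix[k][i] and matrix[l][i] for i in range(n_parts))
    nk = sum(matrix[k])
    nl = sum(matrix[l])
    return both / (nk + nl - both)

def part_similarity(i, j):
    """S^P_KL is the same coefficient computed on columns (machines)."""
    both = sum(matrix[k][i] and matrix[k][j] for k in range(n_machines))
    ni = sum(matrix[k][i] for k in range(n_machines))
    nj = sum(matrix[k][j] for k in range(n_machines))
    return both / (ni + nj - both)

# Full coefficient matrices, analogous to Figures 8(a) and 8(b).
S_m = [[machine_similarity(k, l) for l in range(n_machines)] for k in range(n_machines)]
S_p = [[part_similarity(i, j) for j in range(n_parts)] for i in range(n_parts)]
```

These matrices are exactly the travel charts fed to STORM's Facility Layout module in Steps 6 and 7.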
Step 6 :
Generate the least cost machines permutation for the final matrix using the matrix of similarity coefficients in Figure 8(a) as a travel chart in the Facility Layout module of STORM. The necessary steps for using this module in STORM are explained in the Appendix. Step 7 :
Generate the parts permutation for the final matrix using the matrix of similarity coefficients in Figure 8(b) as a travel chart in the Facility Layout module of STORM. The necessary steps for using this module in STORM are explained in the Appendix. Step 8 :
Create the final machine-part matrix shown in Figure 9 by combining the two permutations obtained in Steps 6 and 7. Notice how the entries in the initial matrix have been clustered along the diagonal to yield a block diagonal form (BDF), which enables the cells to be identified.

Step 9:
Partition the BDF obtained in Step 8 into machine groups and part families. Figure 10(a) shows an initial 2-cell partition of the BDF. Figure 10(b) shows how the boundaries between the two cells may need to be adjusted by duplicating machines to eliminate the intercell flows caused by the exception operations in Figure 10(a). Figure 11 shows a 3-cell partition of the same BDF. Steps 10-17 are intended to determine those machine types which are being shared by several cells and, if machine duplication is required, how many of each shared type must be allocated to each cell. The notation used in these steps is described below:

Tij = Capacity requirement for operation i on machine type j
SUij = Setup time for the batch of parts needing operation i on machine type j
OTij = Operation time per part requiring operation i on machine type j
Qi = Batch quantity of part needing operation i
AR = Reject or scrap allowance for any machine type
Aj = Capacity available per machine of type j
AU = Machine utilization factor
C = Duration of production period
Nj = Number of machines of type j required in the cell (can be rounded up or down)

Step 10:
For each part i, compute the total capacity requirement for all operations that it requires on machine type j as follows:

Tij = SUij + OTij * Qi * (1 + AR)

In our case study, these times have already been computed and would appear as shown earlier in Figure 6.

Step 11:
Transfer the total capacities computed in Step 10 to the final machine-part matrix. On the final machine-part matrix, substitute these values for the '1' representing all operations that Part i requires on Machine type j. Figure 11 shows these times from Figure 6 entered in the 3-cell partition, which will be discussed from hereon. Step 12 :
Depending on the cell partitions identified in Step 9, we need to compute the total capacity requirements for each machine type associated with each cell. For any particular cell of interest, compute the total capacity requirement for all the operations assigned to it on machine type j as follows:

Σi Tij = Σi {SUij + OTij * Qi * (1 + AR)}

Repeat the above calculations to find the total capacity requirement for each machine type required in the cell. These values are shown in the rows marked 'M/C-CELL 1', 'M/C-CELL 2' and 'M/C-CELL 3' for the 3-cell partition in Figure 12.
Step 13:

Compute the total available capacity per machine of type j for the entire production period as follows:

Aj = AU * C

For our case study, we have assumed AU = 80% and C = 8 hours, whereby Aj = 384 minutes.

Step 14:
Using the total capacity requirements calculated in Step 12, compute the number of machines of type j required for the cell as follows.
Nj = Σi Tij / Aj
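The arithmetic of Steps 10-14 can be sketched as follows, using hypothetical operation data (not the Figure 6 values); the scrap allowance AR is an assumed value:

```python
# Sketch of the Step 10-14 arithmetic for one machine type in one cell.
# Operation data and AR are hypothetical; times are in minutes.

AR = 0.05   # assumed reject/scrap allowance
AU = 0.80   # machine utilization factor (as in the case study)
C = 8 * 60  # production period: an 8-hour day, in minutes
A = AU * C  # Step 13: available capacity per machine = 384 minutes

# Step 10: operations assigned to this cell on machine type j:
# (setup time SU, operation time per part OT, batch quantity Q)
operations_on_j = [(30.0, 2.0, 100), (45.0, 1.5, 200)]

# Tij = SUij + OTij * Qi * (1 + AR) for each operation
T = [su + ot * q * (1 + AR) for su, ot, q in operations_on_j]

# Step 12: total capacity requirement for machine type j in this cell
total = sum(T)

# Step 14: Nj = sum_i Tij / Aj, the number of machines of type j needed
Nj = total / A  # a fractional result is rounded up or down in Step 16
```

With these numbers, the two operations require 240 and 360 minutes, so the cell needs 600/384 = 1.5625 machines of type j, which Step 16 would round to two machines or partially reroute.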
Repeat the above calculations to find the number of machines of each type required in the cell. They are shown in brackets beside the values for the actual machine requirements shown in the rows marked 'M/C-CELL 1', 'M/C-CELL 2' and 'M/C-CELL 3' for the 3-cell partition in Figure 12. Step 15:
Repeat steps 12-14 for each of the remaining cells identified in Step 9. Step 16 :
Compare the machine requirements and available number of machines to check whether or not it is possible to distribute the required number of machines of each type among the cells. If appropriate, adjust the cells to match the requirements and availability of the machines. These adjustments include distributing machines of the same type among several cells, assigning particular operations on parts to cells other than the host cell in which most of the operations for the part will be performed, moving complete parts into other cells, placing shared machine types in a centrally located Common Facilities cell accessible to all the cells, etc. This step is extremely important because it determines the exact number of machines of each type assigned to a cell, those operations on parts which will be performed within the cells and those operations on parts which must be sent to machines available in other cells. For the 3-cell partition, these adjustments will now be discussed in detail with reference to Figure 12.

Cell 2, Machine Type 8:
The 0.06 machine requirement is not enough to justify a machine in the cell. Put the single available machine of this type in Cell 1 and let Part 6 move from Cell 2 to Cell 1 for the required operations.
Cell 2, Machine Type 4:
The 0.09 machine requirement is not enough to justify a machine in the cell. Put both machines in Cell 1 and let Part 4 move from Cell 2 to Cell 1 for the required operations.
Cell 3, Machine Type 1:
The 0.08 machine requirement does not justify placing a machine in the cell. Assign the 2 operations for Parts 15 and 16 to Cell 2. Cell 1 will then have a 0.59 machine requirement which justifies allocation of one machine of this type to the cell. Thereby, Cell 2 will have a nearly equal machine requirement of 0.58 which would justify the allocation of one machine of Type 1 to it, also.
Cell 1, Machine Type 7:
There are 2 options which we can explore to determine which parts to send from Cell 1 to either Cell 2 or Cell 3:

Option 1: Assign all three parts to Cell 2, which results in machine requirements of 1.71 in Cell 2 and 1.61 in Cell 3.

Option 2: Assign Parts 2 and 3 to Cell 2 and Part 10 to Cell 3, which results in machine requirements of 1.66 in both cells.

Option 1 is better if you wish to minimize the difficulty of tracking the flow of parts to several cells. Option 2 is better if you wish to have uniform machine loads on the machine type in both cells.
Cell 3, Machine Type 10:
Assign one machine to Cell 3 to satisfy the 360 minutes (a requirement of approximately one machine) required by Part 14. Move the other two parts (#15, #16), with a total machine requirement of 0.23, to Cell 2, which will then require exactly two machines. However, this distribution of the machine type presents a problem because breakdowns on this overloaded machine type can severely affect the performance of the cell.
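The fractional machine requirements driving these adjustments can be computed mechanically from routing data of the kind shown in Figure 6. The following is a minimal sketch; the 480-minute working day and the workload rows are illustrative assumptions, not the case-study figures.

```python
# Sketch: fractional machine requirements per (cell, machine type).
# Assumption: one 480-minute shift per machine per day; workload rows
# give each part's daily minutes on a machine type in its host cell.

from collections import defaultdict

SHIFT_MINUTES = 480  # assumed daily capacity of one machine

# (part, machine_type, minutes_per_day, host_cell) -- illustrative rows
workload = [
    (14, 10, 360, 3),   # a near-full machine load in Cell 3
    (15, 10, 55, 2),    # two small loads that together stay well
    (16, 10, 55, 2),    # below one machine in Cell 2
]

requirement = defaultdict(float)
for part, mtype, minutes, cell in workload:
    requirement[(cell, mtype)] += minutes / SHIFT_MINUTES

for (cell, mtype), req in sorted(requirement.items()):
    # A fraction well below 1.0 suggests rerouting the operations to
    # another cell rather than duplicating the machine type here.
    print(f"Cell {cell}, machine type {mtype}: {req:.2f} machines")
```

Whether a fraction "justifies" a machine is the planner's judgment call; the chapter's Step 16 applies exactly this comparison of fractions against whole machines.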
Step 17:
Update the machine and part compositions of the cells to reflect the decisions taken in Step 16.
Step 18:
At this stage, a company need not go on to the next two stages in PFA, viz. Line Analysis and Shop Layout Analysis. Instead, it could undertake the task of eliminating the intercell flows and making the cells independent of each other. This task is best performed by a Concurrent Engineering team of machinists, process planners and part designers working together. Intercell flows arise either because an exception operation requires a machine type not assigned to the cell or because the available capacity on a machine type is not sufficient for the operations on all the parts in the family requiring that machine type. Some of the strategies that could be used for eliminating the need for additional capacity on identical machines outside a particular cell are:
a) Eliminate exception operations: Redesign parts to eliminate those features requiring operations on machines outside the cell, or reroute those operations to alternative machines available within the cell.
b) Increase capacity on overloaded machines by eliminating wasted time on them: This can be accomplished by optimizing process parameters to reduce processing times, performing in-process gauging instead of stopping machines for inspection, group scheduling to reduce setup changeover times, reducing labor absenteeism, encouraging multimachine tending by skilled operators, reducing machine downtime through preventive maintenance, using overtime, speeding the transfer of parts between consecutive machines, etc.
c) Reduce scrap levels: Improve quality control within the cell to reduce the machine capacity wasted in making defective or reject parts.
d) Purchase multi-function machining or turning centers: These reduce the number of manufacturing stages for each part and increase available time through unmanned operation of automated machine tools.
For a more exhaustive list of strategies, the reader is referred to the literature [ARVI93].
5.3. Line analysis with STORM
Having completed Group Analysis, the layout for each cell (Line Analysis) must be planned. STORM has limited capability for Line Analysis because it cannot account for machine shapes and sizes, non-linear cell layout shapes such as U, H or W, duplication of the same machine type at two or more locations within the cell, and unequal material handling costs for forward vs. backward vs. cross flows of parts in a non-linear layout. It can, however, generate a linear layout for the machines in each cell, which can then be bent into a U-shape. Given the simplicity of the cell layout design techniques employed in industry [WREN94], the capabilities of STORM are sufficient for rough-cut and rapid modeling purposes. For this stage of our case study, we used only the following data, obtained from Figure 12: the list of parts assigned to the cell, the operation sequence and batch quantity for each part, the list of machine types assigned to the cell, the number of machines of each type assigned to the cell, and
the machines in other cells that one or more parts must visit. In an actual industrial implementation, however, the shape and area requirements for each machine type, and the infeasibility of relocating certain machines required by the cell, must also be taken into consideration. For each of the three cells in the 3-cell partition created in Step 9, the routing data for its family of parts shown in Figure 6 was converted into a Travel Chart, as shown in Figures 13(a)-(c). A single Input and Output station was assumed for each cell when entering the Travel Chart into the Facility Layout module of STORM. The best layout generated for each cell is shown in Figure 14. The layouts for the cells of the 2-cell partition created in Step 9 are shown in Figure 15.
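The routing-to-Travel-Chart conversion is mechanical: every consecutive pair of machines in a part's operation sequence adds that part's daily batch quantity to the corresponding from-to entry. A sketch under assumed routings (a simplified subset, not the full Figure 6 data):

```python
# Sketch: building a from-to Travel Chart from operation sequences.
# Each adjacent machine pair (a, b) in a part's routing adds the part's
# daily batch quantity to chart[a][b].

from collections import defaultdict

# part -> (machine-type sequence, batches per day); assumed values
routings = {
    1: ([1, 4, 8, 9], 2),
    4: ([1, 4, 7, 9], 3),
    7: ([6, 4, 8, 9], 2),
}

chart = defaultdict(lambda: defaultdict(int))
for part, (sequence, batches) in routings.items():
    for a, b in zip(sequence, sequence[1:]):
        chart[a][b] += batches

machines = sorted({m for seq, _ in routings.values() for m in seq})
print("from\\to " + " ".join(f"{m:3d}" for m in machines))
for a in machines:
    print(f"{a:7d} " + " ".join(f"{chart[a][b]:3d}" for b in machines))
```

The same routine, run over the parts of one family, yields the intracell chart for that cell; run over only the operations crossing cell boundaries, it yields the intercell chart of Figure 16.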
5.4. Shop layout analysis with STORM
Having completed Line Analysis, the layout for the entire shop (Shop Layout Analysis), showing the relative locations of the cells with respect to the equipment external to the cells in the same shop and to the other shops, must be planned. STORM has limited capability for Shop Layout Analysis because it cannot account for the sizes and shapes of the cells, the boundary shape of the shop floor, the locations of the Input/Output points of the individual cells, interactions between the shapes chosen for the cells and location constraints imposed by the structure of the shop floor, the configuration of the network of material handling aisles connecting the cells, supply points for electricity, compressed air or gas, and other general considerations of facility layout design. This problem is equivalent to the VLSI layout problem and has been solved using more sophisticated algorithms in the COALA package for the case of random placement of cells in the Euclidean plane. For this stage of our case study, we used only the following data: the number of cells, the list of parts involved in intercell flows, the operation sequence and batch quantity for each of these parts, and the machine type required for each external operation together with the cell in which that machine type occurs. To design the layout for the shop, a Travel Chart for the intercell flows, shown in Figure 16, was input to STORM. Assuming a linear layout for the shop, Figures 17(a)-(f) show the optimal layout for the three cells produced by STORM, with the flow paths for each of the parts moving between the cells. In the figures, the cell number 'X' for each operation is denoted by '#X' beside the machine used for that operation. For cases involving a larger number of cells, the following approach using the Distance Networks module in STORM can be adopted: generate the initial asymmetric Travel Chart for the intercell flows, as shown in Figure 18(a).
Convert this chart into a symmetric Travel Chart showing the total flow between every pair of cells, as shown in Figure 18(b). Obtain the Maximum Spanning Tree (MST) from this data, as shown in Figure 18(c), using the
Distance Networks module in STORM. This tree can then be further modified to generate a spine layout ([LANG94], [MONT91]) for the cells by adjusting their final locations, as shown in Figure 18(d). This layout will further reduce the travel distances for those flows which were not included in the MST. Note, however, that if Step 16 is successfully executed and extensive machine duplication is permitted, the intercell moves may be completely eliminated by absorbing the exception operations into the cells or by making excess capacity on the shared machines available within the cells. In this case, the shop layout would be determined by the assembly sequence for the parts being produced by the cells and by other non-flow considerations, such as the restrictions on machine duplication imposed by the SICGE categories of machine types. According to Burbidge [BURB93B], the machines in a shop can be classified into five classes:
S(pecial): There is only one of each type, and it would be very difficult to transfer the work it does onto any other machine type, e.g., bar lathes, crankshaft grinding machines, gear tooth rounding machines.
I(ntermediate): Same as S, but there is more than one of each type.
C(ommon): There are several of each type, and it is easy, if necessary, to transfer the work they do to other similar machine types, e.g., lathes, mills, drills.
G(eneral): There are few machines of each type. They are used for a high proportion of the parts or for many different types of parts. They are unlikely to be suitable for inclusion in groups, e.g., saws, x-ray machines, painting and electroplating equipment.
E(quipment): Machines used to assist manual operations, e.g., benches, vices, surface plates, manual power tools.
Machines in the E category can be duplicated among the cells. This eliminates intercell flows to a central facility or to any other cell.
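The symmetrizing and Maximum Spanning Tree steps described above can be sketched as a small Kruskal-style routine that sorts edges in descending flow order; the flow values below are illustrative assumptions, not the case-study chart.

```python
# Sketch: symmetrize an intercell Travel Chart and extract a Maximum
# Spanning Tree (Kruskal's algorithm with edges sorted by descending flow).

def symmetrize(flow):
    """Total flow between each unordered pair of cells."""
    pairs = {}
    for (a, b), f in flow.items():
        key = tuple(sorted((a, b)))
        pairs[key] = pairs.get(key, 0) + f
    return pairs

def max_spanning_tree(pairs):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    tree = []
    for (a, b), f in sorted(pairs.items(), key=lambda kv: -kv[1]):
        ra, rb = find(a), find(b)
        if ra != rb:                        # keep the edge only if it joins two components
            parent[ra] = rb
            tree.append((a, b, f))
    return tree

# assumed intercell flows: flow[(from_cell, to_cell)] = units per period
flow = {(1, 2): 500, (2, 1): 68, (1, 3): 400, (3, 1): 83, (2, 3): 12}
print(max_spanning_tree(symmetrize(flow)))
```

Cells joined by the heaviest tree edges are then placed adjacent to one another, and the tree is bent into the spine arrangement of Figure 18(d).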
Based on capacity calculations, machines in the C category can be duplicated among the cells to eliminate intercell flows by parts from different families which require these same machines. It is the machines in the remaining three categories (S, I, G) that influence the placement of the cells in the shop. Machine types in the G category need to be located in protected areas away from the cells; hence, cells which must send parts to these machines must be located close to those areas of the shop. Similarly, each machine type in the I category, perhaps because of the special skills required to operate it or because of environmental restrictions, is preferably located in a functional (or process) section accessible to all the cells. Lastly, machine types in the S category must be located in a Common Facilities cell placed centrally with respect to the other cells. Therefore, in practice, the machine duplication decisions demonstrated in Step 16 for our case study must usually be integrated with Shop Layout Analysis.
6. SCHEDULING THE INTRACELL AND INTERCELL FLOWS
Having developed different configurations for a shop by varying the number of cells, the performance of each layout could be evaluated using packages such as MPX [SURI93] and STARCELL [STEU91]. These studies are beyond the scope of PFA and this chapter.
7. CONCLUSION
This chapter sought to demonstrate the feasibility of rapid computer implementation of the first three stages of Production Flow Analysis (Factory Flow Analysis, Group Analysis and Line Analysis) in small and medium-sized companies. Each of these stages is analogous to a fundamental Operations Research problem for which effective solution algorithms are available in a commercial package such as STORM. A complete case study from the literature was analyzed, and sample results from STORM were presented in this chapter.
References
[ARVI93] Arvindh, B. (1993). Studies in the design of cellular manufacturing systems. M.S. Plan B Thesis, Department of Mechanical Engineering, University of Minnesota, Minneapolis, MN.
[BAAS88] Baase, S. (1988). Computer algorithms: Introduction to design and analysis. Reading, MA: Addison-Wesley.
[BUFF64] Buffa, E.S., Armour, G.C. & Vollmann, T.E. (1964). Allocating facilities with CRAFT. Harvard Business Review, 30(5), 136-158.
[BURB92] Burbidge, J.L. (1992). Change to GT: Process organization is obsolete. International Journal of Production Research, 42(2), 1209-1219.
[BURB93A] Burbidge, J.L. (1993). Production flow analysis for planning group technology. Journal of Operations Management, 10(1), 5-27.
[BURB93B] Burbidge, J.L. (1993). Comment on clustering methods for finding GT groups and families. Journal of Manufacturing Systems, 12(5), 428-429.
[CARR73] Carrie, A.S. (1973). Numerical taxonomy applied to group technology and plant layout. International Journal of Production Research, 11(4), 399-416.
[CHEN93] Chen, C.Y. & Irani, S.A. (1993). Cluster first-sequence last heuristics for generating block diagonal forms for a machine-part matrix. International Journal of Production Research, 31(11), 2623-2647.
[EMMO92] Emmons, H., Flowers, A.D., Khot, C.M. & Mathur, K. (1992). STORM Personal Version 3.0: Quantitative modeling for decision support. Englewood Cliffs, NJ: Prentice Hall/Allyn & Bacon.
[GALL73] Gallagher, C.C. & Knight, W.A. (1973). Group technology. London: Butterworths.
[KUSI90] Kusiak, A. (1990). Intelligent manufacturing systems (Chapter 10: Models and algorithms for machine layout). Englewood Cliffs, NJ: Prentice Hall/Allyn & Bacon.
[LANG94] Langevin, A., Montreuil, B. & Riopel, D. (1994). Spine layout design. International Journal of Production Research, 32(2), 429-442.
[LAW80] Law, S.S. (1980). Materials flow reduction through production flow analysis. Proceedings of the Eighth North American Manufacturing Research Conference, pp. 58-63, Dearborn, MI: Society of Manufacturing Engineers.
[McAU72] McAuley, J. (1972). Machine grouping for efficient production. Production Engineer, 51(2), 53-57.
[MONT91] Montreuil, B. & Venkitadri, U. (1991). Strategic interpolative design of dynamic manufacturing system layouts. Management Science, 37(6), 682-694.
[PROT91] Proth, J.M. & Vernadat, F. (1991). COALA: A new manufacturing layout approach. Proceedings of the 1991 ASME Winter Annual Meeting: Symposium on Design, Analysis and Control of Manufacturing Cells, pp. 15-29, New York, NY: American Society of Mechanical Engineers.
[RAJA75] Rajagopalan, R. (1975). Design of cellular production systems: A graph theoretic approach. International Journal of Production Research, 13(6), 567-579.
[SCHO73] Schofield, R.E. & Masey, N.C. (1973). The production flow analysis of large quantities of uncoded components by computer sampling methods. Proceedings of the Second International Conference on Production Systems, pp. 317-326.
[SHAH91] Shahookar, K. & Mazumder, P. (1991). VLSI cell placement techniques. ACM Computing Surveys, 23(2), 143-220.
[STEU91] Steudel, H.J. (1991). User's Manual for STARCELL. Madison, WI: H.J. Steudel & Associates, Inc.
[SURI93] Suri, R. (1993). Instructor's Manual for MPX. Burlington, MA: Network Dynamics, Inc.
[VAKH90] Vakharia, A.J. & Wemmerlov, U. (1990). Designing a cellular manufacturing system: A materials flow approach based on operation sequences. IIE Transactions, 22(1), 84-97.
[WREN94] Wrennall, W. & Lee, Q. (1994). Handbook of commercial and industrial facilities management. New York, NY: McGraw-Hill.
[Figure 1: Factory Flow Analysis. Legend (partially legible): 1 = Blanks Department, 2 = Sheet Metal Work, 3 = Forge, 4 = Welding Dept., 5 = Machine Shop, 6 = Assembly, plus outside work.]
[Figure 2: Group Analysis. Component-machine charts for the Forge: the initial record, and the chart after finding families and groups (Families 1-3), with exception operations marked.]
[Figure 3: Line Analysis. Group Flow Network Diagram for Group 2, and the Simplified Group Flow Network for Group 2.]
[Figure 4: Tooling Analysis [GALL73]. A composite component and the turret set-up to machine it, with a typical component range for the turret set-up station. Note: additional tools should be placed in a free position where possible, thus preserving the basic settings.]
[Figure 5(a): List of Shops in the Factory. Processes (codes partially legible): Blank Production, Sheet Metal, Forging, Welding, Machining, Assembly, Subcontracting.]
[Figure 5(b): Process Route Numbers and Production Quantities for all Parts. Thirty-six routes over process codes 1-9 (e.g., 1-2-3-4, 1-5-6, 9-6-5-6), with production quantities ranging from 1 to 142 per route.]
[Figure 5(c): Pareto Analysis of Process Route Numbers by Production Quantity. The 36 routes are ranked by quantity (142, 131, 120, 59, 46, ...), reaching a cumulative total of 610 units.]
[Figure 5(d): Pareto Chart of Process Route Numbers by Production Quantity.]
[Figure 5(e): Travel Chart developed from the data of Figures 5(a) and 5(b).]
[Figure 5(f): Factory Layout for Unidirectional Material Flows, showing the routes followed by PRN 28 and PRN 1 through Blank Production, Sheet Metal, Forging, Welding, Machining and Assembly.]
Figure 6 Essential Data for Production Flow Analysis

ROUTING DATA SHEET
Part # | Machine sequence       | Total batch time (mins) per m/c | Parts per batch (per day)
1      | 1, 4, 8, 9             | 96-36-36-72                     | 2
2      | 1, 4, 7, 4, 8, 7       | 36-120-20-120-24-20             | 3
3      | 1, 2, 4, 7, 8, 9       | 96-48-36-120-36-72              | 1
4      | 1, 4, 7, 9             | 96-36-120-72                    | 3
5      | 1, 6, 10, 7, 9         | 96-72-200-1... (illegible)      | -
6      | 6, 10, 7, 8, 9         | 36-120-60-24-36                 | 1
7      | 6, 4, 8, 9             | 72-36-48-48                     | 2
8      | 3, 5, 2, 6, 4, 8, 9    | 144-120-48-72-36-48-48          | 1
9      | 3, 5, 6, 4, 8, 9       | 144-120-72-36-48-48             | 1
10     | 4, 7, 4, 8             | 120-20-120-24                   | 2
11     | 6                      | 72                              | 2
12     | 11, 7, 12              | 192-150-80                      | 1
13     | 11, 12                 | 192-60                          | 1
14     | 11, 7, 10              | 288-180-360                     | 3
15     | 1, 7, 11, 10, 11, 12   | 15-70-54-45-54-30               | 1
16     | 1, 7, 11, 10, 11, 12   | 15-70-54-45-54-30               | 2
17     | 11, 7, 12              | 192-150-80                      | 1
18     | 6, 7, 10               | 108-180-360                     | 3
19     | 12                     | 60                              | 2
(Some entries, e.g., the batch times for Part 5 and the quantities near Parts 10-11, are illegible in the source.)
[Figure 7: Initial Machine-Part Matrix.]
[Figure 8(a): Similarity Coefficients for Machines (x 100).]
[Figure 8(b): Similarity Coefficients for Parts (x 100).]
[Figure 9: Final Machine-Part Matrix.]
[Figure 10(a): 2-cell Partition of the Block Diagonal Form (BDF), showing existing machines.]
[Figure 10(b): Elimination of Exception Operations in the 2-cell Partition, showing duplicated machines with their batch times in minutes.]
[Figure 11: 3-cell Partition of the BDF, showing existing machines.]
[Figure 12: Adjustment of the 3-cell Partition, showing existing machines.]
[Figure 13(a): Travel Chart for Cell 1 in the 3-cell Partition (intracell flows).]
[Figure 13(b): Travel Chart for Cell 2 in the 3-cell Partition (intracell flows).]
[Figure 13(c): Travel Chart for Cell 3 in the 3-cell Partition (intracell flows).]
[Figure 14: Final Cell Layouts for the 3-cell Partition (Cells 1, 2 and 3).]
[Figure 15: Cell and Shop Layouts for the 2-cell Partition. Flow paths (as legible in the source):
Part 4: I(#2) -> 2(#2) -> O(#2) -> I(#1) -> 4(#1) -> O(#1) -> I(#2) -> 7(#2) -> 9(#2) -> O(#2)
Part 6: I(#2) -> 6(#2) -> 10(#2) -> 7(#2) -> O(#2) -> I(#1) -> 8(#1) -> O(#1) -> I(#2) -> 9(#2) -> O(#2)]
[Figure 16: Travel Chart for Intercell Flows in the 3-cell Partition.]
[Figure 17(a): Flow Path for Part #10 in the 3-cell Partition: I(#1) -> 4(#1) -> O(#1) -> I(#2) -> 7(#2) -> O(#2) -> I(#1) -> 4(#1) -> 8(#1) -> O(#1)]
[Figure 17(b): Flow Path for Part #15 in the 3-cell Partition: I(#2) -> 1(#2) -> O(#2) -> I(#3) -> 7(#3) -> 11(#3) -> O(#3) -> I(#2) -> 10(#2) -> O(#2) -> I(#3) -> 11(#3) -> 12(#3) -> O(#3)]
[Figure 17(c): Flow Path for Part #2 in the 3-cell Partition: I(#1) -> 1(#1) -> 4(#1) -> O(#1) -> I(#2) -> 7(#2) -> O(#2) -> I(#1) -> 4(#1) -> 8(#1) -> O(#1) -> I(#2) -> 10(#2) -> O(#2)]
[Figure 17(d): Flow Path for Part #3 in the 3-cell Partition: I(#1) -> 1(#1) -> 2(#1) -> 4(#1) -> O(#1) -> I(#2) -> 7(#2) -> O(#2) -> I(#1) -> 8(#1) -> 9(#1) -> O(#1)]
[Figure 17(e): Flow Path for Part #4 in the 3-cell Partition: I(#2) -> 1(#2) -> O(#2) -> I(#1) -> 4(#1) -> O(#1) -> I(#2) -> 7(#2) -> 9(#2) -> O(#2)]
[Figure 17(f): Flow Path for Part #6 in the 3-cell Partition: I(#2) -> 6(#2) -> 10(#2) -> 7(#2) -> O(#2) -> I(#1) -> 8(#1) -> O(#1) -> I(#2) -> 9(#2) -> O(#2)]
I-o
1
2
3
4
5
6
FROM 1
-
-
-
1 2 3
2 3 4
568 483
.
-
172 .
-
-
2 0
.
.
.
6
576
.
.
-
-
12
54 -
.
-
-
-
380
-
.
-
-
F i g u r e 18(a) A s s y m m e t r i c T r a v e l C h a r t f o r I n t e r c e l l F l o w s
[Figure 18(b): Symmetric Travel Chart for Intercell Flows.]
[Figure 18(c): Maximum Spanning Tree generated from the Symmetric Travel Chart.]
[Figure 18(d): Shop Layout generated by manipulation of the Maximum Spanning Tree, with the cells arranged along an aisle.]
APPENDIX

STORM EDITOR: FACILITY LAYOUT MODULE
Title: Permutation of Machines
Number of departments: 12          Distance (EUCL/RECT): RECT
Number of departments down: 1      Department width: 1
Department height: 1               Symmetric matrices: BOTH
Successful evaluations: 66
[Flow matrix for machines M1-M12 (rows FLOW 1-FLOW 11, plus FIXED LOC and USER SOLN) not reproduced.]

FACILITY LAYOUT: PROCESS
1) Edit the current data set
2) Save the current data set
3) Print the current data set
4) Execute the module with the current data set
Select Option: 4

FACILITY LAYOUT: ITERATION 1
1) Go to final solution
2) Go to next solution
3) Draw layout
4) Department interaction values
5) Objective function by departments
Select Option: 1

FACILITY LAYOUT: POST SOLUTION ANALYSIS
1) Draw layout
2) Department interaction values
3) Objective function by departments
4) Generate a random initial solution and resolve
5) Perform user defined exchanges
6) Restart solution process from current solution
7) Save best solution found as user solution
Select Option: 1
Planning, Design, and Analysis of Cellular Manufacturing Systems
A.K. Kamrani, H.R. Parsaei and D.H. Liles (Editors)
© 1995 Elsevier Science B.V. All rights reserved.

A Simulation Approach for Cellular Manufacturing System Design and Analysis
Ali K. Kamrani (a), Hamid R. Parsaei (b), and Herman R. Leep (b)
(a) Department of Industrial & Manufacturing Systems Engineering, The University of Michigan-Dearborn, Dearborn, Michigan 48128-1491, USA
(b) Department of Industrial Engineering, University of Louisville, Louisville, Kentucky 40292, USA
1. ABSTRACT
Often, real-world decisions require some degree of human judgment, and these decisions may require a set of tools that can assist the decision maker. Simulation modeling, MRP, MRP II, decision trees, and linear programming are some examples of the types of tools used in Decision Support Systems. This chapter presents the application of linear programming to develop a methodology that uses design and manufacturing attributes to form machining cells. The methodology is implemented in four phases. In Phase I, parts are coded based on the proposed coding system. In Phase II, parts are grouped into families based on their design and manufacturing dissimilarities. In Phase III, the optimum number of resources (e.g., machines, tools, and fixtures) is determined and grouped into manufacturing cells based on relevant operational costs, and part families are assigned to the various cells. Finally, in Phase IV, a simulation model of the proposed system is built and analyzed. This model is executed so that data from the proposed system may be gathered and evaluated to justify the feasibility of the system by introducing real-world scenarios such as breakdowns, maintenance, and on-off shifts. The developed mathematical and simulation models are used to solve a sample production problem. The results from these models are compared and used to justify the final design of the cell. By using these modeling techniques and tools, cellular manufacturing systems can be designed, analyzed, and finally optimized.

2. GROUP TECHNOLOGY
The philosophy of group technology (GT) is an important concept in the design of flexible manufacturing systems and manufacturing cells. Group technology is a manufacturing philosophy that identifies similar parts and groups them into families. In addition to assigning unique codes to these parts, group technology developers use the part similarities during the design and manufacturing phases.
GT is not the answer to all manufacturing problems, but it is a good management technique with which to standardize efforts and eliminate duplication.
Group technology classifies parts by assigning them to different families based on their similarities in the following areas: design attributes (physical shape and size) and/or manufacturing attributes (processing sequence). Methods available for solving GT problems in manufacturing can be classified into the following categories: classification, production flow analysis, and cluster analysis. Classification and cluster analysis are the two most widely used.
2.1. Classification
Classification is defined as a process of grouping parts into families based on some set of principles. This approach can be further categorized into the visual (ocular) method and the coding procedure. Grouping based on the ocular method is a process of identifying part families by visually inspecting parts and assigning them to families and the production cells to which they belong. This approach is limited to parts with large physical geometries, and it is not an optimal approach because it lacks accuracy and sophistication. The coding method of grouping is considered to be the most powerful and reliable method. In this method, each part is inspected individually by means of its design and processing features. Coding can be defined as a process of tagging parts with a set of symbols that reflect the part's characteristics. A part's code can consist of a numerical, alphabetical, or alpha-numerical string. Three types of coding structures exist. These structures are described below.
• Hierarchical (Monocode) Structure: In this structure, each digit is a further expansion of the previous digit, so the meaning of a digit depends on the meaning of the previous digit in the code string. The advantage of this method is the amount of information that the code can represent in a relatively small number of digits. However, a coding system based on this structure is complicated and very difficult to implement.
• Chain (Attribute or Polycode) Structure: In this structure, the meaning of each digit is independent of any other digit within the code string. Each attribute of a part is tagged with a specific position in the code. This structure is simple to implement, but a large number of digits may be required to represent the characteristics of a part.
• Hybrid Structure: Most of the available coding systems are implemented using the hybrid structure. A hybrid coding system is a combination of the monocode and polycode structures, taking advantage of the best characteristics of the two previously described structures.
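As an illustration of the chain structure, a polycode can be built by concatenating one digit per attribute position. The attribute tables below are invented for the example and are not the chapter's 18-digit system:

```python
# Sketch: a chain (polycode) structure. Each position in the code string
# has a fixed, independent meaning; attribute tables are illustrative.

SHAPE = {"rotational": "1", "nonrotational": "2"}
MATERIAL = {"aluminum": "1", "steel": "3", "plastic": "5"}
MAX_DIAMETER = {"small": "1", "medium": "2", "large": "3"}

def polycode(shape, material, diameter):
    # Position 1 = shape, position 2 = material, position 3 = diameter;
    # the meaning of each digit never depends on the digits before it.
    return SHAPE[shape] + MATERIAL[material] + MAX_DIAMETER[diameter]

print(polycode("rotational", "steel", "small"))   # -> "131"
```

A monocode would instead choose the interpretation of digit 2 according to the value of digit 1, which is what makes it compact but harder to implement.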
2.2. Clustering
The clustering process involves the grouping of similar objects. This approach has been practiced for many years. The method requires the calculation of a clustering factor, known as the similarity or dissimilarity coefficient, using a clustering criterion as an objective in order to optimize system performance. Similarity and dissimilarity coefficients are calculated values that represent the relationships between parts. Most research in this area assumes that these coefficients range from 0 to 1, so that dissimilarity coefficient = 1.0 - similarity coefficient, and vice versa. Clustering methods for the grouping of parts and the design of manufacturing cells have gained the attention of both researchers and practitioners. Both hierarchical and nonhierarchical clustering methods are available. The hierarchical method results in a graph known as a dendrogram, which illustrates the data grouped into smaller clusters based on their similarity or dissimilarity measures. The hierarchical method takes two forms, agglomerative and divisive. In the agglomerative approach, the procedure begins with m objects that are to be classified; at each step, the two most similar objects or clusters are merged into a single cluster, so that after m-1 steps all objects belong to one large cluster. Many such methods exist, differing in the criteria used to decide which elements or clusters should be merged and in how the similarity between a newly formed cluster and other clusters or objects is defined. Alternatively, the structure of the set of objects can be obtained by dividing the set into two or more subsets and continuing the division until all objects have been completely separated; this is known as the divisive method. The divisive method has been studied and used much less than the agglomerative procedures. The nonhierarchical method uses partitioning clustering algorithms to search for a division of a set of objects into a number of clusters, k, such that the elements of the same cluster are close to each other and the different clusters are well separated. Because the k clusters are generated simultaneously, the resulting classification is nonhierarchical. The number of clusters can be either given or determined by an optimization algorithm. When k is unknown, several iterations of the algorithm can provide several values of k.
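The agglomerative procedure can be sketched with a Jaccard-style similarity coefficient on the sets of machines each part visits; the data and the stopping threshold below are illustrative assumptions:

```python
# Sketch: agglomerative clustering of parts by similarity coefficient.
# similarity = |shared machines| / |machines used by either part|
# (a Jaccard-style coefficient; dissimilarity = 1 - similarity).

def similarity(a, b):
    return len(a & b) / len(a | b)

def agglomerate(parts, threshold):
    """Merge the two most similar clusters until no pair meets the threshold."""
    clusters = [({p}, set(ms)) for p, ms in parts.items()]
    while len(clusters) > 1:
        i, j = max(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: similarity(clusters[ij[0]][1], clusters[ij[1]][1]),
        )
        if similarity(clusters[i][1], clusters[j][1]) < threshold:
            break
        names, machines = clusters.pop(j)       # merge cluster j into cluster i
        clusters[i] = (clusters[i][0] | names, clusters[i][1] | machines)
    return [sorted(names) for names, _ in clusters]

# machine sets visited by each part (assumed data)
parts = {1: {1, 4, 8}, 2: {1, 4, 7}, 3: {11, 12}, 4: {7, 11, 12}}
print(agglomerate(parts, threshold=0.4))
```

Recording the coefficient at which each merge occurs is what produces the dendrogram; cutting the dendrogram at a chosen coefficient yields the part families.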
In this way, it is possible to analyze the performance of the developed clusters and select the optimal one.

3. CELLULAR MANUFACTURING
The main objective of designing manufacturing cells is to develop a production environment of machining centers, arranged either as a line or in cells and operated manually or automatically, for the production of part families that are grouped according to similarities in their design and manufacturing features. This type of manufacturing is known as cellular manufacturing and is used for manufacturing products in batches. Cellular manufacturing, or group production, fosters an environment in which the cost effectiveness of mass production and the flexibility of job-shop production can both be achieved for batch production. This approach requires the analysis of parts and the selection of part families whose attributes are similar enough to allow processing with few or no changeovers. When the families are created and the lot sizes are acceptable, the layout of the cell can be established and production of the part families can start. The advantages derived from using cellular manufacturing include the reduction of work in process (WIP), improved quality, better utilization of high-investment machines, and a reduction of scrapped parts. A number of methodologies have been developed for grouping parts and designing machine cells. The next section introduces a new design methodology using a customized coding system for the design of flexible manufacturing cells.
4. A NEW DESIGN METHODOLOGY FOR MACHINE CELL FORMATION

A new methodology is proposed and implemented. The design is described below.

4.1. Coding System and its Structure

A new coding system is proposed for part code assignment. The required information for this coding system can be easily retrieved from the firm's design and manufacturing databases. This system consists of 18 digits and is based on a hybrid structure. The attributes and components used for this coding structure are as follows:
Attribute 1: General Shape of the Part
  Rotational:
  - (CA1-1) R-Bar
  - (CA1-2) R-Tube
  - (CA1-3) R-Hexagonal Bar
  Nonrotational:
  - (CA1-4) NR-Plate
  - (CA1-5) NR-Square Bar
  - (CA1-6) NR-Sheet Plate
  - (CA1-7) NR-Rectangular Bar

Attribute 2: Material
  - (CA2-1) Aluminum Alloys
  - (CA2-2) Copper-Zinc Alloys
  - (CA2-3) Steels
  - (CA2-4) Cast Irons
  - (CA2-5) Plastics

Attribute 3: Maximum Diameter
  - (CA3-1) D ≤ 0.75 in.
  - (CA3-2) 0.75 < D ≤ 1.50 in.
  - (CA3-3) 1.50 < D ≤ 4.00 in.
  - (CA3-4) D > 4.00 in.
  - (CA3-5) N/A (Nonrotational Part)

Attribute 4: Overall Length
  - (CA4-1) L ≤ 6 in.
  - (CA4-2) 6 < L ≤ 18 in.
  - (CA4-3) 18 < L ≤ 60 in.
  - (CA4-4) L > 60 in.

Attribute 5: Diameter of Inside Hole
  - (CA5-1) d ≤ 0.5 in.
  - (CA5-2) 0.5 < d ≤ 1.0 in.
  - (CA5-3) 1.0 < d ≤ 5.0 in.
  - (CA5-4) d > 5.0 in.
  - (CA5-5) No Hole

Attribute 6: Product Type
  - (CA6-1) Commercial
  - (CA6-2) Electrical
  - (CA6-3) Industrial
  - (CA6-4) Military
  - (CA6-5) Special
  - (CA6-6) Other

Attribute 7: Number of Processing Steps

Attribute 8: Processing Type and Sequence
  - (CA8-1) Turning
  - (CA8-2) Drilling
  - (CA8-3) Reaming
  - (CA8-4) Boring
  - (CA8-5) Tapping
  - (CA8-6) Milling
  - (CA8-7) Grinding
  - (CA8-8) Broaching
  - (CA8-9) Sawing

Attribute 9: Minimum Number of Machines Required for Processing

Attribute 10: Processing Machine Type
  - (CA10-1) CNC Turning
  - (CA10-2) CNC Drilling/Tapping
  - (CA10-3) Vertical/Horizontal CNC Milling
  - (CA10-4) External/Internal Grinding
  - (CA10-5) Broaching
  - (CA10-6) Band/Circular Sawing

Attribute 11: Number of Tool Types

Attribute 12: Tool Type and Sequence
  - (CA12-1) Insert
  - (CA12-2) Twist Drill
  - (CA12-3) Adjustable Reamer
  - (CA12-4) Adjustable Boring Bar
  - (CA12-5) Tap
  - (CA12-6) Milling Cutter
  - (CA12-7) Grinding Wheel
  - (CA12-8) Broach
  - (CA12-9) Band/Circular Saw Blade
Attribute 13: Number of Fixture/Jig Types

Attribute 14: Fixture/Jig Type
  - (CA14-1) Special Fixture
  - (CA14-2) Multipurpose Fixture
  - (CA14-3) Adjustable Fixture
  - (CA14-4) Rotational Adjustable Fixture
  - (CA14-5) Nonrotational Adjustable Fixture
  - (CA14-6) Special Jig
  - (CA14-7) Multipurpose Jig
  - (CA14-8) Adjustable Jig

Attribute 15: Number of End Operations

Attribute 16: End Operation Type and Sequence
  - (CA16-1) Clean
  - (CA16-2) Polish
  - (CA16-3) Buff
  - (CA16-4) Coat
  - (CA16-5) Paint
  - (CA16-6) Assemble
  - (CA16-7) Pack
  - (CA16-8) Inspect

Attribute 17: Minimum Number of Devices Required for End Operations

Attribute 18: Devices Used for End Operations
  - (CA18-1) Dip Tank
  - (CA18-2) Disk Grinder
  - (CA18-3) Process Robot
  - (CA18-4) Material Handling Robot
  - (CA18-5) Painting Robot
  - (CA18-6) Assembly Robot
  - (CA18-7) Packaging Machine
  - (CA18-8) Vision System

4.2. Part-Family Formation

By examining the structure of the proposed coding system, four types of variables (binary, nominal, ordinal, and continuous) are identified. The linear disagreement index between parts i and j for attribute k, where k is either a binary or nominal variable, is measured by the following:

    dijk = 1, if Rik ≠ Rjk
           0, otherwise                                              (1)

where

    dijk = disagreement index between parts i and j for attribute k
    Rik = rank of part i for attribute k
    Rjk = rank of part j for attribute k.
The linear disagreement index for an ordinal variable is determined by the following equation:

    dijk = |Rik - Rjk| / (m - 1)                                     (2)

where

    m = number of classes for attribute k
    m - 1 = maximum rank difference between parts i and j.
The linear disagreement index for a continuous variable is determined by the following equation:

    dijk = |Rik - Rjk| / Xk                                          (3)

where Xk is the range of values for the variable. The linear disagreement index for Attributes 1, 2, and 6 is calculated using Eq. (1) because the general shape of the part is considered to be a binary variable and the material and product types are considered to be nominal variables. The linear disagreement index for Attributes 3, 4, and 5 is calculated using Eq. (2) because there is a class ranking associated with these variables and, therefore, they are considered to be ordinal variables. The linear disagreement index for the processing and end-operation sequences is calculated using McAuley's equation as follows:

    dij = 1 - Σo (qio * qjo) / Σo (qio + qjo - qio * qjo)            (4)

where

    qio = 1, if part i requires operation o
          0, otherwise.

The linear disagreement index for tools and fixtures can be calculated by the following:

    dij = (NTi + NTj - 2NTij) / (NTi + NTj)                          (5)

where

    NTi = number of tools required by part i
    NTj = number of tools required by part j
    NTij = number of tools common to both parts i and j.

The linear disagreement index for processing machines and end-operation devices is calculated using the Hamming metric as follows:
    dij = Σm δ(Xim, Xjm)                                             (6)

where

    Xim = 1, if part i is made on machine m
          0, otherwise

and

    δ(Xim, Xjm) = 1, if Xim ≠ Xjm
                  0, otherwise.

After the evaluation of these parameters, the analyst can assign weights to represent his or her subjective evaluation of the variables and group parts based on their assigned priority. These weights can be categorized as critical (1.00), very important (0.75), important (0.50), and not important (0.25). Finally, the weighted dissimilarity measure (DISij) between parts i and j can be determined by the following:

    DISij = Σk (Wk * dijk) / Σk Wk                                   (7)

where

    DISij = weighted dissimilarity coefficient between parts i and j
    Wk = weight assigned to attribute k
    dijk = disagreement index between parts i and j for attribute k.
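A minimal sketch of how Eqs. (1), (2), and (7) combine in practice. The two example parts, their attribute ranks, and the weights below are hypothetical, chosen only to exercise the formulas.

```python
def d_nominal(r_i, r_j):
    """Eq. (1): 1 if the ranks differ, 0 otherwise."""
    return 1.0 if r_i != r_j else 0.0

def d_ordinal(r_i, r_j, m):
    """Eq. (2): rank difference scaled by the maximum difference m - 1."""
    return abs(r_i - r_j) / (m - 1)

def weighted_dissimilarity(d, w):
    """Eq. (7): weighted average of the per-attribute disagreement indices."""
    return sum(wk * dk for wk, dk in zip(w, d)) / sum(w)

# Part i vs. part j on three attributes: shape (nominal), material (nominal),
# and maximum diameter (ordinal with 5 classes, as in Attribute 3).
d = [d_nominal(1, 1),      # same shape class   -> 0.0
     d_nominal(3, 2),      # different material -> 1.0
     d_ordinal(1, 3, 5)]   # two classes apart  -> 0.5
w = [1.00, 0.75, 0.50]     # critical, very important, important
print(round(weighted_dissimilarity(d, w), 4))  # → 0.4444
```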
In order to select the part families, the K-median formulation for nonhierarchical clustering is used to minimize the total sum of dissimilarities as follows:

    Minimize Σi Σj DISij * xij       for all i,j = 1,2,...,p

    subject to

    Σj xij = 1                       for all i = 1,2,...,p
    Σj xjj = K
    xij ≤ xjj                        for all i,j = 1,2,...,p         (8)

where

    p = number of parts
    K = required number of part families
    DISij = dissimilarity measure between parts i and j, DISij = DISji
    xij = 1, if part i is a member of group j
          0, otherwise.
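The grouping model of Eq. (8) can be illustrated by brute force on a small instance: every choice of K medians is enumerated, each part joins its nearest median, and the assignment with the lowest total dissimilarity is kept. The dissimilarity matrix below is hypothetical, and full enumeration is practical only for small p; the chapter's example would be solved with an integer-programming solver instead.

```python
from itertools import combinations

def k_median(dis, k):
    """Return (cost, assignment) minimizing total dissimilarity (Eq. 8)."""
    p = len(dis)
    best = None
    for medians in combinations(range(p), k):
        # Each part i joins the median j it is least dissimilar to (x_ij = 1).
        assign = [min(medians, key=lambda j: dis[i][j]) for i in range(p)]
        cost = sum(dis[i][assign[i]] for i in range(p))
        if best is None or cost < best[0]:
            best = (cost, assign)
    return best

dis = [[0.0, 0.2, 0.8, 0.9],
       [0.2, 0.0, 0.9, 0.8],
       [0.8, 0.9, 0.0, 0.1],
       [0.9, 0.8, 0.1, 0.0]]
cost, assign = k_median(dis, 2)
print(round(cost, 2), assign)  # → 0.3 [0, 0, 2, 2]
```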
4.3. Machine Cell Formation

In this section of the methodology, the main objectives are to determine the number of machines, tools, and cells and to assign the proper families to the proper cells. This phase is more an economic issue than a design one. The methodology takes into account several relevant operating costs and aims to minimize their total sum. This study uses several costs, including the following:
- Machine investment cost
- Tool investment cost
- Fixture investment cost
- Machine production cost
- Inspection cost
- Setup cost
- Intracell material handling cost

A mixed-integer mathematical model for the overall operating cost was developed and used in this part of the methodology. A number of assumptions were made in order to develop this mathematical model. These assumptions include the following:

- Operating time for each part is known.
- Demand for each part is known.
- Machine types can be placed in any selected cell.
- Each part has a fixed routing.
- No intercellular material handling is allowed.
- Inspection time for each part is known, and each part is inspected after being processed on a machine type.

The mathematical model developed for this stage is as follows:

Index sets

    o: Operation       o = 1,2,...,O
    p: Part            p = 1,2,...,P
    m: Machine type    m = 1,2,...,M
    t: Tool type       t = 1,2,...,T
    f: Fixture type    f = 1,2,...,F
    g: Part group      g = 1,2,...,G
    c: Cell            c = 1,2,...,C, with C ≤ G
Machine investment cost

    MIC = Σm Σc CMm * NMmc                                           (9)

where

    CMm = annual investment and maintenance costs for machine type m
    NMmc = number of machines of type m assigned to cell c.
Tool investment cost

    TIC = Σt Σm Σc CTt * NTtmc * σtm                                 (10)

where

    CTt = annual investment and maintenance costs for tool type t
    NTtmc = number of tools of type t assigned to machine type m in cell c
    σtm = 1, if tool type t is required by machine type m
          0, otherwise.
Fixture investment cost

    FIC = Σf Σm Σc CFf * NFfmc * σfm                                 (11)

where

    CFf = annual investment and maintenance costs for fixture type f
    NFfmc = number of fixtures of type f assigned to machine type m in cell c
    σfm = 1, if fixture type f is required by machine type m
          0, otherwise.
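Eqs. (9) through (11) are weighted sums over the assignment variables; the sketch below evaluates two of them with hypothetical cost figures and assignments (not the chapter's example data).

```python
def machine_investment(CM, NM):
    """Eq. (9): MIC = sum over m, c of CM[m] * NM[m][c]."""
    return sum(CM[m] * NM[m][c]
               for m in range(len(CM)) for c in range(len(NM[m])))

def tool_investment(CT, NT, sigma):
    """Eq. (10): TIC = sum over t, m, c of CT[t] * NT[t][m][c] * sigma[t][m]."""
    return sum(CT[t] * NT[t][m][c] * sigma[t][m]
               for t in range(len(CT))
               for m in range(len(sigma[t]))
               for c in range(len(NT[t][m])))

CM = [30000, 42000]                # annual cost per machine type
NM = [[1, 0], [2, 1]]              # machines of each type in cells 1 and 2
print(machine_investment(CM, NM))  # → 30000*1 + 42000*(2 + 1) = 156000
```

Eq. (11) has the same shape as Eq. (10) with fixtures in place of tools.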
Machine production cost

In order to calculate the machine production cost, the processing time required for each part type should be determined on all machine types required for its production. This time depends on the type of operation, its processing time, and the machine performance rates for that particular operation, because a process can be performed by more than one machine type. Given the total demand for a particular part, the total machining time required to meet the annual demand of the part is calculated as:

    TMDpm = Σo PRFom * TOop * dp * δomp       for all m,p            (12)

where

    TMDpm = total time required to meet annual demand of part p on machine type m
    PRFom = machine performance rate for operation o on machine type m
    TOop = time to complete operation o on part p
    dp = total number of parts p demanded annually
    δomp = 1, if operation o of part p requires machine type m
           0, otherwise.

Because it is assumed that each part is inspected after each operation, the cost of rework and machine reliability should be included in the equation. This machine production cost (MPC) is calculated by the following expression:

    MPC = Σp Σm Σc [CPpm * Rm + CRWm * (1 - Rm)] * Bpm * TMDpm * Ypc (13)

where

    CPpm = cost of processing part p on machine type m
    Rm = machine type m reliability
    CRWm = rework cost on machine type m
    Bpm = 1, if part p is processed on machine type m
          0, otherwise
    Ypc = 1, if part p is assigned to cell c
          0, otherwise.
Inspection cost

    IC = Σp Σm Σo CIpm * TIpm * dp * δomp * NMmc                     (14)

where

    CIpm = inspection cost of part p after being processed on machine type m
    TIpm = inspection time required to inspect part p after being processed on machine type m
    dp = total number of parts p demanded annually
    δomp = 1, if operation o of part p requires machine type m
           0, otherwise
    NMmc = number of machines of type m assigned to cell c.

Setup cost

    SC = Σm Σg Σc TSmgc * CSmgc * πmg * Xgc                          (15)

where

    TSmgc = average setup time for machine type m for group g in cell c
    CSmgc = average setup cost for machine type m for group g in cell c
    πmg = 1, if machine type m is used for group g production
          0, otherwise
    Xgc = 1, if group g is assigned to cell c
          0, otherwise.

Intracell material handling cost
    IMHC = Σp Σc (NMVp - 1) * dp * CHpc * Ypc                        (16)

where

    NMVp = number of moves for part p
    dp = number of parts p demanded annually
    CHpc = average material handling cost of part p in cell c
    Ypc = 1, if part p is assigned to cell c
          0, otherwise.

The objective of this mathematical model is to minimize the total sum of the costs described above. The constraints for this mathematical model are formulated as follows.

Budget

The following constraints restrict the amount of capital expenditure to the annual budgets for machines, tools, inspection, and material handling.
    Σm Σc CMm * NMmc ≤ BM                                            (17)

    Σt Σm Σc CTt * NTtmc * σtm ≤ BT                                  (18)

    Σf Σm Σc CFf * NFfmc * σfm ≤ BF                                  (19)

    Σp Σm Σo CIpm * TIpm * dp * δomp * NMmc ≤ BI                     (20)

    Σp Σc (NMVp - 1) * dp * CHpc * Ypc ≤ BMH                         (21)

where

    BM = budget available to purchase and maintain machines of all types
    BT = budget available to purchase and maintain tools of all types
    BF = budget available to purchase and maintain fixtures of all types
    BI = budget available for inspection of all parts
    BMH = budget available for material handling of all parts in all cells.
Machine Capacity

Equations for this constraint ensure that the capacity of each machine type in each cell is not exceeded. If it is exceeded, the number of duplicate machines required is calculated and proposed to meet the annual demand for parts.

    Σp Bpm * TMDpm * Ypc ≤ TPm * NMmc        for all m,c             (22)

where

    Bpm = 1, if part p is processed on machine type m
          0, otherwise
    TMDpm = total time required to meet annual demand of part p on machine type m
    Ypc = 1, if part p is assigned to cell c
          0, otherwise
    TPm = total annual processing time available on machine type m
    NMmc = number of machines of type m assigned to cell c.
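One way to read Eq. (22) is as a sizing rule: the machines of a type in a cell must cover the workload assigned to them, so the smallest feasible NMmc is the workload divided by capacity, rounded up. The helper and its data below are hypothetical.

```python
import math

def machines_needed(workloads, TP):
    """Smallest NM_mc satisfying Eq. (22): sum of workloads <= TP * NM_mc."""
    total = sum(workloads)   # sum over p of B_pm * TMD_pm * Y_pc, in minutes
    return max(1, math.ceil(total / TP))

# Three parts routed to one machine type in a cell; 102,000 min/yr available.
print(machines_needed([60000, 45000, 30000], 102000))  # → 2
```

The tool-life constraint of Eq. (23) below yields duplicate tool counts by the same rounding-up argument.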
Tool Life

Equations developed in this section ensure that the tool life of each tool type on each machine assigned in each cell is not violated. If the cutting time exceeds the life of the tool in question, the number of duplicate tools required is calculated.

    Σp Bpm * β'tp * σtm * TMDpm * Ypc ≤ TLt * NTtmc    for all m,c,t (23)

where

    Bpm = 1, if part p is processed on machine type m
          0, otherwise
    TMDpm = total time required to meet annual demand of part p on machine type m
    β'tp = 1, if tool type t is required by part p
           0, otherwise
    σtm = 1, if tool type t is required by machine type m
          0, otherwise
    Ypc = 1, if part p is assigned to cell c
          0, otherwise
    TLt = total life of tool type t
    NTtmc = number of tools of type t assigned to machine type m in cell c.
Machine-Fixture Balance

Since each processing machine requires at least one fixture for production, this constraint ensures the minimum number of required fixtures in case of machine duplication:

    NFfmc * σfm ≥ NMmc                                               (24)
Cell Capacity

In order to have a high degree of flexibility in each cell, a limit must be set for the total number of parts assigned to each cell. This constraint is formulated as follows:

    Σp dp * Ypc ≤ ICc        for all c                               (25)

where

    dp = total number of parts p demanded annually
    Ypc = 1, if part p is assigned to cell c
          0, otherwise
    ICc = maximum number of parts allowed in cell c.
Part-Group Assignment

In order to ensure the assignment of each part family to only one cell and the assignment of the parts within these families to the same cell, the following constraints are proposed:

    Σc Xgc = 1               for all g                               (26)

    μpg * Ypc ≤ Xgc          for all p,g,c                           (27)

where

    Xgc = 1, if group g is assigned to cell c
          0, otherwise
    μpg = 1, if part p is a member of group g
          0, otherwise
    Ypc = 1, if part p is assigned to cell c
          0, otherwise.
Binary and integrality conditions of decision variables

    Xgc ∈ {0,1}              for all g,c                             (28)
    Ypc ∈ {0,1}              for all p,c                             (29)
    NMmc ≥ 0 and integer     for all m,c                             (30)
    NTtmc ≥ 0 and integer    for all t,m,c                           (31)
    NFfmc ≥ 0 and integer    for all f,m,c                           (32)
5. NUMERICAL EXAMPLE Production of 15 parts is required. These parts require 15 operations (nine process operations and six end operations). There are eight process machines and five end-operation devices available. A rating factor of 1 is assumed for all machines performing the process operations. This assumption indicates that the machine selected is the most suitable one for performing the process operations. Nine types of tools are available for the process operations. Using the proposed formulation in Stage II, and by setting K to be 4, parts and their associated families are as follows: G1(1,3,5), G2(2,4,6,8,11,14), G3(7,9,10,15), and G4(12,13). Table 1 lists the annual investment and maintenance costs associated with each machine and tool. It also contains the annual available machining time on each machine and the tool life associated with each tool. The annual demand for each part is given in Table 2. Machine reliability is illustrated in Table 3. The inspection time and cost are assumed to be similar for all parts, and all machines have the same rework cost. Setup cost and time are assumed to be similar for all machines. The intracellular material handling cost associated with parts is similar in all cells. Table 4 illustrates these values. The annual budgets for machines, tools, inspection, and material handling, and the upper limits for the number of parts in each cell, in order to maintain cell flexibility, are also illustrated in Table 4. The model is solved by using LINDO software and the results are listed in Tables 5, 6, 7, and 8. 6. THE SIMULATION MODEL The output from the mathematical model forms the numerical basis for the simulation model. Further assumptions for development of the simulation model are required. The incorporated assumptions in this model are as follows: Operational times (for both processing and end operations) are represented by exponentially distributed random variables. 
The cell produces part types in random sequence, but each part type is produced in proportion to its share of overall demand. The machine cell is operated for 20 hours out of every 24-hour period, with the remaining four hours being devoted to preventive maintenance, tool
Table 1
CMm, TPm, CTt, and TLt values

    Machine Type   CMm       TPm (min)
    1              $30,000   102,000
    2              $42,000   139,000
    3              $24,000   111,000
    4              $35,000   114,000
    5              $41,000   140,000
    6              $22,000   100,000
    7              $20,750   162,000
    8              $30,000   156,000
    9              $22,460   130,000
    10             $27,000   155,000
    11             $32,000    90,000
    12             $26,000    99,000
    13             $30,490    89,000

    Tool Type      CTt       TLt (min)
    1              $2538     490
    2              $2576     444
    3              $2526     430
    4              $2436     427
    5              $2562     488
    6              $2334     413
    7              $2454     412
    8              $2154     414
    9              $2244     442

Table 2
Annual demands for various parts (dp)

    Part Type   dp
    1           1728
    2           2000
    3           2145
    4           1729
    5           1948
    6           2263
    7           2226
    8           2236
    9           2160
    10          1777
    11          1758
    12          1824
    13          2089
    14          1929
    15          2308

Table 3
Machine reliability (Rm)

    Machine Type   Rm
    1              93%
    2              85%
    3              94%
    4              89%
    5              75%
    6              85%
    7              95%
    8              94%
    9              97%
    10             85%
    11             82%
    12             85%
    13             91%
Table 4
Data for the sample problem

    TIpm = 0.5 min/part       BM = $1,500,000
    CIpm = $1.50/min/part     BT = $15,000,000
    CHpc = $2.00              BI = $750,000
    CRWm = $6.00              BMH = $250,000
    TSmgc = 15.0 min          IC1,2 = 12,000 parts
    CSmgc = $5.00             IC3,4 = 10,000 parts

Table 5
Cell configuration

    Cell Number   Family Number   Part Numbers
    1             1, 4            1, 3, 5, 12, 13
    2             2               2, 4, 6, 8, 11, 14
    3             3               7, 9, 10, 15

Table 6
Number of machine types and their assignments

    [Rows garbled in the source. The legible machine-type lists, one per cell, are: 1, 2, 3, 8, 9, 10, 11, 12; 5, 8, 9, 10, 11, 12, 13; and 1, 3, 4, 8, 9, 10, 11, 12, 13. The number-of-duplicates column is not recoverable.]

Table 7
Number of tool types and their assignments

    [Tool type, machine type, and cell columns are not recoverable. The duplicate counts listed in the source are: 176, 194, 404, 91, 263, 137, 83, 49, 196, 189, 136, 763, 51, 71, 78, 318, 121, 117, 73, 227.]

Table 8
Number of fixture types and their assignments

    [Fixture type, machine type, cell, and duplicate columns are not recoverable from the source.]
reconditioning, setup, and other activities of this nature. The cell is operated six days per week. A transient period is incorporated into every simulation run. A duration of 400 simulation hours is estimated. This value was determined by observing the behavior of several random variables and choosing the time at which these values reached steady state. The preprocess buffer capacities for processing machines are set at 100 parts. Queue capacities for inspection stations and end-operation machines are set at 50 parts. Parts are removed from queues for processing according to the first-in-first-out rule. Each processing operation has a corresponding inspection operation. These inspection operations have durations that are exponentially distributed, each with a mean of 0.5 minute. If more than one operation is performed on a part as it is processed by a particular machine, then a corresponding number of inspections will be performed sequentially on that part at the inspection station. In this case, the inspection time will be distributed as the sum of the appropriate number of exponential functions, each with a mean of 0.5 minute. In other words,

    duration = Erlang(0.5 minute, n)

where

    n = cumulative number of exponential distributions
    0.5 min = mean of each exponential distribution.

There are no inspections after end operations. The nature of the end operations being performed is such that they are completed successfully virtually 100% of the time. The probability of any part failing inspection is given by the reliability of the machine on which the part was processed. The probability of a part failing inspection after undergoing multiple processes on a single machine is given by 1 - R^n, where

    R = machine reliability
    n = number of processing operations performed on a single processing machine.

Parts failing inspection are returned to the proper processing machine for rework. Parts may be reworked twice before being scrapped.
Rework operations are assumed to take the same amount of time as the original processing operation. As a result, the cost of the rework operation, in terms of time, tool wear, and machine operating cost, is exactly the same as the cost of producing a "raw" part. When parts are returned to the queues serving processing machines for rework, they will not be given priority, but will instead be placed at the end of the queue according to the first-in-first-out scheduling rule.
The simulation model does not attempt to account for fixture wear, replacement, or duplication. Both processing and end-operation machines are assumed to break down periodically as the result of machine failures, part jams, broken tools, and so forth. The only exception is the dip tank, which (due to its simplicity) is assumed never to malfunction. Machine breakdowns are the most significant source of randomness in discrete manufacturing processes. Parts undergoing process operations at the time that a machine breakdown occurs are not damaged. Parts being processed at the time of a breakdown are completed before the machine is shut down. These parts have the same probability of meeting specifications as parts that were processed when no breakdown occurred. The time intervals between breakdowns are based on "calendar time" (elapsed simulation time), not on machine busy time (time that the machine is actually in operation). Assuming the Central Limit Theorem applies, the time intervals between breakdowns are modeled with normal distributions. The practical effect of using the normal distribution is that the duration of the interval between breakdowns can vary significantly from one breakdown to the next. The mean time between breakdowns for processing machines is assumed to be 325 hours. The standard deviation of the normal distribution used to model this time interval is assumed to be one tenth of the mean, or 32.5 hours. The mean time between breakdowns for end-operation machines is assumed to be twice that of the interval between breakdowns for processing machines, or 650 hours. The standard deviation of the distribution used for these machines is likewise taken to be twice the value of the standard deviation used for processing machines, or 65 hours. Repair of machines is assumed to occur in the following phases: diagnosis, disassembly, and reassembly.
The time required to perform each of these tasks is assumed to be exponentially distributed, and each phase is assumed to last approximately two hours. The entire repair procedure is modeled as an activity with a duration following the Erlang distribution. The exponential distributions contributing to this Erlang distribution each have a mean value of two hours, and the number of contributing distributions is three. Machine capacity is known for each machine. When a machine has reached a time-in-operation which exceeds its life, it is duplicated. Tool life is assumed to be known for each type of tool. Tools are replaced when their time-in-operation exceeds the known life of the tool.
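The repair-time assumption above can be sketched by summing three exponential phase durations, which yields the stated Erlang distribution. The seed and sample size below are arbitrary, not from the chapter.

```python
import random

def repair_time(rng, phases=3, mean_per_phase=2.0):
    """One Erlang(mean_per_phase, phases) draw as a sum of exponentials."""
    return sum(rng.expovariate(1.0 / mean_per_phase) for _ in range(phases))

rng = random.Random(42)
draws = [repair_time(rng) for _ in range(10000)]
print(round(sum(draws) / len(draws), 1))  # sample mean, close to 6 hours
```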
7. ANALYSIS OF SIMULATION OUTPUT

SLAMSYS software was selected to perform this analysis because of its ease of use and capabilities. Figures 1, 2, 3, 4, 5, and 6 illustrate some sample SLAMSYS networks developed for modeling the machining cell.
[Figure 1. Creation of entities and assignment of characteristics for Cell 1]

[Figure 2. Disjoint network to model tool replacement due to excessive time-in-operation]

[Figure 3. Disjoint network to model breakdowns of CNC mills]

[Figure 4. Disjoint network to model the on-shift-off-shift operations for Cell 1]

[Figure 5. End operations, collection of statistics, and termination of entities for part Type 2]

[Figure 6. Machining and inspection operations for part Type 2 in Cell 1]
7.1. Determination of Transient Period

Figure 7 depicts the transient behavior of the simulation model after start-up. The variable chosen to plot against time is the time between arrivals for entities arriving at COLCT (statistics collection) Node 2. This is the final node that entities representing part Type 4 pass through before leaving the system. Several test runs were made during which records of individual times between arrivals at COLCT Node 2 were kept. In each of these runs, the oscillations of this variable appeared to reach steady state at t = 375 hours. An average (μ) and standard deviation (σ) for the data collected for t ≥ 375 hours were estimated at μ = 2.015 hours and σ = 1.462 hours. If only random variation is present in the system, then there is a high (> 99%) probability that all values recorded will fall within the interval 0 ≤ TBA ≤ (μ + 3σ), where TBA represents the time between arrivals.
[Figure 7. Warm-up period estimation: time between arrivals at COLCT Node 2 versus simulation time; for t ≥ 375 hours, μ = 2.015, σ = 1.462, and μ + 3σ = 6.402]

All data collected after t = 375 hours fell into the range calculated, indicating that any bias due to initial start-up conditions had been damped out and that the system had reached steady state. On the basis of this analysis, a warm-up period of 400 hours was chosen.

7.2. Selection of the Optimum Cell Configuration

Several simulation runs were executed for each of the configurations tested, each of one year (simulated time) in duration. The configuration for the cell was chosen such that production targets for each part type were met or exceeded using the smallest possible number of machines of each type. In cases where production of a part type exceeded the demand, it was assumed that excess product could be stocked as inventory. Another possibility is that the machine cell could be shut down more often for additional preventive maintenance, or used for the production of other products. A third option is that the part mix may be varied slightly if needed, with more
of one part type being produced while production of a second part type is decreased. In any event, the slight over-capacity built into the system provides management with a source of additional flexibility.

8. DEVELOPMENT OF RELATIONSHIPS TO PREDICT PRODUCTION RATE

Manufacturing costs can be divided into two categories: fixed costs, which do not vary with the level of output of the system, and variable costs, which are affected by the level of output. The level of output of the system is proportional to the production rate (Rp); therefore, a prediction of the variation of Rp with respect to certain system variables can be made. Two such relationships are discussed here. The developed relationships are used to predict the rate of production for part types produced by the machine cell. These Rp values can be substituted into the equation for cost per unit produced, as follows:

    Cpci = Cmi + Co * (1/Rpi) + Cno                                  (33)

where

    Cpci = cost to produce one unit of part type i
    Cmi = material cost per unit of part type i
    Co = cost per hour of operation for the machine cell
    Rpi = production rate for part type i
    Cno = nonoperation cost per unit (associated with material handling and so forth).

8.1. Relationship of Production Rate to Number of Parts Produced
Each simulation run of the system model produced a slightly different quantity of each part type. The results from individual simulation runs of Cell 1 were plotted, and the production rate for each part type appeared to vary linearly with the quantity of that part type produced. For each part type, a linear regression analysis was performed on the data using SYSTAT, a statistical analysis software package, resulting in a relationship of the following form:

    Rpi = A0 * PPi + A1                                              (34)

where

    Rpi = production rate for part type i
    PPi = quantity of part type i produced
    A0, A1 = constant parameters.

A coefficient of determination (R2) value, defined to be the proportion of the observed dependent-variable variation that can statistically be explained by a linear relationship between dependent and independent variables, was also determined for each data set. Intuitively, it is expected that these R2 values should be high, which proved to be the case. The constants for each of these equations are listed in Table 9.
Table 9
Coefficients for Rpi versus PPi

    Part Type   A0         A1       R2
    2           0.000138   -0.012   0.998
    4           0.000139   -0.015   0.993
    6           0.000142   -0.026   0.994
    8           0.000131    0.008   0.989
    11          0.000134   -0.002   0.990
    14          0.000131    0.005   0.990

These equations offer the user a quick "rule of thumb" method of predicting the required production rate given the quantity required for a certain part type, assuming the change in "part type mix" is small (|PPi,old - PPi,new| ≤ 0.1 PPi,old) and overall machine cell production capacity remains relatively constant. Part type mix is defined to be the set of values defined by PPi / (Σ PPi). The constant cell capacity constraint requires that increases in the quantity of one part type produced be offset by decreases in the quantity produced of another part type, unless the cell is operating below capacity. The use of these equations provides not only values of Rpi, but also an indication as to whether the indicated Rpi values are achievable. If radical changes in part type mix (more than approximately 10% of the magnitude of any PPi) are to be made, the simulation model should be modified to reflect these changes, and a new set of equations must be generated. As changes in part type mix become progressively larger, the equations presented here become increasingly poor predictors of system behavior.
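The Eq. (34) fit is an ordinary least-squares regression of production rate against quantity produced, and the fitted line then feeds Eq. (33) for unit cost. The run data and cost figures below are fabricated for illustration, not the chapter's results.

```python
def fit_line(x, y):
    """Ordinary least squares: return (A0, A1) for y = A0*x + A1 (Eq. 34)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a0 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
         sum((xi - mx) ** 2 for xi in x)
    return a0, my - a0 * mx

def cost_per_unit(Cm, Co, Rp, Cno):
    """Eq. (33): Cpc = Cm + Co * (1/Rp) + Cno."""
    return Cm + Co / Rp + Cno

# Hypothetical quantities produced (PP_i) and observed rates (Rp_i, parts/min)
# over four runs; the data lie exactly on the line 0.000140*PP - 0.016.
PP = [1800, 1900, 2000, 2100]
Rp = [0.236, 0.250, 0.264, 0.278]
A0, A1 = fit_line(PP, Rp)
print(round(A0, 6), round(A1, 3))   # → 0.00014 -0.016

# Predicted rate at PP = 1950 parts, then unit cost with hypothetical
# $4/unit material, $0.30/min cell operation, and $0.50/unit nonoperation cost.
Rp_hat = A0 * 1950 + A1             # about 0.257 parts/min
print(round(cost_per_unit(4.0, 0.30, Rp_hat, 0.50), 2))  # → 5.67
```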
8.2. Relationship of Production Rate to Operation Times

Several simulation runs were made during which the average duration of each operation in the production of each part type was recorded. These data were subjected to a multiple linear regression analysis, and relationships between operation times and production rate were generated. Each of these relationships was of the following form:

    Rpi = A1*OP1 + A2*OP2 + ... + An*OPn + C                         (35)

where

    Rpi = production rate for part type i
    OPj = time duration for operation j
    n = number of operations required to produce the given part type
    Aj = constant coefficients
    C = a constant.
The coefficients, constants, and R2 value for each of these equations are listed in Table 10.

Table 10
Coefficients for Rpi versus OPj (part types as in Table 9)

    Part Type    2       4       6        8       11       14
    A1           1.20   -3.10   -0.52     0.39    37.90   -0.79
    A2          -0.22   -1.40    0.79    -0.13    76.82   -0.40
    A3           1.28   -3.85   -1.05    -0.20     0.199  -2.95
    A4          -0.09    3.32    0.52     0.24   -48.75    0.77
    A5          -0.28    0.15  -36.37     0.69    21.77   -0.33
    A6          -0.09    1.60    0.31     0.64    -6.46    0.53
    R2           0.97    0.80    0.79     0.96     1        1
The same constraints and assumptions hold for the equations developed here as for the relationships Rpi = f(PPi). It should also be noted that the following relationship applies:
∂Rpi/∂OPij = Aij        (36)

where
Rpi = production rate for part type i
OPij = time duration of operation j for part type i
Aij = coefficient associated with operation j for part type i in the above equations.
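Since the Aij coefficients are the sensitivities of Rpi to the operation times, ranking operations by |Aij| shows where a change in operation time moves the production rate most. A sketch using the first column of coefficients from Table 10 (which part type that column corresponds to is not fully legible in the source):

```python
# Rank operations by sensitivity |Aij| = |dRpi/dOPij| (equation 36).
# Coefficients are the first column of Table 10; the part type label
# for that column is not fully legible in the source.
A = {"OP1": 1.20, "OP2": -0.22, "OP3": 1.28,
     "OP4": -0.09, "OP5": -0.28, "OP6": -0.09}

ranked = sorted(A.items(), key=lambda kv: abs(kv[1]), reverse=True)
for op, a in ranked:
    print(f"{op}: dRpi/dOP = {a:+.2f}")
# OP3 and OP1 dominate: changes to those operations move Rpi the most.
```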
This equation indicates that the rate of change of the production rate for any part type i with respect to any operation time is given by the Aij coefficient; that is, the sensitivity of Rpi to changes in operation times is measured by Aij. The magnitudes of the Aij coefficients relative to each other provide valuable information to management when changes in machining operations, equipment upgrades, and other such changes in the machine cell are contemplated.

9. SUMMARY AND CONCLUSION
A new methodology is proposed for the design of flexible manufacturing cells. A coding system is devised for part code assignment. The information required for coding the parts using the proposed method can be retrieved from the firm's databases. The formation of part families is based on all the attributes used for part identification that increase the degree of association of parts within the families. A nonhierarchical approach was applied in order to form part families. The final portion of the methodology involves the design of machine cells. The
optimum numbers of machines, tools, and fixtures are determined from the annual demand for parts and the availability of required machines and tools. A simulation analysis was also performed to study the feasibility of the results from the mathematical model by incorporating a real-world scenario and its impact on the solution of the problem.

9.1. Comparison of Results from the Mathematical and Simulation Models
The number of duplicates and replacements of machines and tools predicted by the simulation model was estimated to be three times larger than that predicted by the mathematical model. This disagreement is due to the fact that the simulation model allows both part scrapping and reworking to take place. Parts failing inspection may be reworked twice before being scrapped, and as a result the number of operations carried out by each processing machine in the cell may be much higher than the number of operations required to produce the final number of finished parts. While 16,277 finished parts were produced by the cell, there were 11,627 reworked parts and 426 scrapped parts. In addition, the simulation model incorporates machine breakdowns. While these breakdowns do not affect the predictions of the model concerning machine duplications due to excessive time-in-operation, they do lower the productivity of the cell. The mathematical model, therefore, is more optimistic in its predictions of the performance of the cell. The simulation model, because it incorporates a higher degree of realism, is more pessimistic in these projections.

9.2. Effectiveness of the Simulation Model
The simulation model can easily be modified to reflect changes in the machine cell such as changes in the processing rate of machine tools, increases and decreases in machine reliability (due to the purchase of higher quality machines, for instance), and changes in part type mix. The results of these changes in cell parameters can be evaluated using the simulation model, providing management with valuable information upon which to base decisions.

9.3. Recommendations for Further Research
The areas of future investigation and analysis available to the user are virtually unlimited. By modifying the existing models, experiments could be performed to examine the effects of the following:
1) Interactions with the machine breakdowns
2) Changes in the statistical distributions used to describe the time between machine failures
3) Changes in the part type mix
4) Addition of material handling and setups to the simulation model.
Using further regression analysis and data collected from the existing model, the designer can generate any number of mathematical relationships for further study and evaluation. Of particular interest is the effect of material handling considerations on cell performance. The addition of material handling and transport times to the model would allow evaluation of the effects of different cell layouts (placement of machines within the cell). SLAM II offers the model developer the ability to animate and simulate a design model through the use of the SLAMSYS package. Animations can be invaluable in visualizing the
processes taking place in the system, and in presenting those processes to the model user.
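Item 2 in the list above (changing the distribution of time between machine failures) can be prototyped outside SLAM II before committing to a model change; a minimal sketch comparing exponential and lognormal interarrival models matched to the same mean time between failures (all parameters here are assumed for illustration, not taken from the study):

```python
import math
import random

# Compare two models for machine-failure interarrival times with the same
# mean time between failures (MTBF). Parameters are illustrative assumptions.
random.seed(42)
MTBF = 200.0  # minutes, assumed

def exp_interarrival():
    """Exponential time between failures (the classic memoryless model)."""
    return random.expovariate(1.0 / MTBF)

def lognormal_interarrival(sigma=0.8):
    """Lognormal alternative; mu chosen so the mean equals MTBF."""
    mu = math.log(MTBF) - sigma * sigma / 2.0
    return random.lognormvariate(mu, sigma)

n = 100_000
exp_mean = sum(exp_interarrival() for _ in range(n)) / n
log_mean = sum(lognormal_interarrival() for _ in range(n)) / n
print(round(exp_mean, 1), round(log_mean, 1))  # both close to 200
```

Holding the mean fixed while varying the distribution's shape isolates the effect of failure-time variability on cell performance, the kind of experiment Law (1990) discusses for machine downtime models.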
REFERENCES

1. Anderberg, M.R., Cluster Analysis for Applications, Academic Press, New York (1973).
2. Chang, T.C., R.A. Wysk, and H.P. Wang, Computer-Aided Manufacturing, International Series in Industrial and Systems Engineering, Prentice-Hall, Englewood Cliffs, New Jersey (1991).
3. Chu, C. and P. Pan, The Use of Clustering Techniques in Manufacturing Cellular Formation, Proceedings of International Industrial Engineering Conference, IIE, Toronto, Canada, 495-500 (1988).
4. Dutta, S.P., R.S. Lashkari, G. Nadoli, and T. Ravi, A Heuristic Procedure for Determining Manufacturing Families from Design-Based Grouping for Flexible Manufacturing Systems, Computers & Industrial Engineering, Vol. 10, No. 3, 193-201 (1986).
5. Eades, D.C., The Inappropriateness of the Correlation Coefficient As a Measure of Taxonomic Resemblance, Systematic Zoology, Vol. 14, 98-100 (1965).
6. Goodall, D.W., A New Similarity Index Based on Probability, Biometrics, No. 22, 882-907 (1966).
7. Groover, M.P., Automation, Production Systems, and Computer-Integrated Manufacturing, Prentice-Hall, Englewood Cliffs, New Jersey (1987).
8. Houtzeel, A. and B.A. Schilperoort, A Chain-Structured Part Classification System (MICLASS) and Group Technology, Proceedings of the 13th Annual Meeting and Technical Conference, Cincinnati, Ohio, 383-400 (1976).
9. Kamrani, A.K., Computer Applications in a Manufacturing Environment: Planning, Justification, and Implementation Handbook, University of Missouri-Rolla, Missouri (1993).
10. Kusiak, A., Intelligent Manufacturing Systems, International Series in Industrial and Systems Engineering, Prentice-Hall, Englewood Cliffs, New Jersey (1990).
11. Kusiak, A., The Part Families Problem in Flexible Manufacturing Systems, Annals of Operations Research, Vol. 3, 279-300 (1985).
12. Law, A.M., Models of Random Machine Downtime for Simulation, Industrial Engineering, 58-61 (1990).
13. McAuley, J., Machine Grouping for Efficient Production, Production Engineer, Vol. 51, 53-57 (1972).
14. Opitz, H., A Classification System to Describe Workpieces, Pergamon Press, Elmsford, New York (1970).
15. Pritsker, A.B., Introduction to Simulation and SLAM II, Third Edition, John Wiley & Sons, New York (1986).
16. Pritsker, A.B., C.E. Sigal, and R.D. Hammesfahr, SLAM II: Network Models for Decision Support, Prentice-Hall, Englewood Cliffs, New Jersey (1989).
17. Sneath, P.H.A. and R.R. Sokal, Numerical Taxonomy: The Principles and Practice of Numerical Classification, Freeman, San Francisco (1973).
18. Snead, C.S., Group Technology: Foundation for Competitive Manufacturing, Van Nostrand Reinhold, New York (1989).
19. Steudel, H.J. and L.E. Berg, Evaluating the Impact of Flexible Manufacturing Cells via Computer Simulation, Elsevier Science Publishers (1985).
20. Willett, P., Similarity and Clustering in Chemical Information Systems, John Wiley & Sons, New York (1987).
SUBJECT INDEX

A
Adaptive clustering, 257
Algorithm, 6, 7
Analytical models, 168
Array based clustering methods, 4

B
Boring, 284

C
Capacity limitations, 9
Cavity complexity, 290
Cell formation, 3, 4, 8
Cell loading, 98
Cell priority, 103
Cell scheduling, 98, 115
Cell size constraints, 9
Cellular manufacturing, 47, 73, 130, 147, 203, 351, 353
Classification, 352
Classification and Coding, 4
Clustering based approaches, 252
Code construction, 291
Coefficient based measures, 10
Computer aided tool, 294
Cost based measures, 10
Cost estimation, 283
Cross clustering method, 229

D
Database, 65
Design quality, 63
Die-castings, 283
Direct methods, 80
Drilling, 284

E
Earliest due date, 104
Evaluation, 67

F
Flexibility, 3
Flexible manufacturing systems, 3, 4, 73
Flow analysis, 229
Flow time, 214, 215
Fuzzy ART Neural network, 257
Fuzzy clustering, 5

G
Graph theoretical approaches, 5, 48
Group technology, 63, 167, 283

H
Heuristic procedures, 3, 5
Holistic approach, 129

I
Indirect methods, 80
Integer programming, 7, 78
Intercell, 10

K
Knowledge based, 5

L
Lexicography, 120
Logical constraints, 9

M
Machine investment, 16
Machine sharing, 203
Management, 60, 61
Manufacturing cell, 181
Manufacturing cell loading, 97
Manufacturing constraints, 9
Mathematical programming, 3, 4, 5
Material costs, 284
Matrix techniques, 76
Mixed integer programming, 7
Model development, 19
Multiple objectives, 187

N
Network flow, 47, 49
Neural network, 6, 213
Notations, 20

O
Objective function, 8
Optimization production technology, 3

P
Parametric analysis, 167, 174
Part families, 12, 54
Part retrieval, 291
Partitioning techniques, 73
Pattern recognition methods, 5
Performance evaluation, 275
Performance measures, 116, 152, 181, 229
P-median model, 23
Primary cell rule, 104
Primary product rule, 104
Processing costs, 284
Product priority, 103
Production flow analysis, 229

Q
Quadratic programming, 49

R
Relaxation methods, 47

S
Setup cost, 16
Similarity coefficients, 16
Simulation, 351
Simultaneous, 263
Solution methodologies, 74
Statistical analysis, 195
Statistical clustering algorithms, 5
Surface finish, 288

T
Taguchi method, 189, 195
Tardiness, 117
Trade-off, 24
Transient period, 375