MAKING COMMON SENSE COMMON PRACTICE
Models for Manufacturing Excellence, Third Edition
RON MOORE
AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Elsevier Butterworth–Heinemann
200 Wheeler Road, Burlington, MA 01803, USA
Linacre House, Jordan Hill, Oxford OX2 8DP, UK

Copyright © 2004, Elsevier Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier homepage (http://elsevier.com), by selecting "Customer Support" and then "Obtaining Permissions."

Recognizing the importance of preserving what has been written, Elsevier prints its books on acid-free paper whenever possible.

Library of Congress Cataloging-in-Publication Data: Application submitted.

ISBN: 0-7506-7821-6

British Library Cataloguing-in-Publication Data: A catalogue record for this book is available from the British Library.

For information on all Butterworth–Heinemann publications visit our Web site at www.bh.com

04 05 06 07 08 09 10   9 8 7 6 5 4 3 2 1

Printed in the United States of America
Contents
Acknowledgments

Preface

1. Manufacturing and Business Excellence
    The Scene
    The Players
    Aligning the Manufacturing and Marketing Strategies
    Becoming the Low-Cost Producer
    Application of Increased Capacity
    Beta's Beaver Creek Plant—RoNA vs. Uptime
    A Model for Becoming the Low-Cost Producer
    Steps to Manufacturing Excellence
    Measuring Losses from Ideal
    Sample Calculation of Batch Plant OEE
    Discussion of Sample Measurement
    A Special Case—Beta's Dwale Plant
    Differences between Batch and Continuous Manufacturers
    Lean Manufacturing, Six Sigma, and Focused Factories
    Lean Manufacturing
    Six Sigma
    Focused Factories
    Summary
    References

2. Benchmarks, Bottlenecks, and Best Practices
    Benchmarking—Finding Benchmarks and Best Performers
    Making the Comparison
    The Numerator
    The Denominator
    Bottlenecks—A Dynamic View
    Manufacturing Uptime Improvement Model
    References
    Additional Reading

3. Aligning the Marketing and Manufacturing Strategies
    Beta's Pracor Division
    Market/Product Success Factors
    Volume and Market Growth Analysis
    Manufacturing Capability for Supporting Market/Volume Goals
    Performance by Plant and by Product Line
    The Plan
    Revised Pricing Strategy
    Plant Performance Requirements
    Other Actions Required
    The Expected Results
    Effect of Product Mix on Manufacturing Performance
    Beta's Sparta Plant—Rationalizing Product Mix
    Benchmarking Audit
    Product Mix
    Product Mix Optimization Process
    Beta's Leets Division—Rationalizing Customers and Markets
    The Results
    Summary
    References

4. Plant Design and Capital Project Practices
    The Design Process
    Design Objectives
    Key Questions
    Operations and Maintenance Input
    Estimating Life-Cycle Costs
    Additional Case Histories
    Payback Analysis—Too Simple and Too Expensive—Payback Is Hell!
    Summary
    References

5. Procurement Practices
    A Model for Development of Strategic Alliances—Goose Creek Plant
    Pump Reliability Best Practices
    Supplier Selection Process for Improved Specifications for Equipment Reliability—Mossy Bottom Plant
    Vibration Standards for Rotating Machinery
    Balancing Specifications
    Resonance Specifications
    Bearing Installation Specifications
    Machinery Alignment
    Motor Specifications
    General Requirements
    Raw Material Cost Improvement
    Improving Supplier Reliability—A Case Study
    Supply Chain Principles—A Strategic Approach
    References

6. Stores/Parts Management Practices
    What Are Stores?
    The "Cost" of Stores
    What Stores Are Needed—Kind and Quantity
    Bill of Material and Catalog Management
    Methods
    Partnerships with Suppliers
    Standardization
    The Store Layout
    Procedures
    Work Environment and Practices
    People
    Training
    Contracting the Stores Function
    Key Performance Indicators
    Minimizing Stock Levels—A Case Study
    Beta's Wheelwright Plant
    Spare Parts Criticality Ranking Model
    Summary
    References

7. Installation Practices
    Follow Existing Standards and Procedures
    Verify and Use Appropriate Manufacturer Recommendations
    Installation and Commissioning of Rotating Machinery
    Flange and Joint Installation
    Workshop Support
    Housekeeping
    Use of the Predestruction Authorization
    Installation and Startup Checklists
    Equipment
    Process
    Commissioning Test Periods
    Quick Changeover Principles
    Summary
    References

8. Operational Practices
    Improving Operating Practices to Reduce Maintenance Costs
    Consistency of Process Control
    Control Loops
    Process Conformance for Operational Excellence
    Operator Basic Care
    Establishing Operator Care/Ownership
    Operator-Owner and Maintainer-Improver Principles
    Shift Handover Process
    Production Planning
    Advanced Process Control Methods
    Summary
    References

9. Maintenance Practices
    Going from Reactive to Proactive
    "Worst Practices"
    Best Practices
    Beta's Repair Culture
    Maintenance Practices
    Preventive Maintenance
    Predictive Maintenance
    Beta's Current Predictive Maintenance Practices
    Beta's Predictive Maintenance Deployment Plan
    Proactive Maintenance
    Focused Factories and Maintenance Practices: Centralized vs. Decentralized Maintenance
    The Need for Integrating Preventive, Predictive, and Proactive Practices
    Life Extension Program
    Maintenance Practices and Safety Performance
    Summary
    References

10. Optimizing the Preventive Maintenance Process
    First Things First
    Creating Equipment Histories from "Scratch"
    The Model
    Step 1—Set up Your Database
    Step 2—Determine PM Necessity
    Step 3—Analyze Your PMs
    Step 4—Optimization
    Some Examples
    Case Histories
    Mechanical Integrity
    Summary
    References

11. Implementing a Computerized Maintenance Management System
    Case History
    Preliminary Authorization and Selection
    Selection Criteria
    Impediments and Solutions
    Pilot Plant Implementation
    Procedures
    Results
    Total Cost Savings Amounted to Well over $6,000,000
    Reference

12. Effective Use of Contractors in a Manufacturing Plant
    The Reliability Improvement Program
    Consolidating Maintenance Contractors
    What Happened?
    Current Situation
    Lessons Learned
    Best Use of Contractors
    Contractor Selection
    Summary
    References

13. Total Productive and Reliability-Centered Maintenance
    Total Productive Maintenance Principles—TPM
    Reliability-Centered Maintenance Principles—RCM
    Case Study—Applying FMEA/RCM Principles at a Business System Level
    Some Additional Benefits
    Results
    Case Study—Improving Packaging Performance
    Business System-Level Failure Modes and Effects Analysis (FMEA)
    Application of TPM Principles
    Clean, Inspect and Restore the Packaging Line
    The Result
    Summary
    References

14. Implementation of Reliability Processes
    Whamadyne's Journey to Manufacturing Excellence
    Whamadyne's Operational Characteristics
    Whamadyne's Reliability Improvement Plan
    Background
    Improvement Opportunities
    Recommendations
    Unexpected Obstacle
    Results—World-Class Performance
    Constraints to the Improvement Process
    Summary
    References

15. Leadership and Organizational Behavior and Structure
    Leadership
    Vision, Reality, Courage, and Ethics
    Personal Humility and Professional Resolve
    Building Character through Principles
    Aligning the Organization
    Common Leadership Characteristics
    Organizational Issues
    Empowerment
    Empowerment as a Disabler
    Teams
    An Analogy
    Current Teamwork Practices
    The Reliability Process—A Better Way
    Reliability Improvement Teams
    Some Rules
    Organizational Structure
    Centralized Manufacturing Support Specialists
    Virtual Centers of Excellence
    Reliability Managers/Engineers
    Predictive Maintenance
    Centralized vs. Decentralized Organizations
    Mission Statements
    Communication of Performance Expectations
    Implementing Reliability and Manufacturing Excellence in a Union or Nonunion Plant
    Compensation
    Managing Change
    Articulate a Compelling Reason for Change
    Apply Leadership and Management Principles
    Communicate Your Strategy and Goals
    Facilitate Employee Implementation of the Change Process
    Measure the Results: Reward Good Behavior; "Punish" Bad Behavior
    Stabilize the Changes/Organization in the New Order
    Summary
    References

16. Training
    The Strategic Training Plan
    Boss as Trainer
    Training vs. Learning
    Training for Pay
    Training in Appropriate Methodologies
    How Much Training Is Enough?
    Multiskilling and Cross-functional Training
    Intellectual Capital
    Summary
    References

17. Performance Measurement
    Sample Production Measures
    Sample Maintenance Measures
    Other Corporate Measures
    Industry-Specific Measures
    Leading and Lagging Indicators
    Return on Net Assets or Return on Replacement Value
    Plant Age and Reliability
    Measure for Weaknesses
    References

18. Epilogue
    Beta's 10-Point Plan

Appendix A. World-Class Manufacturing—A Review of Several Key Success Factors
    Introduction
    Definitions of Key Success Factors
    Leadership and Management Practices
    Operating and Maintenance Practices
    General Findings
    Summary

Appendix B. Reliability Manager/Engineer Job Description

Index
Acknowledgments
Thanks to those in manufacturing plants who, given the proper tools, encouragement, and leadership, can and are providing manufacturing excellence: folks like Larry Funke and Mike Barkle; Mickey Davis; John Kang; Lester Wheeler, Joe Robinson, and Jerry Borchers; Glen Herren; Bob Seibert; Chuck Armbruster, Pat Campbell, and Al Yeagley; Ian Tunnycliff, James Perryer, and Andy Waring; Chris Lakin and Paul Hubbard; Wayne Barnacal, Rozano Saad, David Callow, Ben Coetzee, Colin Deas, Xavier Audoreen, Bill Beattie, Jim Holden, and Otto Moeckel; John Guest, Ron Morgan, Bill Elks, and Dan Merchant; Ron Rath, Ken Rodenhouse, Bill Karcher, Chris Sigmon, and Mike Mercier; Garry Fox; Dan Duda; Reggie Tolbert and Don Niedervelt; John Lightcap, Gary Pellini, and Bill Blanford; Chris McKee; Ken Lemmon; Phil Bates; Jim Lewis; Rob Grey; Jim Schmidt; George Schenck; Martin Markovich; Jerry Putt; Eric Sipe, Stefan Stoffel, and Bernold Studer; Bryan Mears and Leonard Frost; Bob Hansen and Ian Gordon; Tom Strang and Ron Lerario; Larry Payne; Brendan Keane; Mike Stumbo; Colin Ferguson; Grant Ferguson; Gilbert Savard; Michel Giasson; Royce Haws; Danny Reyes; Joseph Motz; Martin Cheyer; Peter DeQuattro; Pat Pollock; Dave Boyd; Mark Dunlay; Ed Potter; Alberto Fernandez; Kel Hoole; Phil Cooke; Nelson Dube; Terry Strange; Simon Acey; Eva Wheeler; Bob King; Mike Byrne; Ian Tunneycliff; Noel Judd; Clive Moore; Dan Tenardi; and literally hundreds of others.
Thanks to those in the “corporate office” who, in spite of a sometimes thankless task, work diligently with those in the plants to facilitate the improvement process: folks like Rick Thompson; Harry Cotterill; Steve Schebler; Bob Pioani; Jerry Haggerty; Joe McMullen; Stu McFadden and Butch DiMezzo; Carl Talbot; Dick Pettigrew; Kevin Bauman; Joe Wonsettler; Rick Tucker; Erich Scheller; Pat DiGiuseppe; Gary Johnson; Dennis Patrick; Al Foster; Tim Goshert; Rick Baldridge; Paul DeRuijters; Phil Conway; Alan Tudehope; Greg Bredsell; Hugh Blackwood; and David Brown. Thanks to executives who have the courage to think a little differently about manufacturing excellence and reliability—Rick Higby; Sandy Anderson; Warren Wells and Ron Plaisier; Adrian Bromley and Doug Coombs; Suhardi; Thomas Bouckaert; Dianne Newhouse; Larry Jutte; Ken Plotts; Eion Turnbull; Bob Williams; R. Paul Ruebush; K. B. Kleckner; Jeff Redmond; Ger Miewessen; John Manning; Vince Adorno; Tim Eberle; Jim Whiteside; Greg Waters; Bernie Cooper; and Ron Christenson. Thanks to those who created the original reliability-based maintenance strategy and paper—Forrest Pardue and Ken Piety— and to those who helped further develop the concepts—Jeff Wilson and particularly Alan Pride for adding depth of knowledge to reliability requirements. Thanks to all the folks who have been especially influential in helping create a vision for the way manufacturing plants ought to be run and continue to sustain and take reliability methods forward, regardless of any position in corporate hierarchy: David Burns and Bill Holmes; Andrew Fraser, John Billington, Neil Mann, and David Ormandy; Red Slater; Bob Hansen; Tim Murnane; Tommy Fagan; Stu Teffeteller; Joe Petersen and Kay Goulding; and Vince Flynn, Roy Schuyler, David Ellison, and Ed Jones. Thanks to Carol Shaw and Laura Hooper at the University of Dayton for helping create the manufacturing reliability series and making manufacturing excellence an integral part of the University’s vision. Thanks to Alan Pride for contributing portions of certain chapters; to Winston Ledet, Jr., David Ormandy, Ron Rath, and Andy Remson for providing feedback on the manuscript and making it a better product. Thanks to Mike Taylor and Chris Crocker for providing the statistical analysis of Appendix A.
Thanks to Kris Jamsa, Tim Calk, Leigh Owen, Cheryl Smith, Phil Carmical, and all the staff at Gulf Publishing for their hard work in getting this book completed and advising along the way. Thanks to Mom and Dad for teaching me that you’re only limited by your willingness to try. Thanks to my children (grown now) who taught me more about management and leadership and the difference than all the business management books written. Thanks to those who I may have forgotten. Please accept my apology and my thanks to you, too. And, finally, my biggest thanks to Kathy, my wife and best friend, for all her love, patience, encouragement, guidance, and belief in me.
Preface
This book is about maximizing manufacturing business performance through superior (and yet very basic) practices that allow a given company, industry, and economy to prosper in an increasingly competitive world. Key to this wealth creation are concepts related to uptime and asset utilization rates as compared with ideal; accounting for losses from ideal; unit cost of production (vs. costs alone); applying best practices to minimize losses from ideal, integrating the various manufacturing functions using a reliability-driven process; and, finally but certainly not least, integrating the marketing and manufacturing strategies. It is based on the best practices of some of the best plants in the world, and it serves as a model for those seeking to achieve world-class performance in a manufacturing business. It is fundamentally simple in concept, but far more difficult in execution. Indeed, the striking difference between the best manufacturers and the mediocre ones is that the best companies do all the little things exceptionally well and in an integrated way. Others may only feign their understanding of excellence. The practices described require leadership (more so than management), tenacity, teamwork, respect for the dignity and contribution of each individual, and fundamentally a shift from the historical mode of managing through straight cost-cutting to a more successful method of managing through improving processes for manufacturing reliability and expecting the costs to come down as a consequence. I have worked with many manufacturers, and most of them have been about aver-
age. A few have been very poor, and even fewer still have been excellent. The models presented represent a mosaic of the best practices of the best plants encountered. The case histories provide examples of some of the less than best practices encountered. I know of no single plant that does all the best practices outlined. I challenge all who read this book to become the first—you will be well rewarded. There are, as you know, many books on manufacturing and operations management, and I thank you for choosing this one. What makes this one different? I believe that it fills a void in current literature on manufacturing practices, particularly in how we design, buy, store, install, operate, and maintain our manufacturing plants; and, more importantly, in how we integrate these activities, including the integration of the manufacturing function with marketing and research and development. There are many books on operations management, TPM, Kaizen, etc. However, in my view most of these focus on production flows, organizational structures, material management, etc., and therefore ignore other major issues that this book addresses, particularly the integration of these practices. Indeed, if those books, and the understanding of manufacturing practices they bring were so good, why does the typical manufacturer have an asset utilization rate, or alternatively overall equipment effectiveness rate, of some 60% of ideal for batch and discrete manufacturers, and 80% for continuous and process manufacturers; whereas, world-class manufacturers, few that they are, typically run in the range of 85% to 95%? Clearly, this gap is too large to have been accommodated by current knowledge and practices. I hope to close that gap with this book, because it is about the practices of those that have achieved 85% to 95%, as well as those that are moving in that direction. This difference could indeed represent a competitive edge for a company, an industry, or even a country. Business Week has often reported that manufacturers are operating at 83%+ of capacity, suggesting that this is the point at which inflationary pressures increase. While this may be true of their historical capacity, it is not true of their real capacity. Most manufacturers have a “hidden plant” within their operation that could lead to a better competitive position for them, their industry, and ultimately for greater economic strength for this country, or for the companies and countries that achieve superior performance. Imagine the economic advantage of this country if we combined our innovation, marketing, and distri-
bution skills with superior manufacturing capability. Imagine if we don’t and others do. This book could well have been called The Manufacturing Prism, based on my observations of the behavior of manufacturing managers, particularly the more senior managers. Merriam Webster’s Collegiate Dictionary (1996) defines a prism as (among other things) “a medium that distorts, slants, or colors whatever is viewed through it.” This definition fits my observations very well in that every manager (and individual) “distorts, slants, or colors” whatever is viewed by him or her according to his or her own position in the business spectrum. Each one filters light (or information) based on a unique position, experience, value system, etc., in the organization. Because we all do this to one extent or another, I find no particular fault with this behavior, except that many fail to recognize the other functions, or “colors,” associated with running a world-class organization, and the implications of their decisions on the organization as a whole. Even when these “colors” are recognized, most also tend to draw distinct boundaries “between colors,” where in fact the reality is a continuum in which one color “melts” into another. Typically, each function, department, group, etc., operates in a “silo mentality” (the only color), optimizing the processes in their silos. Unfortunately, this generally fails to optimize the system as a whole, and most systems analysts advise that all systems must be optimized as a whole to ensure truly optimal performance. I hope the models in this book help ensure that optimizing your manufacturing organization is an integral part of a business system. Alan Greenspan is reported to have said that there are three ways to create original wealth—mining, agriculture, and manufacturing. With the dramatic expansion of global telecommunications, the blossoming of the Internet, etc., he may have since modified that opinion to add information systems. Nonetheless, it is clear that an economy’s health and the creation of wealth is heavily dependent upon its manufacturing base. Indeed, according to Al Ehrbar, manufacturing productivity is the single most important factor in international competitiveness. I hope this book will help enhance your competitive position, as well as your industry’s and your country’s. In The Borderless World, Kenichi Ohmae seems to express the view that manufacturing is not necessarily the key basis for wealth creation, but rather that trading, distributing, servicing, and supporting may actually be greater ways of adding value in a given
economy. While his points are well taken, I personally believe that all countries, and particularly the United States, must have an excellent manufacturing base as part of an overall infrastructure for original wealth creation. This is true in two ways. First, it is at the heart of the long-term health of a given economy and improves the opportunity for greater overall economic success and power. Not to do so, and to simply rely on second-order effects related to distribution and support, places any economy at a greater risk long term. Second, it places that country in a better position to exploit opportunities in other markets, either by using existing manufacturing assets or by creating new ones, closer to those markets of interest, that are more competitive. The world is indeed increasingly competitive. The advent of GATT, NAFTA, and other trade agreements are clear indicators of increased trade and competition. But, having traveled literally all over the world, I believe that there is something more fundamental at play. Increasingly, the application of global telecommunications technology has made the access to knowledge—of markets, of technology, of almost anything—readily accessible. This means that you could be on the island of Borneo and readily access the latest technology in Houston for improving manufacturing productivity. (I’ve seen and helped it to be done.) It also means you have access to capital not heretofore as readily available. Stock markets in Tokyo, New York, London, Moscow, etc., are routinely accessible through global media, forcing all companies worldwide to compete for the same capital and to perform at a superior level to be worthy of that capital investment. In his book Post Capitalist Society, Peter Drucker states that the key to productivity is no longer land, labor, or capital, but knowledge. Knowledge leads to increased efficiency and increased access to capital. He further states that “The only long-term policy that promises success is for developed countries to convert manufacturing from being labor based into being knowledge based.” I couldn’t agree more. However, most cost-cutting strategies as currently practiced do not facilitate becoming knowledge based. Indeed, just the opposite may be true, because cost-cutting typically results in even less time to improve knowledge, skills, processes, etc., and frequently results in the loss of knowledge of skilled workers to ensure competitive position. The thrust of this book is to provide you with the basic knowledge required to ensure superior manufacturing performance, so that you can compete in the global marketplace,
attract investment capital, can ensure wealth creation, and can understand and sustain world-class performance. Perhaps Larry Bossidy, chairman and CEO of AlliedSignal said it best in 1995— “We compete in world-class markets in which capacity often exceeds demand. It is essential, therefore, that we become the lowcost producer.” Granted there are other issues related to marketing, R&D, etc., that are also essential, but manufacturing excellence and low-cost production surely places those companies that achieve it in a superior position for business success. This book began around 1990. For 2 years, I had been the president of a small, high-technology company that supplied instruments and software to large industrial manufacturers to help them monitor the condition of their equipment and with that knowledge minimize losses due to unplanned downtime and catastrophic failures. We had recently developed additional products, not because of any grand strategic plan but because we just “knew” our customers would need them. However, it soon became apparent that it would be very difficult to sell these products if we didn’t have a systematic approach that demonstrated how these products related to one another in a manufacturing environment and how they fit into the broader requirements of a manufacturing business. Contrary to our marketing and sales manager’s opinion, we couldn’t just walk into a manufacturing plant loaded like a pack mule with the latest technology, showing them the “neat stuff” we had to sell. We had to have a strategy. To his credit, that manager rose to the occasion. About that same time, and through the creative thinking of the staff of that same marketing manager, we had also launched a program to identify and give an award to those companies that had made best use of our technology—higher production rates, lower costs, better safety, etc. Frankly, the thrust of this effort was to identify these best companies so that we could hold them up to prospective customers and be able to say, in effect, “See what they’ve done, you can do this too!” It was part of our effort to further expand a growing market for our products and to garner more sales for our company. Fortunately, the need for a strategy, combined with the creation of a program to identify some of the better manufacturing companies in the country, led to an understanding of some of the best manufacturing practices, which are still surprisingly absent in typical manufacturing plants. What ultimately resulted from these initial
efforts, including many lively discussions, was a paper titled “The Reliability-Based Maintenance Strategy: A Vision for Improving Industrial Productivity,” a descriptor of some of the best practices in some of the best manufacturing plants in the country at that time. If that paper was the beginning, then this book is an intermediate step. Some will say the content of this book is just common sense. I couldn’t agree more, but in light of my experience in working closely with dozens of manufacturers, and having a passing knowledge of hundreds of manufacturers, common sense just isn’t common practice—hence, the title of this book for making common sense common practice. If manufacturers would just do the basics very well, their productivity would increase substantially, almost immediately. I have no doubt of this. This view is developed and reinforced by the experiences of this book and coincides with the experiences of others. In The Making of Britain’s Best Factories (Business Intelligence Ltd., London, 1996), the overwhelming conclusion is that most factories don’t do the basics well, and that, conversely, the best do. Put another way, if people in manufacturing plants simply treated the plant (or were allowed to treat the plant) with the same care and diligence as they do their cars or homes, an immediate improvement would be recognized. Why then doesn’t this happen? Perhaps the book will shed light on the issues, but creating a sense of ownership in corporate assets and performance is an essential element. Creating this sense of ownership requires respect for the dignity and contribution that each person makes to the organization, as well as trust between the shop floor and management, something that is lacking in many companies. People who feel a sense of ownership and responsibility for the care of the place where they work achieve a higher level of performance. I’ve often told people, particularly at the shop floor level, “If you don’t take care of the place where you make your living, then the place where you make your living won’t be here to take care of you,” an ownership philosophy strongly instilled by my parents. By the same token, most managers today are driven by cost-cutting and labor force reductions in an effort to improve productivity and the proverbial bottom line. Do these actions create a sense of ownership and responsibility on the part of the individuals on the shop floor? Do they feel “cared for” and therefore “caring for” the company where they work? Generally not. An individual who has gained a
certain notoriety for improving corporate performance has the nickname “Chainsaw,” presumably for his severe approach to cost-cutting. Will the “chainsaw” approach to corporate management work? Can you cost cut your way to prosperity? He appears to have personally succeeded in this, but will it work for a company over the longer term? Perhaps, especially when a company is in dire straits, but not likely for most companies according to the following data. The real issues are related to whether or not the right processes have been put in place to address the right markets for the company, long term; whether the marketing and manufacturing strategies have been fully integrated into a business strategy; and whether employees have the understanding and motivation to implement the right processes to achieve world-class performance. If these issues are addressed, success will be achieved; costs will come down as a consequence of good practice and expectations, not as a fundamental strategy; and the company will have a sustainable business culture for the long haul. Several recent studies suggest that cost-cutting is not a very effective long-term tool for ensuring business success. For example, in a study of several hundred companies that had gone through major cost-cutting efforts from 1989 to 1994, published by The Wall Street Journal on July 5, 1995, it was found that only half improved productivity; only a third improved profits; and only an eighth improved morale. In a study published in The Australian newspaper in September 1995, it was found that in those companies that engaged in substantial cost-cutting, their share price led market indices in the first 6 months following the cost-cutting, but lagged market indices in the following 3 years. In another study published by the US Conference Board, reported on August 15, 1996, in The Age, also an Australian newspaper, it was found that in those companies that had undergone substantial layoffs in a restructuring, 30% experienced an increase in costs; 22% later realized they had eliminated the wrong people; 80% reported a collapse in employee morale; 67% reported no immediate increase in productivity; and more than 50% showed no short-term improvement in profits. The Queensland University of Technology, Australia, reported in June 1998 that only about 40% of firms that downsize manage to achieve significant productivity gains from downsizing; that only half the organizations that downsize cut costs; and that downsizing can result in a downward spiral into decline.
Finally, more recently Gary Hamel (Business Week, July 17, 2000) reported on the concept of “Corporate Liposuction,” a condition in which earnings growth is more than five times sales growth, generally achieved through cost-cutting. In a review of 50 companies engaged in this approach, referred to as corporate liposuction, 43 suffered a significant downturn in earnings after 3 years. These companies were notable members of the Fortune 500, such as Kodak, Hershey, Unisys, and others. He emphasizes that growing profits through cost-cutting is much less likely to be sustainable, and must be balanced with sales growth through innovation, new product development, and process improvement. With data like these, why do executives continue to use cost-cutting as a key “strategy” for improvement? Perhaps, as we who are blessed with healthy egos tend to do, each of us believes that we know better than the others who have tried and failed, and we can beat the odds. More likely though, it’s because it’s easy, it’s simple, it demonstrates a bias for action, it sometimes works at least in the short term, and it doesn’t require a lot of leadership skill—anyone can cut costs by 5% and expect the job to be done for 5% less. A friend of mine once remarked that “insanity is repeating the same behavior, but expecting different outcomes.” In any event, with these odds, simple cost-cutting appears to be a poor bet. Rather, it is critical to put the right systems, processes, and practices in place so that costs are not incurred in the first place; and to set clear expectations for reduced costs as a result of these actions. This is not to say that cost-cutting never applies. Not all the companies studied suffered as a result of their cost-cutting. It may benefit a company, for example, that’s near bankruptcy, that’s a bloated uncompetitive bureaucracy, that’s faced with intransigence in employees or unions, that may need to eliminate so-called “dead wood,” that has situations of obvious waste, or that’s in a major market downturn. But, from the above data, it’s not a good bet. As Deming observed, “Your system is perfectly designed to give you the results that you get.” Costs and performance are a consequence of our system design, and if we reduce the resources available to our system, without improving its basic design, our performance will likely decline. Perhaps we are rethinking the cost-cutting “strategy.” The Wall Street Journal (November 26, 1996) headlined “Re-Engineering Gurus Take Steps to Remodel Their Stalling Vehicles (and Push
Growth Strategies),” acknowledging that reengineering as it has been practiced hasn’t met expectations in many companies and to a great extent has ignored the human element in its approach. I would add that not achieving ownership and buy-in for the reengineering process is a key fault in many companies that have tried it. In my experience, this sense of ownership for the change process is especially needed at the middle management level. Further, capital productivity in a manufacturing plant can be more important than labor productivity. Both are important, no doubt. But, “How much money do I get out for the money I put in?” is more important than “How many units were produced per employee?” especially when many of the people working in the plant are contractors and don’t count as employees. Putting the right processes in place to ensure capital productivity will ensure labor productivity. In my experience, cutting head count adds far less value to an organization than putting in place best practices, improving asset utilization, gaining market share, and then managing the need for fewer people using attrition, reallocation of resources to more productive jobs, improved skills, reduced contract labor, etc. Indeed, as the book illustrates, cutting head count can actually reduce production and business performance. Labor content in many manufacturing organizations is only a small fraction of the total cost of goods manufactured. Concurrently, as noted, asset utilization rates in a typical manufacturing plant run 60% to 80% of ideal, as compared with world-class levels of 85% to 95%. Given that you have markets for your products, this incremental production capacity and productivity is typically worth 10 times or more than could be realized through cost-cutting. And even if markets are soft in the short term, it still allows for lower production costs so that greater profits are realized and strategically market share is enhanced. Put the right processes in place, get your people engaged in a sense of ownership, create an environment for pride, enjoyment, and trust, and the costs will come down. Operating capital assets in an optimal way will provide for minimum costs, not cost-cutting. Consider the US economy, which has a manufacturing base of well over $1 trillion. Imagine the impact on a given manufacturing plant, industry, or even economy, if it could even come close to world-class performance. For a typical manufacturing plant it means millions of dollars; for the US economy, hundreds of billions. As a bonus, we get lower inflation, lower interest rates, greater competitive position, etc.
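For readers who want to see the arithmetic behind this comparison, here is a minimal sketch in Python. Only the 60% to 80% typical and 85% to 95% world-class utilization ranges come from the text above; the plant revenue, margin, labor cost, and head-count figures are hypothetical assumptions chosen purely for illustration.

```python
# Hypothetical plant, used only to illustrate the capital vs. labor
# productivity argument made above.
revenue_at_ideal = 250_000_000   # $/yr if the plant ran at ideal utilization (assumed)
incremental_margin = 0.40        # margin on incremental volume (assumed)
current_utilization = 0.70       # within the typical 60-80% band
target_utilization = 0.90        # within the world-class 85-95% band

labor_cost = 20_000_000          # annual labor cost (assumed)
head_count_cut = 0.10            # a 10% reduction (assumed)

hidden_plant_value = (target_utilization - current_utilization) * revenue_at_ideal * incremental_margin
cost_cut_value = labor_cost * head_count_cut

print(f"Margin from closing the utilization gap: ${hidden_plant_value:,.0f} per year")
print(f"Savings from the head-count cut:         ${cost_cut_value:,.0f} per year")
print(f"Ratio: {hidden_plant_value / cost_cut_value:.0f}x")
```

With these assumed figures the utilization gap is worth roughly ten times the head-count cut, the order of magnitude argued here; given markets for the incremental volume, the comparison only becomes more lopsided.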
This book is an effort to provide ideas to those who are engaged in trying to become world-class performers as to how some of the best companies have accomplished this. Each chapter has been written such that it could stand alone, so you may see some overlap between chapters. I hope this will only serve to reinforce the principles espoused. However, the book is best when read in its entirety to provide a complete understanding of manufacturing and how it fits into business excellence. The processes provided have been shown to work, as the case histories will illustrate, but should not be applied as a literal recipe. Each plant or company has its own culture, its own needs, markets, etc., and therefore this book should be used as a guide, not as a literal mandate. As you read the different sections, you should ask yourself, “Are we doing this?” and if not, “Should we be doing this?” and if so, “What is my personal responsibility to make sure this gets done?” Or, if you disagree with the models presented (because they may not always apply to all situations), use your disagreement to develop your own, and act on them. At its heart, manufacturing excellence requires leadership, but also that each individual assumes a personal responsibility for excellence, working hard individually, and with others, to get as close to ideal performance as reasonably possible. Finally, W. Edwards Deming once said that “Profound knowledge comes from the outside and by invitation. A system cannot understand itself.” I hope this book, which is inherently by invitation, offers you profound knowledge that supports a journey to world-class performance. I thank you for the invitation to share this with you. Ron Moore Knoxville, Tennessee
1. Manufacturing and Business Excellence
You don't do things right once in a while, you do them right all the time.
—Vince Lombardi
The Scene

Kaohsiung Industries, an international manufacturing company, has recently made substantial inroads into US and European markets, shaking the confidence of investors in competing US and European manufacturers. Around the globe, manufacturers like Kaohsiung are capitalizing on lower trade barriers, a growing global economy, and substantial growth in the Asia-Pacific economies, challenging other manufacturers' long-standing positions in their traditional markets.

Beta International, a large manufacturing conglomerate, is under intense pressure from domestic competitors with newer technologies, from foreign competitors with cheaper products, and from within—there's so much politicking that much of the work that's done by many individuals is related to positioning for survival in a streamlined organization. Beta has just named a new CEO, Bob Neurath, who is determined to get the company back on track and reestablish it as the leader in its markets, especially its chemicals sector.

On reviewing current marketing plans, the new CEO has found that the marketing division is really more like a set of sales departments, focused more on selling whatever products R&D develops for
it than on understanding and targeting markets and then developing marketing and sales strategies accordingly. The R&D department has lots of neat ideas, but too often these ideas are not fully linked to a thorough market analysis and understanding of customer wants and needs. Manufacturing is not integrated into either process. There seems to be a “silo” approach to the business. Everybody’s in their silo doing their very best to survive first, and ensure their silo’s success second, but few take an integrated perspective of the business as a whole and how they support the company’s success. Several issues have been developing for some time in its manufacturing division, and much of this is coming to a head at its new Beaver Creek plant. It’s not yet apparent to the new CEO that the Beaver Creek production plant, which manufactures one of its new premier products, is in deep trouble—the process isn’t performing as expected, the plant is frequently down because of equipment failures, the shop floor is hostile toward management, and morale is at an alltime low. This plant was to be the standard by which its other plants were operated, but it has fallen far short of its goals. A new plant manager has been recently assigned to correct these problems, and seems somewhat overwhelmed by the magnitude of the job. Moreover, it’s his first assignment as a plant manager for a plant this size and complexity, having spent most of his career in purchasing, quality assurance, and then marketing. While there are spots of isolated excellence, most other plants in the corporation are not faring much better, as evidenced by the fall in the company’s share price over the past several quarters. Further, recent pressures on interest rates have added even more angst to this capital-intensive company. Most analysts recognize the problems facing Beta, and see intense international competition as a continuing threat to Beta International’s long-term success, and perhaps even its survival in its current form. Inside the Beaver Creek plant, an operator and process engineer have recently come up with several ideas for improving their manufacturing processes, but are having difficulty convincing their boss, the production manager, to use their ideas for better process control—they cost too much, they are too risky, they require too much training, etc. Nearby, a similar situation has developed in the maintenance department, where a technician and maintenance engineer have also come up with some ideas about how to reduce mechanical downtime. Likewise, their boss, the maintenance manager, believes
they’re too expensive and a bit “radical,” and, besides, he opines phrases like “If only people would just do what they’re told, when they’re told, everything would work just fine,” or “If it ain’t broke, don’t fix it.” Meanwhile, back in the design and capital projects department, they know that the plant was designed and installed properly; if only the plant people would operate and maintain it properly, everything would be just fine. All the while, executives are following their traditional approaches, intensifying the pressure for cost-cutting in an effort to become more competitive. Simultaneously, they implore their staff to keep the plant running efficiently at all costs. Few, except for the people of the shop floor, have recognized the irony in this situation. Challenges from global competition, challenges from within, challenges from seemingly every direction are all coming together to threaten the prosperity and perhaps the very existence of a longstanding, respected corporation.
The Players

Though based on actual companies and case histories, the above scenario is fictitious. There is no Beta International or Beaver Creek plant, but it could very well describe the situation in many manufacturing companies today. Further, with some modifications, it could also reflect the situation in power utilities, automotive plants, paper plants, etc., and even in the government sector, where a reduced tax base and demands for cost-cutting are creating intense pressure to improve performance.

Beta International and its Beaver Creek plant, as well as other plants, are used in this book to illustrate real case histories that reflect the actual experience of various manufacturing plants, but the descriptions have been modified to mask the identity of the plants. Though based on real events, these case histories are not intended to describe any specific company's actual performance. Therefore, any correlation, real or imagined, between Beta and any other company is coincidental. Beta International is a composite of many different companies.

All businesses, and particularly manufacturers, are being called upon to do more with less. All are facing intense pressure, either directly or indirectly, from global competition, and all are behaving in very much the same way—cost-cutting is a key corporate strategy. There is nothing inherently wrong with cost-cutting, but it must be
combined with a more strategic process for ensuring manufacturing excellence. Cost-cutting does little to improve the knowledge base that ensures improvements in plant operation, in equipment and process reliability, and, ultimately, world-class performance. So what are we to do? Well, most of us understand that if we don’t grow our businesses, we just aren’t going to be successful. We also understand that cost-cutting typically reduces the effects of a growth strategy. Please do not misunderstand. Being prudent and frugal are hallmarks of good companies; and being the low-cost producer may be a necessity for long-term market leadership. However, too many companies focus on cost-cutting almost to the aversion of growth, and/or the application of best practices, and over the long term may hurt their strategic position. Business excellence in most companies requires that the company do each of the three things shown in Figure 1-1 exceptionally well—marketing, R&D, and manufacturing. Excellence in each is essential for long-term success, and each must be fully integrated with the other. Each must recognize certain overlapping areas, where teamwork and cooperation are required, and that all areas must be fully integrated with a common sense of purpose—maximizing shareholders’ financial returns. Each group must recognize mutual independence and dependence, but in the end, all groups must be synchronized to a common purpose and strategy that fosters a sense of trust—we all have a common understanding of where we’re going and how we’re going to get there. It is common that a company at first will do a better job selling than it will understanding its markets and then developing a sales and distribution strategy to penetrate those markets. It is also common that considerable R&D may not be fully linked to the marketplace and customer requirements. Perhaps more to the point of this book, however, is that few companies do a good job integrating their marketing and R&D strategies with their manufacturing strategy, often treating manufacturing like a big reservoir from which sales “opens a spigot and fills its bucket with product” and then sells it. It’s true of Beta International. While the thrust of this book is primarily manufacturing, it is critical that the manufacturing strategy be fully integrated with the marketing strategy. This is discussed in the following section and in additional depth in Chapter 3.
Figure 1-1. World-class business requires integration of key components.
Aligning the Manufacturing and Marketing Strategies

There are several exceptional publications about developing a manufacturing strategy and ensuring its support for the marketing and overall business strategies [1–5]. These have apparently received insufficient attention until recently, but are now helping Beta to form its strategy, where manufacturing has historically been viewed by the marketing function as, in effect, "the place that makes the stuff we sell."

Beta understands well that all business starts with markets—some existing (basic chemicals, for example), some created from "whole cloth" (new process technology or unique instruments, for example). Beta was principally in mature markets, but of course was investing R&D into new product and process development. However, Beta was not actively positioning its products and its manufacturing strategy to ensure an optimal position. Additional discussion on this, as well as a process for optimizing Beta's product mix, is provided in Chapter 3, but for the time being, the discussion will be limited to the basis for aligning Beta's marketing and manufacturing strategies. What image does Beta want in its markets? What market
share is wanted? What is the strategy for achieving this? Has Beta captured the essence of its business strategy in a simple, clear mission statement (or vision, or both, if you prefer)? Using the model described in more detail in Chapter 3, Beta reviewed its 5-year historical sales for all its product lines, including an analysis of the factors for winning orders, Beta’s market share, the gross margins for given products, and the anticipated total market demand for existing and future products. Beta then reviewed its production capability, including each manufacturing plant’s perceived capability for making a given product. More importantly, however, Beta’s perception of manufacturing capability was balanced with a reality check through an analysis of historical delivery capability and trends for each plant’s performance in measures like uptime and overall equipment effectiveness, or OEE (both defined below relative to ideal performance); unit costs of production for given products; on-time/in-full performance; quality performance; etc. Beta even took this one step further and looked into the future relative to new product manufacturing capability and manufacturing requirements for mature products to remain competitive. All this was folded into a more fully integrated strategy, one for which manufacturing was an integral part of the strategic business plan, and that included issues related to return on net assets, earnings and sales growth, etc. Through this analysis, Beta found that it had many products that were not providing adequate return with existing market position and manufacturing practices, it lacked the capability to manufacture and deliver certain products (both mature and planned) in a costcompetitive way, and it could not meet corporate financial objectives without substantial changes in its marketing and manufacturing strategies. Further, it was found that additional investment would be required in R&D for validating the manufacturing capability for some new products; that additional capital investment would be required to restore certain assets to a condition that would ensure being able to meet anticipated market demand; and, finally, that unless greater process yields were achieved on certain products, and/or the cost of raw material was reduced, some products could not be made competitively. All in all, the exercise proved both frustrating and productive, and provided a much better understanding of manufacturing’s impact on marketing capability, and vice versa.
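The uptime and OEE measures mentioned above are defined in the book's own terms later in this chapter (see "Measuring Losses from Ideal" and the sample OEE calculation). As a rough preview only, the sketch below uses the commonly cited availability × rate × quality formulation measured against calendar time as the ideal; every plant number in it is hypothetical.

```python
# Illustrative only: a common way to express losses from ideal.
calendar_hours = 24 * 365        # ideal: the asset could run all year
hours_producing = 7_000          # hours actually spent making product (assumed)
ideal_rate = 100.0               # units/hour at the demonstrated best rate (assumed)
actual_output = 650_000          # units actually produced (assumed)
good_output = 630_000            # units meeting specification first pass (assumed)

availability = hours_producing / calendar_hours
rate = actual_output / (hours_producing * ideal_rate)
quality = good_output / actual_output

oee = availability * rate * quality
asset_utilization = good_output / (calendar_hours * ideal_rate)

print(f"availability={availability:.1%}, rate={rate:.1%}, quality={quality:.1%}")
print(f"OEE vs. ideal = {oee:.1%}; asset utilization = {asset_utilization:.1%}")
```

The two bottom-line figures are algebraically the same number (about 72% with these assumptions), which is why a plant in the typical 60% to 80% band carries so much "hidden" capacity relative to the 85% to 95% world-class range discussed in the Preface.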
Further, a surprising finding resulted from Beta's effort. Historically, Beta's efforts had focused on cost-cutting, particularly in manufacturing, and within manufacturing, particularly in maintenance. However, as it turned out, this approach was not enlightened. Indeed, in many plants it had resulted in a deterioration of asset condition and capability to a point where there was little confidence in the plant's capability for achieving increased production to meet marketing's plan for increased sales. Other issues, such as poor performance in quality products at a lower cost/price, and less than sterling performance for on-time/in-full delivery to customers, only exacerbated the situation.

After extensive review and intensive improvement efforts, manufacturing excellence became more than just a nice word. Measures associated with manufacturing excellence, asset utilization rate and unit cost of production, became paramount, and the comprehensive integration of manufacturing, marketing, and research and development in achieving business success became better understood. For example, Beta began to adopt the philosophy stated by another major manufacturer [6]:

    As a result of our global benchmarking efforts, we have shifted our focus from cost to equipment reliability and Uptime. Through our push for Uptime, we want to increase our capital productivity 10%, from 80% to 90% in the next several years. We value this 10% improvement as equivalent to US$4.0 billion in new capital projects and replacement projects for global Chemicals and Specialties. Maintenance's contribution to Uptime is worth 10 times the potential for cost reduction. Realizing this tremendous resource has helped make Uptime our driving focus for future competitiveness rather than merely cost reduction.

While prudently and properly saving money is a good thing, Beta is only now beginning to recognize that it will be difficult to save its way to prosperity and that capital productivity must be included with labor productivity in its management measurements. Consider the following example.
Beta’s Whamadyne plant had been “encouraged” to cut labor costs by some 10%, amounting to a “savings” of $2M per year. At the same time, Whamadyne was measured to be one of Beta’s best plants, having an asset utilization rate of some 86%. (It was manufacturing 86% of the maximum theoretical amount it could make.) Whamadyne management developed a plan to increase utilization rates to 92% by improving process and equipment reliability, without any major capital expenditure. Further, it was determined that this increase was worth an additional $20M in gross margin contribution, most of which would flow directly to operating income. It was also determined that requiring a force reduction would jeopardize this improvement. What would you do? What did Beta do? Beta took the risk and achieved much better performance in its plant, and therefore in the marketplace; its maintenance costs came down more than the $2M sought; and it managed the need for fewer people over time through natural attrition, reallocation of resources, and fewer contractors. Genuine “cost-cutting” comes through process excellence, not simply cutting budgets.
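The Whamadyne trade-off reduces to two lines of arithmetic. The sketch below uses only the figures quoted in the example above ($2M from the labor cut, and a 6-point utilization gain worth $20M in gross margin); the per-point value is simply derived from them.

```python
labor_cut_savings = 2_000_000     # $/yr from the proposed 10% labor cost cut
utilization_gain = 0.92 - 0.86    # planned improvement in asset utilization
uptime_value = 20_000_000         # $/yr gross margin attributed to that gain

value_per_point = uptime_value / (utilization_gain * 100)
print(f"Each point of utilization is worth about ${value_per_point:,.0f} per year")
print(f"Reliability option vs. labor-cut option: {uptime_value / labor_cut_savings:.0f}x")
```

The roughly 10:1 ratio is the same order of magnitude as the benchmarking quotation above, which is why Beta chose the reliability route and let the head count fall through attrition rather than mandate a force reduction.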
Becoming the Low-Cost Producer

So what are we to do again? In manufacturing, there is almost always greater supply than demand, requiring that a given plant focus on becoming the low-cost producer of its products. Granted, there are also other market differentiators, such as product quality, delivery performance, customer service, technology, etc., but a key business indicator is the ability to produce quality product at the lowest achievable cost for a given targeted market segment. This helps ensure greater return on assets for further investment in R&D, marketing and distribution channels, etc., to further improve a company's business success.

Consider Figure 1-2, which represents three companies competing in a market where the market price is variable, depending upon the state of the marketplace, the economy of a given region, etc. Each manufacturer makes its product(s) for a given unit cost of production that includes fixed costs, variable costs, capital costs, etc. For the purpose of this discussion, we've assumed that these are products of comparable quality for a given specification in the marketplace.
Figure 1-2. Market survivor profile.
Company C is the high-cost producer. One year it may make money, the next year it may lose money, but, all in all, it may not survive very long, because it is not generating enough working capital to sustain and grow the business. Given this scenario, it is bound to change, one way or another. It is typically fighting for its very survival, and, therefore, it must embark on a course to improve its performance, typically by cutting its operating costs. It is also typically characterized by a reactive, crisis-driven manufacturing and corporate culture. It may even complain that its competitors "must be selling below cost." And, whether it stays in business is, in some measure, determined by competitors, which want prices high enough for good margins, but not so high as to attract additional competition.
executes well, because a kind of "this too will pass" attitude often prevails at the operating level.

Company A is the low-cost producer. It essentially always makes money, and, in some years, it is very prosperous indeed. Notwithstanding market forces, it is in a better position to determine market price and weather a market downturn. It wants the price high enough to ensure good margins, but not so high that new competitors are tempted to make major capital investments. It will work hard to ensure that Company B and Company C are not overly aggressive in pricing, in either direction. It too is generally compelled to change, but for reasons very different from Company C. Its basic mode of operation requires, rather than prefers, the company to be the low-cost, high-quality producer with high market share. Company A has also done a good job integrating its marketing and manufacturing strategies by continuously balancing the drive for higher margins against market share. Its basic culture is one in which constancy of purpose and manufacturing excellence, as determined by uptime, unit cost of production, delivery performance, and safety, are a key focus throughout the organization. Manufacturing excellence, continuous improvement, and being the low-cost producer are inherent in its culture.

The unit cost of production in its simplest form can be characterized as the total cost of manufacturing divided by the total throughput (capacity). With this simple equation, there are three basic ways to lower the unit cost of production:

1. Cut costs while holding production capacity constant.
2. Increase production while holding costs constant.
3. Reduce cost and increase production simultaneously.

The difference between the best companies and the mediocre/poor ones in this model is the emphasis the best companies give to the denominator. That is, they focus on maximizing the capacity available through applying best practices and ensuring reliability in design, operations, and maintenance, and through debottlenecking. They then use that capacity to go after additional market share, with little or no capital investment. Alternatively, they can rationalize underperforming assets without risking timely delivery to the customer. For
example, if they have 10 plants in their system and create 20% additional capacity, they could shut down the poorest performing plant in the system. Or, if they're running a 7-day–3-shift operation, they might be able to go to a 5-day–3-shift operation in the short term. Note also that in doing this, they also minimize the defects that result in failures and additional costs. In other words, they get a bonus—lower operating and maintenance costs. This helps further ensure low-cost production and competitive position. Make no mistake, they do not make products just because they can, creating excess inventory, but rather respond to demand as incurred. However, having the capability to respond to the market with both existing and new products, without significant incremental capital investment, gives them a major competitive advantage. The marketing and sales staff can then make decisions about applying this capacity, pricing, and market share, based on manufacturing performance.

On the other hand, the typical manufacturing company tends to focus on the numerator, that is, cost-cutting, while hoping to hold throughput constant. The best companies focus on improving the reliability of their production operation, thereby improving performance for a given fixed asset and ensuring lower unit costs of production. Further, they get a bonus—by operating reliably, they aren't always "fixing things," or routinely, and inopportunely, changing from one product to another to accommodate markets and customers. Good practice and reliable operation reduce operating costs.

A key point is necessary here. The A's do not ignore costs. Quite the contrary, they are very cost sensitive, expecting to be the low-cost producer. They also expect that costs will continue to come down as they apply best practices. But their principal mode of operation is not to focus on cost-cutting as a "strategy" in itself, but rather to focus on best processes and practices, whereas the mediocre and poor companies use cost-cutting as a principal means for success while expecting that capacity will be available when it is needed. This approach can work, but it rests on the ability of the people at lower levels within the organization to somehow rise to the occasion in spite of what many of them perceive as poor leadership. The higher probability of success rests with focusing on best practice such that the capacity is available when necessary and such that costs are lowered as a consequence. Strategically, this also lowers incremental capital requirements.
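As a rough illustration of this arithmetic, the three levers can be compared in a few lines of Python. All figures below are invented for the example; they are not Beta's numbers, and the point is only that a throughput gain on a fixed asset base typically moves the unit cost at least as far as a budget cut of comparable ambition.

# Minimal sketch: unit cost of production = total cost / throughput.
# All numbers are illustrative, not taken from Beta's plants.

def unit_cost(total_cost, throughput):
    """Unit cost of production for a period."""
    return total_cost / throughput

base_cost = 100_000_000      # $ per year (fixed + variable + capital charges)
base_throughput = 500_000    # units per year

baseline = unit_cost(base_cost, base_throughput)

# Lever 1: cut costs 5% while holding production constant.
cut_costs = unit_cost(base_cost * 0.95, base_throughput)

# Lever 2: increase production 10% (better uptime) while holding costs constant.
more_output = unit_cost(base_cost, base_throughput * 1.10)

# Lever 3: do both at once.
both = unit_cost(base_cost * 0.95, base_throughput * 1.10)

for label, value in [("baseline", baseline), ("cut costs 5%", cut_costs),
                     ("raise output 10%", more_output), ("both", both)]:
    print(f"{label:18s} ${value:7.2f}/unit")

With these invented figures, the baseline is $200/unit, the 5% cost cut gives $190/unit, the 10% throughput gain gives about $182/unit, and doing both gives about $173/unit.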
Company A, as the low-cost producer, is in a much better position to decide whether to pursue a strategy of increasing market share through lower prices and reliability of supply, while still achieving good margins, or to hold market share relatively constant, ensuring very healthy profits, which would finance future investments. However, all three companies must consider that over the long term the price of most manufactured products tends to trend downward. Company A is in a better position relative to future developments, principally because it is driven to hold its position as low-cost producer. Companies B and C are at greater risk if prices do fall; and, surprisingly, Company B may be at particular risk, because it is likely to be more complacent than Company A and Company C, which are compelled to change but for different reasons.

Referring again to Figure 1-2, the gap between unit cost of production and market price represents our gross profit. How do we spend our gross profit? It finances our marketing and sales and the development of our distribution channels, R&D and new product and process development, and general and administrative expenses, and, of course, it provides our resulting operating profits. These profits in turn finance additional capital investment and, ultimately, growth. Put simply, gross profit represents the future of the business. Without gross profit for investing in the innovative efforts associated with marketing, R&D, and new product/process development, or what I'll call "big innovation," the company has little future. Of course, "A" has the brightest prospective future in this simple model. It has the greater ability to finance the "big innovation" required for ensuring its future success.

How do we continually improve our gross profit, and finance our "big innovation," in an intensely competitive world? We must reduce our production unit costs, particularly when market price is declining with time, as it always does over the long term. How do we do that? By cost-cutting? By process improvement? Both? As we'll discuss later, it's through "little innovation" at the plant or operating level, constantly seeking to improve our processes so that costs are constantly driven lower, and so that we can finance our "big innovation" and create a virtuous circle for improvement. This "little innovation" is facilitated by many tools and methods, e.g., Kaizen, Total Productive Maintenance, Statistical Process Control, Reliability-Centered Maintenance, and others. There may be times when we have to do cost-cutting, and we'll provide several criteria for that later, but it should always be our last choice.
Application of Increased Capacity

Of course, you can't simply make all the product possible, overstock on inventories, drive up costs, etc. However, what strategy should you employ? If capacity could be increased, could all of the additional product be sold? At what price? At what volume would prices have to be lowered to sell any incremental volume? What capacity (asset utilization rate, uptime) is needed to ensure competitive position? Should we rationalize certain assets? And so on.

Figure 1-3 provides an easy way to map manufacturing performance with market conditions and quickly judge its impact on financial performance. It plots return on net assets (or gross margin/profits) as a function of uptime (and/or unit costs) for given market prices. For this chart, the logic goes something like this. For a given plant, you could determine what your current asset utilization rate or uptime is. For that uptime, and when combined with current operating costs, you could also determine what your current unit cost of production is for a given product set. With a large number of products this may get a little more difficult, but some companies use the concept of equivalent product units (EUs) for this purpose. For a given unit cost, and in a given market condition (price), you could also determine gross profit and subsequently return on net assets (RoNA).
Figure 1-3. RoNA, market price, uptime chart.
In fact, for a family of prices you could determine the uptime required to achieve a given RoNA.

Beta International's new CEO, Bob Neurath, has recently completed a review of a benchmarking effort, centered on Beta's financial and manufacturing performance, and has concluded that Beta is resoundingly average. While there are a few pockets of excellence, over time a complacent culture has evolved within the company, where mediocrity is the standard, and where only a few are substantially above average. This evolution has only been exacerbated by the fact that when problems have arisen, the typical response has been to "engineer a solution" (and spend more capital), rather than stepping back to determine whether or not best practice has been applied and best performance has been expected. After all, "They've been around for decades and have been fairly profitable. Sure they've got some problems, but doesn't everyone? They're a pretty good company." Being "pretty good" and presuming the future is secure because the past has been successful are the beginning of the end for many companies. And at Beta this has become unhealthy, particularly in light of the increasing intensity of competition and other ills of the company.

Further, increasing global competition represents both threat and opportunity: threat for those who are complacent, but opportunity for those who can aggressively capture those new markets. In any event, Bob Neurath believes these issues represent opportunities, not problems, and that Beta International must view the global situation as opportunity. The challenge then is to establish new standards for performance and behavior. Beta must be the low-cost producer, among other factors, to ensure market position and prosperity.
Beta’s Beaver Creek Plant—RoNA vs. Uptime Reviewing Beta’s benchmarking data, we find from Figure 1-3 that Beta’s Beaver Creek plant has been operating at an uptime of 63%, relatively poor, and likely leading to a position of no better than what could be characterized as mid-range—a low-end Company B. Because Beta believes that it can sell every unit of product it can make at Beaver Creek, for purposes of this discussion uptime is defined as that percent of product a plant is making compared with that which it could make under ideal conditions—running 8,760
hours per year at 100% of peak demonstrated sustainable rate, making a 100% quality product. However, it is believed that Beaver Creek could increase uptime from 63% to 77% in 1 year by taking the appropriate steps, and the marketing department has said that all the product could be sold at current market price. It grudgingly notes that it is currently buying product from a competitor to meet customer delivery schedules. The value of this increased output translates into an increase in RoNA from just under 15% to 22% at a market price of $10/unit. After this analysis, the marketing department has also noted that even if market pressures forced the price down to $9/unit to sell additional product, or to construct a long-term alliance with key customers, RoNA still increases to over 18%. Note that Beta does not want to start a price war with its pricing, just improve its financial and marketing position and its options. Further, under a more extreme scenario, RoNA would remain the same even if market price dropped to $7/unit, with a concurrent uptime of 83%. This kind of information is very useful in the thinking process and in creating a common theme for marketing and manufacturing. Chapter 3 describes a process for more fully integrating the marketing and manufacturing strategies.
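As a rough sketch of the Figure 1-3 logic, the relationship can be put in a few lines of Python. The cost, volume, and asset parameters below are invented for illustration (they are not Beaver Creek's actual data), so the output only mimics the shape of the relationship: RoNA rises with uptime and with price.

# Illustrative sketch of the Figure 1-3 logic: return on net assets (RoNA)
# as a function of uptime and market price.  All parameters are invented;
# only the shape of the relationship is meant to resemble the discussion above.

def rona(uptime, price, ideal_units=1_000_000, variable_cost=2.0,
         fixed_cost=3_000_000, net_assets=18_000_000):
    """RoNA for a given uptime (0-1) and market price ($/unit)."""
    units = ideal_units * uptime                      # output scales with uptime
    gross_margin = units * (price - variable_cost) - fixed_cost
    return gross_margin / net_assets

for price in (10.0, 9.0, 7.0):
    for uptime in (0.63, 0.77, 0.83):
        print(f"price ${price:4.1f}/unit, uptime {uptime:.0%}: "
              f"RoNA {rona(uptime, price):6.1%}")

A table like this, built with a plant's real costs and net assets, is essentially what Figure 1-3 plots as a family of curves.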
A Model for Becoming the Low-Cost Producer

Bob Neurath has concluded that all Beta's plants must develop this type of information, which will in turn be used in creating a long-term business strategy that links marketing and manufacturing into an understanding of the sensitivities for a given level of performance. Indeed, the data indicate that uptime and RoNA are mediocre at best. Further, once understood, this information can be used to position the company strategically with certain key customers in key markets. One marketing strategy is to strategically position with key customers and offer to reduce prices modestly over the coming years, in return for a minimum level of business, recognizing that manufacturing performance must also improve to provide a reliable supply to those customers and concurrently improve RoNA. This approach is expected to create a strategic alliance between Beta and its customers, which will ensure both market share and an adequate return on assets. This information can in turn be used to help Beta understand the profits/RoNA at which it can operate for a given market price and manufacturing level of performance, and then adjust business objectives consistent with
current and strategic capability. Put more simply, Bob Neurath has directed the operating units to:

1. Determine the unit cost of production needed to ensure market share leadership (among other factors).
2. Determine the uptime, or overall equipment effectiveness, required to support this unit cost.
3. Determine any additional fixed or variable cost reductions necessary to achieve this targeted unit cost.
4. Validate the feasibility of achieving the targets.
5. Determine the key steps required to achieve that uptime and those key cost factors.
6. Presume success (presumptive market positioning) in achieving that level of uptime, unit cost of production, and market share. Proceed accordingly, and in parallel. Marketing and plant operations must work as a team to drive the improvement process.
7. As capacity is created, go after more market share, without having to invest additional capital, or alternatively rationalize the poorly performing assets.
8. Allow people within operating units considerable freedom to do the job right and ensure maximum reliability and uptime, and plant operational success.
9. Measure and manage along the way.

Some plants will not be able to achieve a unit cost of better than about 110% to 120% of that of the recognized lowest-cost producer. Current technology, physical limitations, raw material costs, etc., prevent these plants, even in the best of circumstances, from being the lowest-cost producer. This in itself is useful information in that it provides an understanding of what is possible with existing assets, and how the company may need to strategically rethink the business and its long-term fit into corporate objectives. In the short term, it also supports greater understanding and teamwork between marketing and manufacturing.
Further, Bob Neurath has reluctantly accepted that it will probably take 2 to 4 years to achieve substantially improved manufacturing performance, and perhaps even longer to achieve world-class performance at most of Beta’s plants, given their current condition and level of performance. That said, he is pressing very hard for implementing the processes to achieve this level of performance.
Steps to Manufacturing Excellence

But how will Beta's operating managers determine the key next steps, that is, what each operating plant specifically should do?
The first step in the improvement process is to determine where you are as compared with the best. How do you compare with world-class companies—in uptime, in unit cost, in on-time/in-full measures—for example? How do you compare with typical companies? To do this, you must do some benchmarking. This generally creates some cognitive dissonance, or positive tension, because it exposes just how big the gaps are between typical and world-class performance. When properly applied, this knowledge can lead to improved performance. More on that in the next chapter.
Next, you must determine where your losses are as compared with ideal circumstances. This requires a system that allows you to track every hour in which you are not operating at the ideal rate, and assign a reason for the failure to perform at the ideal rate. This can then be used for analysis of key losses and key steps for eliminating those losses and is discussed in the following text. While putting this system in place, if you don’t have one yet, you can identify the major causes of losses using the technique described in Chapter 2.
Finally, you must compare your practices with best practices. Note that this differs from benchmarks, or numbers. Practices are what you do, not how you measure. The best manufacturing companies position themselves to design, buy, store, install, operate, and maintain their assets for maximum uptime and reliability—reliability of production process and reliability of equipment. Best practices in each of these areas are described in detail in Chapters 4–13.
Figure 1-4. The reliability process.
Further, the best plants integrate their manufacturing strategy with their marketing strategy and plan, as described in Chapter 3.

Consider Figure 1-4, which illustrates the reliability process for manufacturing excellence.7 Beta International must position itself to do all these things exceptionally well. Doing poorly in one of these areas introduces defects into the processes. These defects flow down into our manufacturing plants, resulting in lost capacity and higher costs. Further, if a mistake is made upstream in the reliability process for manufacturing excellence, it tends to be compounded downstream as people try to compensate for the mistakes within their organization. Beta must be able to use the knowledge base reflected by best practice and from within the operations and maintenance departments to provide feedback into the design, procurement, storage, and installation efforts, as well as into the operating and maintenance functions, to help minimize the number of defects being created. Beta must use this knowledge base to eliminate these defects and losses in its drive for excellence in manufacturing. Note also that most of the defects are introduced as a result of our design, installation/startup, and operating practices. These data are based on a review of the Uptime/OEE data from several large companies with multiple operations worldwide.
And, while the data are only approximate and may vary from site to site, they highlight the need to ensure excellence in design, installation and startup, and operation if we expect to minimize production losses and maintenance costs. More on the details of exactly how this is done as we continue.

To eliminate defects and losses from ideal, we must understand what ideal is, and measure all our losses against ideal. For example, if, in an ideal world, you could operate your plant 8,760 hours per year (that's all there are, except for leap year, when you get an extra 24 hours), making a 100% quality product, at 100% of your maximum demonstrated sustainable rate, with an optimal product mix (no downtime for anything—changeovers, planned, unplanned, etc.), how much could you make? Certainly this would be ideal, and it is recognized that no one will ever be able to achieve this. The more important issue, however, is how close can we get to this and sustain it? Let's measure our losses from ideal, and then manage them. If we don't have sufficient market demand, why? If we have extensive unplanned mechanical downtime, why? And so on—measure it, manage it, eliminate or minimize the losses from ideal. Moreover, we may also find that some assets must be rationalized; that is, we can produce at market demand requirements with substantially fewer capital assets, necessitating that some be decommissioned or even scrapped. While this may be a painful thought, given the original capital investment, it may be better to stop spending money to retain a nonproductive asset than to continue to spend money ineffectively.
Measuring Losses from Ideal

Figure 1-5 illustrates a general definition for uptime, overall equipment effectiveness (OEE), or asset utilization and the losses related thereto. The terms in Figure 1-5 are defined as follows:6
Asset utilization rate—That percentage of ideal rate at which a plant operates in a given time period. The time period recommended is 8,760 hours per year, but this can be defined as any period, depending on how market losses are treated.
Uptime or overall equipment effectiveness (OEE)—That percentage of ideal rate at which a plant operates in a given time period, plus the time for no market demand losses.
Quality utilization—That percentage of ideal rate at which a plant operates in a given time period, plus market demand losses and changeover and transition losses.
Potential rate utilization—That percentage of ideal rate at which a plant operates in a given time period, plus market demand losses, changeover and transition losses, and quality losses.
Availability—That period of time the plant is available to run at any rate.
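One way to keep the nesting of these definitions straight is to compute each measure from the same set of loss categories. The sketch below is our reading of Figure 1-5, with invented loss hours; only the category names come from the definitions above, and every figure is a fraction of the 8,760-hour ideal year.

# A simple numerical reading of the Figure 1-5 definitions.  The loss hours
# below are hypothetical, expressed as equivalent hours at ideal rate.

IDEAL_HOURS = 8_760

losses = {
    "no_demand_market":      1_800,   # no market demand for product
    "changeover_transition":   400,
    "quality":                 150,   # scrap / rework
    "rate":                    600,   # running below peak sustainable rate
    "scheduled_downtime":      350,
    "unscheduled_downtime":    450,
}

good_hours = IDEAL_HOURS - sum(losses.values())   # quality product at peak rate

asset_utilization = good_hours / IDEAL_HOURS
uptime_oee        = (good_hours + losses["no_demand_market"]) / IDEAL_HOURS
quality_util      = (good_hours + losses["no_demand_market"]
                     + losses["changeover_transition"]) / IDEAL_HOURS
potential_rate    = (good_hours + losses["no_demand_market"]
                     + losses["changeover_transition"] + losses["quality"]) / IDEAL_HOURS
availability      = (IDEAL_HOURS - losses["scheduled_downtime"]
                     - losses["unscheduled_downtime"]) / IDEAL_HOURS

for name, value in [("asset utilization", asset_utilization),
                    ("uptime / OEE", uptime_oee),
                    ("quality utilization", quality_util),
                    ("potential rate utilization", potential_rate),
                    ("availability", availability)]:
    print(f"{name:28s} {value:6.1%}")

With these invented hours the measures nest as the definitions imply: asset utilization is the smallest figure, and each successive measure adds a loss category back in.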
These terms can be confusing, depending upon any individual's experience base—hence, the effort to define them. For example, many people refer to uptime as any time a line or plant is up and running. For them, this does not mean that it is being run in an ideal way, just that it's up and running, regardless of rate, quality, or other losses from ideal. Further, it may be necessary to introduce other categories, depending on the nature of the business and the losses. For example, utility downtime may be a critical category for loss accounting. Production paperwork for the FDA in the food and pharmaceutical industries may represent key losses. The point is to develop a model that accounts for all losses from ideal, and then use that model to manage and minimize those losses, all things considered.

The goal with this methodology is to ensure that "there's no place to hide" any losses from ideal. Once we account for those, then we can truly begin to manage them in an integrated way. Indeed, we may find that some so-called "losses" are entirely appropriate, and ensure optimal performance over the long term. For example, planned maintenance "losses," product changeover "losses," etc., are an integral part of business excellence when properly done.

Beta's continuous plants have adopted the term uptime, while the batch and discrete plants have adopted the term OEE. Other companies and plants use other terms, such as Operating Asset Utilization, Operating Plant Efficiency, or other terms with which they are comfortable. Beta's plants generally apply the concept of measuring asset utilization in both a tactical and strategic sense.
Figure 1-5. Uptime/OEE/asset utilization model.
Tactically, it provides a measure of day-to-day performance and supports focus on key losses and eliminating their root causes. Strategically, it provides a measure of the effective use of capital (or not) and supports analysis of new capital requirements needed for additional market share. Applying the measurement in both modes is essential.

Further, sustainable peak rate is, as the name implies, that maximum rate demonstrated to be sustainable for an extended period. Many of Beta's plants have used their best 3-day performance ever; others have used their best production run ever, etc. The point is to use a rate that represents a serious challenge but is not totally unrealistic. Realism comes when you compare your actual rate with the best you've ever done.

Scheduled and Unscheduled Downtime. These are normally, though incorrectly as we will see, considered the sole responsibility of maintenance. Unscheduled downtime is typically for breakdown or reactive maintenance. Scheduled downtime is typically for preventive maintenance, or PM, as well as corrective and/or planned maintenance. In supporting business excellence measurements, we generally want to eliminate, or at least minimize, unscheduled downtime; and we want to optimize (minimize for a given effect or goal) scheduled downtime using a PM optimization process (Chapters 9 and 10) that combines preventive, predictive, and proactive methods with equipment histories and knowledge of current conditions
to ensure doing only what is necessary, when it is necessary. Subtracting these times yields actual availability.

With this in mind, it has been Beta's experience that much of the unscheduled downtime for equipment maintenance has a root cause associated with poor operational practice, e.g., pump failures being caused by running pumps dry and burning up the seals (a root cause review at one of Beta's plants found 39 of 48 seal failures were due to operational error); by running conveyers without operators routinely adjusting the tracking; by poor operator TLC (tightening, lubricating, and cleaning using so-called TPM principles discussed in Chapter 13); etc. Hence, it is important that operations and maintenance work as a team (see Chapter 15) to identify the root cause of unscheduled downtime, and minimize it. Minimizing scheduled downtime also requires teamwork, particularly in integrating the production and maintenance schedules so that PM can be performed as scheduled, minimizing perturbations in the maintenance planning and scheduling effort, and ensuring proper parts, resources, testing, etc. Properly done, these losses can be minimized, supporting lower manufacturing costs, fewer delivery delays due to equipment failures, and improved inventory planning through increased reliability.

Process Rate Losses. Generally, these are losses that occur when the process is not running in an ideal manner, e.g., production process rates at less than ideal, cycle times beyond the ideal time, yields at less than ideal, etc. These too can be caused by either operational or maintenance errors, e.g., if a machine or piece of equipment has been poorly installed, resulting in the inability to operate it at peak rate, then it's likely that a design or maintenance problem exists; or fouling in a heat exchanger could be due to improper gasket installation by maintenance, poor process control by operations, poor piping design, or some combination. Measuring the losses from the ideal is the first step in motivating the drive to identify the root cause. Subtracting these losses results in potential rate utilization.

Quality Losses. These are usually losses due to product quality not meeting specification, resulting in scrap or rework being necessary. These too can be the result of poor design, operational or maintenance practices, or some combination. In some cases specific measures are put in place for the cost of various drivers of quality nonconformance. However, these quality losses are generally a small fraction of the total losses from ideal. One concept is to use the overall uptime/OEE
and related losses as a measurement of the cost of process quality nonconformance. In any event, subtracting the straight quality losses, such as scrap and rework (a measure of product nonconformance), results in quality utilization.

Changeover/Transition Losses. These include downtime losses, derate losses, and/or product quality losses that occur during a changeover or transition to a new product, both the shutdown losses for the existing product as well as the startup losses for the new product. Minimizing the changeover and transition losses will help minimize manufacturing costs. Changeovers, as we'll see in Chapter 3, can represent a huge potential for losses, particularly when product mix and marketing and manufacturing issues are not well integrated. Subtracting changeover/transition losses results in uptime/OEE, or product utilization rate.

At this point, after subtracting all these losses from ideal, we have reached a measurement of uptime or OEE, as defined in Figure 1-5, that takes credit for the no-demand and market losses. If our market losses are near zero, then uptime/OEE and asset utilization rates are the same. However, if, for example, we only run a 5-day–2-shift operation, then our market losses are quite high—we're only operating 10 of the 21 shifts available for a given asset. Alternatively, we could measure OEE only for the time run—e.g., a 5-day–2-shift operation—excluding market losses from consideration. In other words, we can characterize OEE as the sum of asset utilization plus no demand and market losses, which is more common in continuous process plants; or we can characterize it as the operating efficiency for a given period of time, such as a 5-day–2-shift operation, and ignore market losses. Either model works as an improvement facilitator, but identifying all losses, including market losses, and using asset utilization as a key performance indicator, is more likely to drive senior management in the improvement process. It will also help senior management identify tactical operating performance, as well as strategic capital requirements.

Other Issues Relative to Losses from Ideal. We could include any number of categories in addition to those shown, e.g., break times, wherein machines are shut down during breaks and might not otherwise be down, if the scheduling of resources accommodated not shutting down the equipment. We might also detail losses due to utility interruptions for steam, electricity, or compressed gases that we want to account separately. We might want to break out
unscheduled downtime into categories associated with maintenance errors and operational errors. The model in Figure 1-5 is simply a tool we use to determine our losses from ideal, to see how close we can come to ideal performance as we have defined it, and to manage and minimize those losses. A few points need to be highlighted:

1. We frequently make legitimate business decisions that these losses from ideal are acceptable in light of current business goals, product mix, technology, staffing, union agreements, etc. The point is that we want to measure these losses and then make a thoughtful business decision about their acceptability and reasonableness.

2. It is important to distinguish between industrial standards, such as 150 units per minute on, say, a production line, and ideal standards, such as 200 units per minute in the ideal world. The first is used for production planning and recognizes current performance for management purposes. The second is a measure of ideal performance against which we judge ourselves for making process improvements toward this ideal.

3. As we debottleneck and make our improvements, we often find that equipment can run under ideal circumstances at higher rates, e.g., 220 units per minute. We then adjust our industrial production planning standards, as well as our ideal standards, upward to accommodate the measurement still being used against ideal performance.

Uptime or OEE is a tool, not an end, for performance improvement. We use industrial standard rates for production planning, we use "perfection" rates for OEE measurement as an improvement facilitator, and we use measures such as unit costs of production as a measure of the desired outcome.

No Demand and Market Losses. These generally refer to those losses associated with a lack of market demand. The manufacturing staff has little short-term influence on market demand. However, it is critical to the long-term success of the business to highlight this equipment and process availability for strategic decision making regarding gaining additional market share, new capital requirements, and/or capacity rationalization.
Tactically, uptime or OEE is a measure of the daily operating effectiveness of the manufacturing function. It seeks to minimize all losses from ideal and to reach a point where the losses are acceptable from a business perspective and where production rates are sustainable at a minimal cost of goods sold, all things considered. Strategically, asset utilization represents the opportunity to position corporate assets strategically toward new products and/or greater market share (and minimal unit cost), or to decommission certain assets that are no longer needed and the cost of which is not justified in light of current performance. It is understood that certain lines may not be decommissioned in the short term because of company qualification standards for which a given production line may be uniquely qualified. But this should at least highlight the need to do a trade-off analysis for sound business planning and for not incurring any unnecessary operating and maintenance costs.

Tactically and strategically then, uptime/OEE and asset utilization rates support decisions related to the following, ensuring maximum return on capital:

1. Additional production capability for the sale of new products
2. Additional production capability for the sale of existing products
3. Daily operating performance
4. Contract manufacturing for other companies
5. Reduction in the number of production shifts and costs at a given facility
6. Mothballing of appropriate production lines for appropriate periods to reduce costs

On which processes should we be measuring uptime or OEE? The answer to this question is more problematic. Because of plant configuration, product mix, etc., it may not be possible to measure uptime/OEE or asset utilization on all processes. The logistics may just be too difficult, and/or most staff may not be trained to even think in terms of uptime or OEE. At Beta, the approach being used for those difficult situations is to map key processes and select those that (1)
are bottlenecks to many finished products; (2) have the highest volume; (3) have the highest gross margin contribution; (4) have the largest quality losses; (5) are considered representative of a number of products or processes; or (6) are some combination or other criteria. Using this approach to narrow your focus will help alleviate getting too bogged down in the logistics of the improvement process, and allow focus on the critical production processes and issues.
Sample Calculation of Batch Plant OEE

Continuous plants tend to be relatively straightforward for setting up an uptime measurement. With some minor modification to the model of Figure 1-5, losses can generally be routinely accounted for. Measuring OEE in batch and discrete plants, however, is often more difficult, partly because they generally have more discrete manufacturing steps, some of which feed multiple finished products; partly because they generally produce a larger number of products; and partly because the logistics of doing the calculations are just more complex. Using the technique previously described will help to focus the measurement effort, but to illustrate the method, let's take an example from Beta's Hemp Hill plant, a batch operation, which had been measuring the following performance indicators for one of its key production lines:

1. Availability. Although the term being used was uptime, or any time the line was up and running, availability is a more accurate characterization using our model, because it did not include the effects of rate, quality, or other losses. And, because downtime was accounted separately, this number also excludes scheduled downtime.

2. Downtime. This is the time when the equipment is down unexpectedly, and is synonymous with unscheduled downtime losses in the model.

3. Changeover time. This is the time for product changeovers. In the model this was part of the changeover and transition losses.

4. Clear/clean/material changes. This is the time for clearing, cleaning, and setups. This also could be part of the changeover and transition losses.
5. No demand time. This is the time during which the equipment is not in use for production and can be used to help determine asset utilization rate. It also apparently included scheduled downtime for the following:

   a. Scheduled PM and repairs. This time would normally be characterized as scheduled downtime in the model, and subtracts from no demand time.

   b. Special and planned projects. This time is not considered by the model shown, but could easily be made a part of the model as a separate category.

6. Quality losses. This was broken into several categories, one of which was product quality losses due to equipment failures.

7. Process rate losses. This was available, but there was some confusion about the industrial engineering standards being applied, as compared with ideal rates, that required additional analysis.

8. Other. There were also other losses that did not directly fit into these categories but needed an accounting, e.g., break times, utility failures, startup losses, etc. These losses may be acceptable under the current business structure, but should be identified separately.

Note: Convincing everyone to account for every hour the line was not operating at peak rate, and to assign the related causes for loss accounting, was a difficult process. Excuses were numerous, thwarting the effort to ensure that there's "no place to hide" poor performance. Ultimately, the value in the measurement was seen, and a measurement system was put in place to make sure production lines were being used effectively and that additional information was available to make tactical and strategic decisions for process improvement.

The calculation in Table 1-1 was applied to a line that essentially operated on a 5-day–1-shift basis, with the following information being "normalized" to a 24-hour day. The basis for the calculation included:
Table 1-1. Sample OEE Calculation

                                       Reported    Estimated hours    % of 9.9 hrs    % of 24 hrs
Scheduled downtime                     1 hr        1.0 hrs            10.1%           4.2%
Unscheduled downtime                   16%         1.4 hrs            14.4%           5.8%
Changeover/transition losses           31%         2.8 hrs            27.9%           11.5%
Scheduled breaks                       2/3 hr      0.7 hrs            6.7%            2.8%
No demand                              na          14.1 hrs           na              58.8%
Total nonrunning hours/%               —           20.0 hrs           —               83.1%
Process rate losses                    10%         0.4 hrs            —               1.7%
Quality losses                         1%          0.04 hrs           —               0.2%
Quality production at peak rate
  (~asset utilization rate)            —           3.56 hrs           —               14.9%
Note: This leaves us with 4 hours during the day when we are actually running the process, but only at 90% of its peak sustainable rate and with 99% quality.
1. No demand time was reported at 63%, but this also included scheduled downtime for PM, repairs, and projects. This was reported to average 1 hour/day, although the work was actually performed in much larger blocks of time.

2. Quality losses were reported at 1%.

3. Process rate losses were estimated at about 10%. The peak demonstrated sustainable rate was reported at 200 units per minute, but the line was reported as typically running at 150 units per minute, or a 25% "loss" from ideal. However, this required further review with industrial engineering. A nominal loss of 10% was used.

4. Changeover, cleaning, setup, etc., losses were reported at an average of 31%.

5. Breaks were reported at 40 minutes per run. Note: No one is suggesting that people shouldn't be allowed to take breaks. The model accounts for all time related to all production activity. Once the losses are accounted, then business decisions are made as to their acceptability.
Figure 1-6. Sample uptime/OEE calculation.
Using these data, an average day was broken down into two parts—63% of 24 hours for "no demand" and 37% for production activities, including maintenance, or 15.1 hours for no demand and 8.9 hours for production. However, because an average of 1 hour/day was reported for PM/repair/project activities, we subtracted 1 hour from no demand and added it to production activities, making the average production time (including maintenance) 9.9 hours and no demand 14.1 hours. Nakajima, who developed total productive maintenance (TPM) principles that include the measurement of OEE, might disagree with this approach, because he apparently allows for an indeterminate time period for scheduled maintenance. In this example, we're following a more restrictive application of OEE principles, which accounts for every hour of every day, and improving all business activities associated with the entire production function, including maintenance. Every production function must make these kinds of decisions, and then use the measurement tool for improving its processes, and not necessarily as an end in itself. This approach gives the calculations shown in Table 1-1, which are depicted graphically in Figure 1-6. Using different scenarios for OEE and asset utilization calculations gives:
OEE (@9.9 hrs) = Availability × Rate × Quality
               = [100% − (10.1% + 14.4% + 27.9% + 6.7%)] × 90% × 99%
               = (100% − 59.1%) × 90% × 99%
               = 36.4%

Asset utilization = 16.7% × 90% × 99% = 14.9%

OEE (@24 hrs) = Market losses + Asset utilization = 58.8% + 14.9% = 73.7%
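The same arithmetic is easy to script. The sketch below follows the calculation above using the Table 1-1 figures; the variable names are ours, and the small differences from the quoted 36.4% and 73.7% come from carrying hours rather than rounded percentages.

# Sketch of the Hemp Hill calculation above (Table 1-1 / Figure 1-6).
# Hours are per "normalized" 24-hour day; the 90% and 99% factors are the
# rate and quality figures used in the text.

DAY_HOURS = 24.0
no_demand_hrs = 14.1                              # after moving 1 hr/day of PM into production time
production_window = DAY_HOURS - no_demand_hrs     # 9.9 hours

downtime_hrs = {
    "scheduled downtime": 1.0,
    "unscheduled downtime": 1.4,
    "changeover/transition": 2.8,
    "scheduled breaks": 0.7,
}
rate_factor = 0.90      # running at 90% of peak sustainable rate
quality_factor = 0.99   # 99% good product

availability = 1.0 - sum(downtime_hrs.values()) / production_window
oee_window = availability * rate_factor * quality_factor            # OEE over the 9.9-hr window
running_hours = production_window - sum(downtime_hrs.values())      # ~4.0 hours actually running
asset_utilization = (running_hours / DAY_HOURS) * rate_factor * quality_factor
oee_24 = asset_utilization + no_demand_hrs / DAY_HOURS              # OEE credited for market losses

print(f"availability        {availability:5.1%}")        # ~40%
print(f"OEE (9.9-hr basis)  {oee_window:5.1%}")           # ~36%
print(f"asset utilization   {asset_utilization:5.1%}")    # ~14.9%
print(f"OEE (24-hr basis)   {oee_24:5.1%}")               # ~74%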
Further, at Beta’s Fleming plant, also a batch operation, which supplies material for the line previously described, some additional difficulty was being experienced in developing an OEE measurement, primarily because the plant used a fermentation process as part of the production process. How do you measure peak sustainable rate on a fermentation process? Rather than do this per se, after reviewing historical production data, Beta found that for the key process, which had already been mapped, a cycle time of some 100 hours (e.g., the average of the three best runs) was an ideal run for the fermentation process. We also found from the data that a peak yield of 92% was achievable. With this information the OEE measurement was remodeled: (Ideal cycle time + Ideal setup / cleanup / PM) Actual yield OEE = ----------------------------------------------------------------------------------------------------------------------------- × ------------------------------- × %quality (Actual cycle time +Actual setup / cleanup / PM) Ideal yield
So, for example, one run of the process yielded the following:

OEE = [(100 + 20) / (160 + 40)] × (85% / 92%) × 100% ≈ 55%

Loss accounting:

Cycle time = 200 − 120 = 80 hours
Yield = 92% − 85% = 7%
In this example, the production process ran at 55% of the ideal. If 85% is considered a "world-class level of performance," then the estimated loss is 30%. This could in turn be assigned a $ value for the production process and product, and a value estimated for the losses from the ideal. Some observations on this method:

1. It requires that maintenance and production work as a team to define more clearly the losses from ideal. For example, the 80 hours could be because of poor practices for setup, cleanup, PM, production, etc., as well as for yield performance. The key here is to identify the losses from ideal, and to work hard to eliminate them. We might also find that in a given circumstance 55% is actually reasonably good.

2. This process could be used to calculate a weighted average for all production using this process stream, and could be combined with other production processes to calculate an aggregate weighted average of manufacturing performance.

3. A quality rate of 100% was assumed in this example, or alternatively that quality was included in the yield determination. In this process the batch was either good, or not, and in the latter case the quality was 0%.

4. The calculation does not include the effects of low (or high) asset utilization. For example, if this process were only used during 50% of the month, then the effective asset utilization rate would be 50% of 55%, or 27.5%. Both measures are useful in that one gives a measure of the effectiveness of the production process itself; the other gives a measure of the business use of available production capacity. Both measure the prospective business opportunity associated with the product being made.
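For the cycle-time form of the measure, a small helper function makes the loss accounting explicit. The sketch below simply restates the Fleming formula with the run data from the example; the function name and structure are ours, not Beta's.

# Sketch of the cycle-time-based OEE used in the Fleming example.
# Ideal figures come from the best historical runs; actuals are per run.

def batch_oee(ideal_cycle, ideal_setup, actual_cycle, actual_setup,
              actual_yield, ideal_yield, quality=1.0):
    """OEE for a batch/fermentation step, per the formula above."""
    time_ratio = (ideal_cycle + ideal_setup) / (actual_cycle + actual_setup)
    yield_ratio = actual_yield / ideal_yield
    return time_ratio * yield_ratio * quality

# The run described above: 100-hr ideal cycle plus 20-hr ideal setup/cleanup/PM,
# 160-hr actual cycle plus 40-hr actual setup, 85% yield against a 92% ideal.
oee = batch_oee(100, 20, 160, 40, actual_yield=0.85, ideal_yield=0.92)
print(f"run OEE            {oee:5.1%}")                         # ~55%

# Loss accounting for the same run.
print(f"cycle-time loss    {(160 + 40) - (100 + 20)} hours")    # 80 hours
print(f"yield loss         {0.92 - 0.85:.0%}")                  # 7%

# If the process stream is only scheduled half the month, the effective
# asset utilization is half the run OEE (cf. the 27.5% point above).
print(f"asset utilization  {0.5 * oee:5.1%}")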
Discussion of Sample Measurement

The previous discussion suggests substantial opportunity for production process improvement, market share improvement, production line rationalization, or some combination. However, the decision-making process is subject to additional discussion, because not all factors that lead to business decisions at Beta or in any given organization are included in this approach. For example, if Beta
decided to rationalize production lines at Hemp Hill (mothballing, decommissioning, etc.), it would also have to consider whether or not the remaining lines were capable and/or qualified under current regulatory, corporate, and engineering requirements to run the products that would otherwise run on the decommissioned line. Qualifications of operators, anticipated products, etc., would also have to be considered.

In the short term Beta used the given information to make tactical decisions about improving production practices. If Beta could eliminate unscheduled downtime, and reduce scheduled downtime for PM and repair (without deleterious impact on the equipment) by half, and if changeover and transition times could be cut in half, then on average production times could be reduced by 33%. Additional improvements could also be made in running the process at peak rate during a production run and in minimizing quality losses. After considerable review, it was finally concluded that about the best OEE achievable under present circumstances for the Hemp Hill plant was 50% to 55%. At the present time, nine people are operating this line per shift. Theoretically then, the same quantity of product could be produced with six people. At a nominal cost of $50K per year per person, this reduces costs per year by $150K. It is understood that theory rarely equals reality, and making linear assumptions is not always valid, because systems tend to be nonlinear, but at least Beta now has a basis for making decisions about production planning and rationalization of existing capacity.

Strategically, Beta will use this information to rationalize production capacity. As certain plants improve performance, production requirements will be transferred there, ensuring maximum performance. If markets materialize as expected, production capacity will be brought to bear using existing capital assets. If markets do not materialize, then certain production facilities will be decommissioned, at least until they are needed again.
A Special Case—Beta’s Dwale Plant At Beta’s Dwale plant, a continuous process plant, it had developed an understanding from a benchmarking effort that 95% uptime was a world-class level of performance for its type of plant. Beta determined its peak demonstrated sustainable production rate based on its output during its best-ever 3-day continuous performance. How-
on further review, Beta found that there were nonlinear variables for fixed and variable costs that would influence its decision about what the optimal targeted uptime should be. Beta found that variable costs in the form of energy and certain feed material increased sharply above about 87% of peak rate, leveling off thereafter, but then increasing again above 95% (see Figure 1-7). Beta also found that maintenance costs increased sharply between about 90% and 95% of peak demonstrated rate, primarily due to fouling and choking of the process. This fouling also affected its ability to keep the process on line, reducing its uptime due to additional maintenance. After some analysis, Beta concluded that its best sustainable performance would come from running the plant between 90% and 95% of peak demonstrated rate. Operating the plant at this rate, Beta felt it could achieve an uptime of 90%, which was considered to be its best achievable and sustainable level. It was just not realistic, or cost effective, to try to run the plant at 100% of its demonstrated rate and still expect to achieve an uptime in the range of 95% without significant additional engineering and capital investment.

Some sites, particularly large, integrated process sites with multiple plants, may have structural constraints imposed upon any given plant. For example, environmental discharge constraints may limit one or more plants' ability to operate at peak rate. Business constraints and/or feed stock supplies from upstream plants may force a downstream plant to operate in a less than optimal mode. For example, at one of Beta's downstream plants, some chemical reactor vessels were operated to take feed stock of one kind, while the other vessels were operated to take feed stock of another kind, a kind of buffer for fluctuations in site operation. However, no one ever reduced the peak demonstrated rate to account for this, or counted this as a loss attributed to the site operation. Structural constraints imposed by business or operating conditions must also be accounted for in the measurement of uptime or OEE.

Many managers take the position that they couldn't sell all that they could make. Whether this is true or not is problematic. For example, if they could reduce their unit costs and improve delivery performance, they might be able to immediately gain market share and sell all they could make. In a strategic sense, those underutilized assets represent opportunities, such as increased business volume and return on net assets; or the opportunity may lie in the rationalization of assets. Hence, the previous model.
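The Dwale trade-off can also be framed as a small search over operating rate: choose the rate that minimizes unit cost once the nonlinear energy, feed, and maintenance penalties are included. The penalty curves in the sketch below are invented stand-ins shaped like those described above (costs rising sharply above about 87% and again above 95%); they are not Beta's data, so the optimum the sketch finds, around 90% of peak rate, is only illustrative of why the 90% to 95% range made sense.

# Illustrative search for the operating rate that minimizes unit cost, in the
# spirit of the Dwale analysis.  The penalty curves are hypothetical stand-ins
# for the energy/feed and fouling-driven maintenance effects described above.

def unit_cost_at(rate):
    """Approximate unit cost ($/unit) at a fraction of peak demonstrated rate."""
    base_variable = 5.00                 # $/unit, flat portion of variable cost
    fixed_per_year = 40_000_000          # $, spread over the units produced
    ideal_units = 10_000_000             # units/yr at 100% of peak rate and 100% uptime

    # Invented nonlinear penalties: energy/feed rises above ~87% and again above
    # ~95%; maintenance (fouling/choking) rises above ~90% and also erodes uptime.
    energy_penalty = 0.8 * max(0.0, rate - 0.87) + 4.0 * max(0.0, rate - 0.95)
    maint_penalty = 6.0 * max(0.0, rate - 0.90)
    uptime = 0.90 - 0.5 * max(0.0, rate - 0.95)

    units = ideal_units * rate * uptime
    total = units * (base_variable + energy_penalty + maint_penalty) + fixed_per_year
    return total / units

best_cost, best_rate = min((unit_cost_at(r / 100), r) for r in range(80, 101))
for r in (80, 85, 87, 90, 93, 95, 100):
    print(f"rate {r:3d}%  unit cost ${unit_cost_at(r / 100):5.2f}")
print(f"lowest unit cost at about {best_rate}% of peak demonstrated rate")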
Figure 1-7. Optimal uptime determination.

Hiding behind Excess Capacity (the Hidden Plant)
Many managers use excess capacity as an excuse for poor or sloppy practice. For example, if a plant were operating at 60% asset utilization rate, when world class is 90%, the plant may have another 30% of sales opportunity at a lower unit cost of production; or it could potentially make the product for substantially lower costs, improving profits, and without incremental capital investment. Some managers have been known to use this 30% “reserve” to make sure they can meet delivery schedule, incurring additional costs associated with work-in-process buffer stocks, higher levels of staffing, overtime, scrap product, higher operating and maintenance costs, etc. Using excess capacity as a “reserve” is the wrong way to do business. It almost always ensures no better than mediocre performance, and sometimes worse. Costs incurred are excessive, often extraordinary. Excess capacity often masks poor practices, which increases
unit costs, reduces profits and market share, and can result in capital expansion for needed capacity, rather than optimal use of existing assets. These things make companies less competitive, and can ultimately lead to their demise.

For example, suppose a company is operating a 5-day–3-shift operation. With this in mind, our asset utilization rate can be no better than 15 ÷ 21 = 71%. Further suppose that when we measure our uptime or OEE during the 15-shift period, we find that we're operating at 57%, a fairly typical rate when all losses from ideal are accounted. If we could get this rate to 85% or better, this would represent a 50% improvement. We would find that we could produce the same product during a 5-day–2-shift operation. In the short term we save a great deal of money. In the long term, we have incremental capacity without incremental capital investment. When excess capacity does exist, it should be viewed as opportunity waiting and used strategically to increase market share, and over the long haul to maximize return on assets and profits. It should not be used as an excuse for sloppy practice.
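The shift arithmetic above generalizes easily: the asset utilization ceiling for any shift pattern is the operated shifts divided by 21, and the OEE measured within those shifts scales it down further. A short sketch, using the 57% and 85% figures from the example:

# Sketch of the "hidden plant" arithmetic: how shift pattern and OEE combine,
# and what an OEE improvement is worth in shifts.  Figures follow the example
# in the text (5-day-3-shift operation, OEE of 57% vs. 85%).

SHIFTS_PER_WEEK = 21   # 7 days x 3 shifts

def asset_utilization(shifts_operated, oee_during_operation):
    """Asset utilization over the full week for a given shift pattern and OEE."""
    return (shifts_operated / SHIFTS_PER_WEEK) * oee_during_operation

current = asset_utilization(15, 0.57)    # 5-day-3-shift at 57% OEE
improved = asset_utilization(15, 0.85)   # same pattern at 85% OEE
shifts_needed = 15 * 0.57 / 0.85         # shifts to match current output at 85% OEE

print(f"ceiling for 15 of 21 shifts : {15 / SHIFTS_PER_WEEK:5.1%}")    # ~71%
print(f"current asset utilization   : {current:5.1%}")                 # ~41%
print(f"at 85% OEE, same 15 shifts  : {improved:5.1%}")                # ~61%
print(f"shifts to match current output at 85% OEE: {shifts_needed:.1f}")  # ~10, i.e., 5-day-2-shift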
“How much does downtime cost you?”
Maintenance Manager:
“Nothing, we have eight lines; we only need five to meet current market demand, so when one line shuts down, we start up one of the spares, and keep on producing. So it doesn’t cost us anything.”
Benchmarking Consultant:
“So you’re carrying sufficient staff to operate and maintain eight lines, or at least more than five?”
Maintenance Manager:
Pausing, “Well, probably more than five.”
Benchmarking Consultant:
“How about spare parts? Are they sufficient for more than five lines?”
Maintenance Manager:
“Well, probably more than five.”
35
36
MAKING COMMON SENSE COMMON PRACTICE
Benchmarking Consultant:
“How about overtime? Do you spend considerable overtime to make repairs or switch over the lines? These failures don’t usually occur at a convenient time do they?”
Maintenance Manager:
“Well, no they don’t, and we probably spend more overtime than we should.”
Benchmarking Consultant:
“How about collateral damage? Do you ever have any because of running a machine to failure, rather than catching it long before it gets so bad it becomes a crisis?”
Maintenance Manager:
“Well, probably . . . well, yes we have, on line 3 last week we had a heck of a mess because of a catastrophic failure.”
Benchmarking Consultant:
“How about scrapped product, or reduced quality? Does that occur when you have to switch lines and lose product in process?”
Maintenance Manager:
“Well . . . almost always.”
Benchmarking Consultant:
“How about interprocess flows? Does the downtime on the failed line result in delays or disruptions in upstream or downstream processes, requiring considerable effort to realign the plant?”
Maintenance Manager:
“Well . . . usually.” (At this point, the wind was clearly out of his sails. He was even a bit agitated.)
Benchmarking Consultant:
“How about management attention? Don’t events that require a switch-over result in distracting management attention away from other issues that could add more value to the corporation?”
Maintenance Manager:
“Probably.”
MANUFACTURING AND BUSINESS EXCELLENCE
Benchmarking Consultant:
“Look, I’m not trying to be hard on you, or anyone for that matter. I try to help people see things from a different perspective, to help them identify opportunities for improvement, so they can make more money.”
Maintenance Manager:
“I think we have several opportunities here, don’t we?”
Benchmarking Consultant:
“It sure seems that way, but we’ve just taken the first step in capitalizing on those opportunities. Let’s look at how improved processes and practices can help eliminate some of these costs.” (We went on to review a strategy to help change the plant’s culture from being repair focused to being reliability focused.)
If we have eight lines, but only need five, why not operate the plant as if we have only five, using staff, spares, overtime, etc., at world-class levels of performance to maximize profitability for those five lines? Alternatively, why not operate a production line at a world-class level using a 5-day–2-shift operation vs. a 5-day–3-shift operation? What about the “excess capacity”? Strategically, this represents increased market share and improved profits (not excess capacity per se; or, it represents the opportunity to rationalize assets). Once we get the plant to operate reliably at superior levels of performance for five lines—striving to be the low-cost producer in the market—then we can position our products to increase market share, while still making a healthy profit, expanding our marketing and distribution efforts, and bringing those additional lines on as we need the product for our newly gained customers. In doing so, we maximize return on assets, profits, and share price.
Differences between Batch and Continuous Manufacturers Many have noted that there are substantial differences between the way in which batch and continuous manufacturing plants are operated, most observing that you really cannot expect a batch or discrete plant to be operated like a continuous plant. Agreed. However, these differences should not be used as an excuse for sloppy practice. One of the fundamental questions still remains—What is ideal performance, and how far is my plant from ideal? This is regardless of the type of plant. Data collected from some 300 manufacturing plants—batch/discrete, continuous, and combination—show that the differences in performance, as viewed by the people who operate the plants, are striking. As described in the next chapter on benchmarking, people who work in continuous manufacturing plants report that their practices, when compared with a best practices standard (one that results in higher uptimes, lower unit costs, and better safety), are almost always better than the practices in a typical batch or discrete plant. The reasons for this are uncertain, but it is theorized that the batch manufacturers always believe they can make up for lost production with the next batch, not recognizing that time lost is lost forever, whereas the continuous manufacturers’ mistakes are almost always highly visible, principally because of the very nature of the plant—there are fewer places to hide when a continuous plant suffers lost production. When a batch plant does, this appears not to be the case. More detail on this issue, as well as key success factors and the differences between batch and discrete plants, is provided in Chapter 2 and Appendix A.
Lean Manufacturing, Six Sigma, and Focused Factories In recent years, lean manufacturing, focused and agile factories, and six sigma methods have come to the forefront as models for manufacturing excellence. The following text is a discussion of these principles or methods.
Lean Manufacturing Womack, Jones, and Roos’s book, The Machine That Changed the World, The Story of Lean Production,8 is an interesting discussion of the history of what came to be known as lean manufacturing, but it is scant on the details of the methods for achieving it. More recently, in Running Today’s Factory, A Proven Strategy for Lean Manufacturing,9 Charles Standard and Dale Davis have provided a clear methodology, which integrates lean manufacturing concepts and factory physics principles in a very practical way, and whose points of emphasis are: (1) reducing process variability; (2) reducing system cycle times; (3) minimizing delay times between processes; and, above all, (4) eliminating waste in the manufacturing process and supply chain, from receipt of order to delivery of product and payment. Indeed, measuring system cycle time, from receipt of order to delivery, and minimizing delay times between processes, are essential for discrete manufacturers to ensure their competitive position; and, as they point out, are much more important than the “efficiency” of any given machine. Consider Figure 1-8, which characterizes the parameters for achieving lean manufacturing. Production flow in the manufacturing process is such that raw material is fed into A, which feeds B, which feeds C, which feeds D, which creates our finished product.
Figure 1-8. Reliability supports lean manufacturing and reduces variability, delay times, and inventory needs.
Each of the production steps has considerable variability in its daily quality production output. There are also delay times between processes, inducing further variability in system cycle time. And, variability tends to increase with each step of the production process because of upstream volatility. Finally, the demand flow is also highly variable. Matching demand flow with production flow while achieving on-time delivery to the customer becomes a difficult and often complex task in this environment. The way this variability is typically managed is to keep lots of inventory—raw materials, work in process (WIP), and finished product—on hand. So, if a process or equipment fails, you can still generally meet delivery requirements. This is clearly not the best way to run a business. We incur lots of waste, and costs, both in terms of excess inventory and in terms of additional operating and maintenance costs. Our production system is not in good control, though we can calculate the inventory required at each step using certain statistical formulas and imputing the variability of production and demand flow, our on-time delivery requirements, and specific confidence limits (a small, illustrative calculation of this kind appears later in this section). Improving process and equipment reliability will allow us to reduce our finished product inventory and WIP requirements, simultaneously reduce our waste and operating costs, and increase our confidence in high on-time delivery to our customers. One of the major issues that needs to be addressed is the understanding (or lack thereof) that lean manufacturing is not about head count reduction. Lower head count per unit of output is a consequence of applying lean principles. And yet it seems that many managers, even at very high levels, still believe that reducing head count will result in a lean company. There is a huge body of evidence that indicates that head count reduction will not provide sustained improvement. An analogy that may help is to recognize that “lean” and “fit” are two very different concepts. Olympic athletes are “fit,” which would typically give them a “lean” appearance. Anorexic people are “lean,” but hardly “fit,” and we certainly don’t want an anorexic company. So, lean manufacturing is more about the concept of being lean through being fit, not the other way around. Moreover, if you reduce the resources available to your system without changing its basic design, then system performance will likely decline. Lean manufacturing is more a philosophy or condition than it is a process. For example, when you’re lean, you:
Have minimal inventory, WIP, and raw material
Have high on-time delivery performance
Are typically operating in a “pull” mode—you only make enough to fill near-term demand
Make more, smaller batches and have fewer long runs (a bit counter-intuitive)
And you use techniques such as:
One-piece flow, quick changeover, takt time,* and mistake proofing
Measuring system cycle times and delay times, and managing them more effectively
Applying reliability principles and six sigma to minimize process variability
If you don’t have reliable processes and equipment, it will be very difficult to be lean. You need that extra “stuff”—buffer stocks, spare parts, spare equipment, etc., to manage your unreliability and still meet your customers’ demands. It’s hard to be lean when you’re broken. So, get the basics of good reliability practices in place to ensure good lean manufacturing performance.
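As a concrete illustration of the earlier point about sizing inventory statistically, here is a minimal sketch. It is not the authors’ formula; the function, the numbers, and the normal-distribution assumption are illustrative only. It makes the same point as the text: the more variable (less reliable) the supplying process, the more buffer stock you must carry to protect on-time delivery.

```python
# Illustrative sketch only (assumed formula and numbers, not from the book):
# buffer stock needed to cover demand during a variable replenishment lead time,
# at a chosen service level, assuming normally distributed variation.
from math import sqrt
from statistics import NormalDist

def safety_stock(avg_demand, sd_demand, avg_lead_time, sd_lead_time, service_level):
    z = NormalDist().inv_cdf(service_level)           # confidence limit
    sigma = sqrt(avg_lead_time * sd_demand**2 +        # demand variability over lead time
                 avg_demand**2 * sd_lead_time**2)      # plus lead-time (reliability) variability
    return z * sigma

# Same demand and service level; only the reliability of the supplying step changes.
print(round(safety_stock(100, 20, 5, 1.0, 0.98)))     # unreliable step: larger buffer
print(round(safety_stock(100, 20, 5, 0.2, 0.98)))     # reliable step: smaller buffer
```

Under these assumed numbers, the more reliable step needs less than half the buffer of the unreliable one, which is the financial case for reliability made throughout this section.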
Figure 1-9. Causes of losses from ideal production.
* Takt time is a constant pace at which to operate the plant to exactly meet market demand.
Further, as we’ve seen, reliability is not just about maintenance. Superb reliability requires excellence in design, procurement, operations, and maintenance. MCP Consulting, in its work with the Department of Trade and Industry in the United Kingdom, observed that some 40% to 50% of equipment breakdowns were related to poor operating practices, 30% to 40% were related to poor equipment design or condition, and 10% to 30% were related to poor maintenance practices. As shown in Figure 1-9, several Fortune 500 manufacturers, including several of Beta’s plants, have reported that some two-thirds of all production losses, as measured against ideal, are not related to equipment downtime, but rather to rate and quality losses, changeover losses, no demand losses, lack of raw material, poor utility supply, etc.; the remaining one-third is due to equipment downtime as a result of poor operating or design practices, not poor maintenance. This leaves maintenance in control of only about 10% of the production losses from ideal. Clearly, if you want high performance, the first order of business is to ensure excellence in operating practices.
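The arithmetic behind that last figure is worth making explicit. The split below is a rough back-of-the-envelope reading of the percentages quoted above, not additional data from Beta.

```python
# Rough check of the ~10% figure, using the shares quoted in the text
# (the specific values below are assumptions read off those ranges, not Beta data).
downtime_share_of_losses = 1 / 3        # about one-third of losses from ideal are equipment downtime
maintenance_share_of_downtime = 0.30    # upper end of the 10-30% maintenance-related range
print(f"Maintenance controls roughly "
      f"{downtime_share_of_losses * maintenance_share_of_downtime:.0%} of losses from ideal")
```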
Six Sigma In their book The Six Sigma Way,10 Pande and colleagues provide the following model for applying six sigma principles. Six sigma is a statistical term indicating that a process produces fewer than 3.4 defects per million for a given product or process specification. However, six sigma has become a methodology for reducing the variability of processes such that the result is greater quality and consistency. It stresses simultaneously achieving seemingly contrary objectives:
Being stable and innovative
Seeing the big picture and the details
Being creative and rational
Similar to Deming’s “plan, do, check, act,” it applies the DMAIC model—define, measure, analyze, improve, control—to core processes and key customers.
Principal themes include:
Focus on customer satisfaction/success
Data/fact-driven management
Process management and improvement
Proactive management
Boundaryless collaboration
Drive for perfection, but tolerance for failure
Principal tools/methods include:
Continuous improvement
Process management
Process design/redesign
Customer feedback
Creative thinking
Analysis of variance
Balanced scorecards
Design of experiments
Statistical process control
Improvement projects
One of the cautions regarding the use of six sigma is—don’t start with a “nerdy” monologue on SPC or statistical analysis—this damages its credibility, particularly with the shop floor. According to David Burns of SIRF Round Tables in Australia, the first step in applying six sigma principles is to address the obvious and make sure good basic practices are in place, the second step is to standardize your processes and, finally, the third step is to perfect your processes. This is depicted graphically in Figure 1-10. Of course, applying the tools described previously in doing this is a good approach. The point of emphasis here, however, is that achieving step one— getting good basic practices in place first, including excellence in maintenance and reliability, is essential for success.
Figure 1-10. Journey to six sigma.
Notice that in Figure 1-10, with three sigma performance your processes are within specification 93.3% of the time, whereas with four sigma your performance jumps to 99.4%. Most plants seem to operate in the three sigma range, more or less. Significant competitive advantage comes from going from three sigma to four sigma. We achieve that by fixing the obvious and doing the basics really well.
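For readers who want to reproduce those yield figures, the short sketch below assumes the conventional 1.5-sigma long-term shift used in most six sigma tables (the text does not state which convention Figure 1-10 uses); it also recovers the 3.4 defects-per-million figure quoted earlier.

```python
# Sketch assuming the conventional 1.5-sigma long-term shift (an assumption,
# not stated in the text): yield and defects per million at each sigma level.
from math import erf, sqrt

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def long_term_yield(sigma_level, shift=1.5):
    return phi(sigma_level - shift)

for s in (3, 4, 6):
    y = long_term_yield(s)
    print(f"{s} sigma: ~{y:.2%} within specification, "
          f"~{(1 - y) * 1e6:,.1f} defects per million")
```

Run as written, this gives roughly 93.3% at three sigma, 99.4% at four sigma, and 3.4 defects per million at six sigma, matching the figures in the text.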
Focused Factories The concept of focused factories has also been offered as a strategy for ensuring manufacturing excellence.11 While the concept of focused factories has been well received and has shown considerable improvement at some of Beta’s plants, it should be emphasized that this has typically been at its batch and discrete plants, with the concepts being more difficult to apply at its continuous plants. Further, even at its batch and discrete plants, what has often resulted within the maintenance function is that “the firefighters have only moved closer to the fires, and little is done to eliminate the cause of the fires.” Improved equipment reliability and performance rarely result from this approach, nor is it expected to adequately support related strategies such as agile and lean manufacturing, which require high
reliability and performance. Again, It’s hard to be agile or lean when you’re broken. Further, improving production flows and minimizing floor space can lead to substantial production improvement, but has also led to inadequate pull space, lay-down area, etc., for maintaining equipment, at times resulting in increased costs and longer downtime. Certainly, plant layout and production flow are critical, but they must fully consider reliability, maintenance, and operating requirements. The view by some focused factory advocates that having backup machinery and equipment is a solution to capacity and production problems does not appear to be well founded. Additional equipment is not normally the solution to poor maintenance practice, as it increases the need for additional capital, as well as operating and maintenance expense, etc. Getting to the root cause of equipment failures and improving maintenance practices is a much better approach. Beta has found that properly applied, maintenance management systems are not a burden, but a key tool for equipment reliability, and that work orders are an essential element for planning, managing, and generally minimizing maintenance work requirements. Beta has also found that most often a hybrid organization of some decentralized (focused resources) and centralized resources works best at most of its operations.
Summary As best practices are implemented at Beta International, fewer people will be required to achieve the same production goals. However, Beta decided to show loyalty to its employees and to work hard to avoid downsizing, and has pledged that it will use the following techniques to minimize that possibility: 1. Not replacing workers lost through attrition: resignations, retirement, etc. 2. Reduced contract labor, using employees even when retraining is required 3. Reduced overtime—a smaller paycheck is better than no job 4. Voluntary reductions in staffing 5. Termination of poor performers
6. Reallocation of employees for new, or different, jobs, including any retraining 7. Finally, and by no means least, reallocation of resources to handle expanded business volume World-class business performance requires excellence in and the integration of marketing, manufacturing, and R&D, and that we know what excellence is by measuring it; it requires that we know what it means to be the low-cost producer of our products, and how to achieve and sustain that position; it requires that we understand how our manufacturing performance relates to our return on net assets and general corporate performance; it requires that we understand our losses from ideal and manage them; it requires that we put in place a reliability process for manufacturing that ensures that we design, buy, store, install, operate, and maintain our manufacturing assets in a superb way. Finally, it requires that we integrate our marketing and manufacturing strategy in a comprehensive way, focused on world-class performance. Each of us sees light (and information) according to where he or she stands relative to the prism. Beta, like most major manufacturing companies, tends to operate each business function as if each one were the only “color” in the rainbow, and it works hard to optimize each function, missing the continuous, interrelated nature of all the issues. Because of this, Beta only “sees” things from one distorted perspective. Although it has tried to optimize its processes, these processes always optimized at the suboptimal level. If Beta is ever to find the proverbial gold at the end of the business rainbow, it must recognize that this continuum of issues must be fully integrated and each issue recognized for its contribution and interrelationship. Bob Neurath must instill a common sense of purpose for world-class performance among all within Beta International—world-class performance and the gold at the end of the business rainbow. This involves focusing on reliability and capacity improvement for the existing assets; understanding and integrating manufacturing with markets and their sensitivities; measurement of uptime and losses from ideal; and understanding and applying best practices in an integrated way in the design, buy, store, install, operate, and maintain continuum for manufacturing excellence.
References
1. Skinner, W. "Manufacturing—The Missing Link in Corporate Strategy," Harvard Business Review, May/June 1969.
2. Hayes, R. H., and Wheelwright, S. C. Restoring Our Competitive Edge: Competing through Manufacturing, The Free Press, New York, 1984.
3. Hayes, R. H., and Pisano, G. P. "Beyond World Class: The New Manufacturing Strategy," Harvard Business Review, January/February 1994.
4. Hill, T. Manufacturing Strategy—Strategic Management of the Manufacturing Function, Macmillan Press Ltd., New York, 1993.
5. Turner, S. "An Investigation of the Explicit and Implicit Manufacturing Strategies of a South African Chemical Company," Master's thesis, The Graduate School of Business, University of Cape Town, South Africa, December 1994.
6. Flynn, V. J. "The Evolution of Maintenance Excellence in DuPont," a presentation sponsored by E. I. duPont de Nemours & Co., Inc., by arrangement with the Strategic Industry Research Foundation, Melbourne, Australia, August 1996.
7. Fraser, A. Reliable Manufacturing Associates, Warrington, Cheshire, England, 1997.
8. Womack, J. P., Jones, D. T., and Roos, D. The Machine That Changed the World, The Story of Lean Production, HarperCollins Publishers, New York, 1991.
9. Standard, C., and Davis, D. Running Today's Factory, A Proven Strategy for Lean Manufacturing, Hanser Gardner Publications, 1999.
10. Pande, P. S., Neuman, R. P., and Cavanaugh, R. R. The Six Sigma Way, McGraw-Hill, New York, 2000.
11. Harmon, R. L. Reinventing the Factory II, The Free Press, New York, 1992.
2. Benchmarks, Bottlenecks, and Best Practices
For the man who knows not what harbor he sails, no wind is the right wind. (Seneca)
Benchmarking, according to Jack Grayson, involves "seeking out another organization that does a process better than yours and learning from them, adapting and improving your own process."1 Benchmarks, on the other hand, have come to be recognized as those specific measures that reflect a best-in-class standard. Best practices, as the name implies, are those practices best for a given process, environment, etc., and that allow a company to achieve a benchmark level of performance in a given category. Benchmarking, then, involves emulating the best practices of others to improve your processes so that you can achieve a superior level of performance as measured against benchmarks, or best in class. Benchmarking is a continuous process, requiring constant attention to the latest improvement opportunities and the achievements of others. Further, as Joseph Juran said, "If you don't measure it, you don't manage it." So benchmarks represent those measures we choose to manage to improve performance. This chapter explores Beta's use of benchmarking, which revealed it was thoroughly average, and represented the beginning of its journey to understanding and implementing the best practices discussed later. We will also review the dynamic nature of bottlenecks, which
can help prioritize resources for applying best practices and minimizing key losses, ultimately leading to improved performance. But first, let’s review the benchmarking process itself.
Benchmarking—Finding Benchmarks and Best Performers The first step in benchmarking is to define those processes for which benchmark metrics are desired, and which are believed to reflect the performance objectives of the company. While this may seem simple, it can often be quite complicated. You must first answer the questions, "What processes are of concern, e.g., those that support business goals and strategy?" and "What measures best reflect my company's performance for those processes?" Then you must answer the question, "What measures best reflect my division or department's performance, which, in turn, supports corporate objectives?" and so on. These decisions will vary from corporation to corporation and from industry to industry, but let's suppose we have an overall objective to improve corporate financial performance and, in particular, manufacturing performance. Some suggestions at the strategic level are as follows:
Return on net assets
Return on equity
Earnings growth
Profit as a % of sales
Market share
Safety record
Beta ultimately chose as corporate key performance indicators (KPIs) the principal measures of return on net assets, earnings growth, and safety. At the strategic operational level, we might see:
Return on net assets
Product unit cost
Percent plant capacity utilization (or uptime, overall equipment effectiveness)
On-time/in-full deliveries
Inventory turns
First-pass/first-quality product
Training time per employee
OSHA accidents per 200,000 labor hours
Customer satisfaction
At the business unit level, Beta chose as its KPIs return on net assets, uptime, unit cost of production, on-time/in-full, customer satisfaction, and safety. Certainly, there are other measures within a business unit that must be monitored and used to support the business. Indeed, it is likely that any given business would choose different measures, or measures unique to its industry, and you are encouraged to develop your own analysis in this regard. In any event, these were chosen as encompassing Beta's other supporting and subordinate measures. At the operational and departmental level, and supporting key business measures, were a variety of measures:
Uptime, or OEE
First-pass quality yield, cycle times, defect rate, equipment/ process capability (statistical)
Maintenance cost as a % of plant replacement value
Percent downtime, planned and unplanned
O&M cost as a % of total costs and per unit of product
Percent overtime
Average life, mean time between repairs for critical machinery
Beta’s principal plant measures were generally uptime/OEE (and losses thereto per Figure 1-5), maintenance cost as a percent of plant replacement value, unit cost of production, inventory turns, on-time/ in-full deliveries, and safety. Each department understood this, and
chose other measures within their department that supported the plant’s measures. Within a given industry, we might see industry-specific metrics, such as cost per equivalent distillate capacity (refining), equivalent availability or forced outage rate (electric utility), labor hours per vehicle (automotive), and so on. No industry-specific measures were used for Beta, but they may be used in the future at specific plants. A more detailed listing of performance measures that may facilitate additional consideration and discussion in the selection process is provided in reference 2. The key for Beta, as it is for most companies, is to select those measures that reflect the performance objectives of the corporation, or for a given process within the corporation, and then subsequently determine how the company compares with other companies, generally, though not necessarily, in their industry, especially those in the upper quartile. In general, no more than 10 key metrics, and preferably 5 or so, should be selected in each category of interest, and they should relate to each other in a supportive, integrated relationship. Before going to the next step, a strong word of caution is advised at this point. Benchmarking is part of beginning the process for continuous change; it is not a conclusion. Too many executives, after going through a benchmarking exercise, forget that the true definition of benchmarking is finding another organization that does a process better than yours (not just doing the numbers), and essentially emulating what it does. Being good executives and decision makers, they tend to get the benchmarks—the numbers, not the practices or processes—and then make arbitrary decisions on the numbers before the required learning has occurred and improved processes are deployed. Costs are a consequence of your business system design. If you reduce resources without changing your underlying design, system performance will likely deteriorate. Caution must be exercised about arbitrary cost reduction. For example, a world-class measure of inventory turns on maintenance stores at a manufacturing plant is typically viewed as greater than 2. A typical organization has a turns ratio on spare parts of 1, or sometimes even less. Knowing this, the good executive might then decree that half of the spare parts inventory should be eliminated, without recognizing that the organization is a highly “reactive” operation with high breakdown losses, and needs additional spare parts to support this mode of operation. The executive has failed to
recognize that superior practices in operations, maintenance, and stores resulted in high turns, not arbitrary decisions. Benchmark data tend to have a very high level of “scatter,” and discerning what is “best” for a given organization in a given business situation can be an enormous effort. Further, a given plant may have inherent design or operational capability that either enhances or limits its capability to achieve a world-class level of performance for a given measure. Finally, other corporate issues such as product mix (Chapter 3) can have a very big impact on operational performance, even in an exceptionally well-run manufacturing plant. The key is to use benchmark data as the beginning of a process for change (which never stops), and to use the information to make changes to your practices and business decisions that will improve the performance indicators. It is not to be used for arbitrary decision making or cost-cutting by executives. This was and continues to be a difficult issue for the executives at Beta, as it often is for most companies that are seeking rapid improvement. The key is to understand processes that lead to superior performance, not to arbitrarily cut budgets with the hope that everything will turn out all right.
Making the Comparison Comparing your company to other companies is the next step. Having selected the key metrics, you must now make sure that you are properly measuring these metrics within your company, that you understand the basis for the numbers being generated, and that you can equitably apply the same rationale to other information received. Now, to develop comparable data from other companies may be somewhat difficult. You may have several choices:3 1. Seek publicly available information through research. 2. Set up an internal benchmarking group that will survey multiple plants within a corporation, ensuring fair and equitable treatment of the data. Benchmarks will then be defined in the context of the corporation. 3. Seek the assistance of an outside company to survey multiple plants, usually including your corporation, but also expanded to other companies within the company’s industry.
4. Seek the assistance of an outside company to survey multiple plants, many of which may be outside your industry, in related, or even unrelated fields. 5. Some combination of the above, or other alternative. If you choose to do benchmarking, a couple of points are worth mentioning. It is recommended that a standard such as the Benchmarking Code of Conduct3 be followed. This ensures that issues related to confidentiality, fairness, etc., are followed. For benchmarking outside the company, particularly within your industry, it is recommended that you use an outside firm to help avoid any problems with statutes related to unfair trade practices, price fixing, etc. Properly used, benchmarking is an excellent tool to ensure improved performance. Benchmarking has become associated with “marks,” as opposed to finding someone who does something better than you do and doing what this person does, which may more accurately be called best practices. Notwithstanding the semantics, benchmarking and application of best practices are powerful tools, not solutions, to help improve operational and financial performance. The ultimate benchmark is consistently making money, at substantially better than industry average, over the long term. Applying benchmarking and best practices can help ensure that position. Let’s consider some prospective world-class levels of performance, e.g., the “benchmarks” developed by Beta International shown in Table 2-1. Note these are nominal in that they represent a composite of several sources, and specific “benchmarks” may vary depending on plant design, process design, industry, product mix, business objectives, etc., and may change with time as companies improve. However, they do represent a good cross-section from Beta’s experience. Be cautious about using benchmarks. First, there is considerable scatter in the data used in benchmarking. Second, benchmark data are constantly changing as plants improve their processes. Third, no single benchmark should be used to make any decisions; rather, all the data must be considered in light of the company’s overall business goals. Fourth, variables related to product mix, processes, etc., will affect benchmarks. For example, while the data in Table 2-1 are relatively good as guidance, we could review the paper products
industry and find that a world-class level for uptime in a tissue plant might be 95%+, in a newsprint plant 90%+, and in a coated paper plant 85%+, depending on any number of factors. Likewise, in the petrochemical industry, we might find that a world-class olefines plant would operate at 95%+, and that plastics and elastomers plants would only operate at 85%+. To further illustrate the point about being cautious when using benchmark data, let’s consider maintenance costs as a percent of plant replacement value. This number is commonly used in manufacturing plants to “normalize” the maintenance costs for a given plant asset. However, it’s important to understand how these data vary with circumstances. For example, the two variables to develop the measurement are themselves highly variable—maintenance costs in the numerator and plant replacement value in the denominator— and require understanding of each other’s genesis, discussed in the following text.
The Numerator Maintenance costs may have to be normalized to a labor base and some “equivalent value” for labor. For example, at Beta’s plant, located in Asia Pacific, this ratio was only 0.75%. However, when reviewed more fully, it turned out that the actual labor rates were very low, less than half of what you might expect if the work were performed by European or American workers. When we took the actual labor hours, and multiplied by US labor and overhead rates, the ratio came up to ~1.25%. Second, this particular plant, while well managed, may have been “saving” maintenance costs at the expense of the long-term reliability of the equipment and not spending enough to maintain the equipment and the plant infrastructure on a life-cycle basis. However, this second point is more of a judgment call. Further, some of the “maintenance expenses,” particularly for upgrades of major capital items, were being capitalized (vs. expensed). While this is a legitimate accounting choice, it can skew benchmark data somewhat, depending on how you treat these items. Other issues to consider include reviewing the historical maintenance costs for a given set of plants and their general trend, and the actual condition of the assets—have they deteriorated over the past few years? As always, maintenance costs will depend to some degree on the type of plant. For example, a paint pigment plant is likely to
Table 2-1
Comparative Data: World-Class and Typical Performance.
Performance Measure                              Nominal World Class    Typical

Manufacturing performance
  On-time/in-full                                99%                    80–90%
  Uptime (continuous process plants)             90–95%                 70–80%
  Overall equipment effectiveness (discrete)     80–85%                 50–70%
  Quality—Cpk                                    >2                     >1.33
  Defect rate                                    50–100 ppm             500–5,000 ppm
  Waste/scrap as % of manufacturing costs        0.1–0.2%               1–3%
  Customer returns                               <0.01%                 <0.1%
  Critical equipment/processes "capable"         95%                    30–70%
  OSHA injuries per 200K hrs:
    Recordables                                  <0.5                   5–10
    Lost time accidents                          <0.05                  0.2–0.5

Maintenance performance
  Maintenance costs as a % of PRV*               1–3%                   3–6%
  Breakdown production losses                    <1–2%                  5–10%
  Planned maintenance                            >90%                   50–70%
  Reactive maintenance                           <10%                   45–55%
  PRV $ per mechanic                             >$6–8M                 $2–4M
  % Maintenance rework                           <1%                    >10%
  Overtime                                       <5%                    10–20%

Stores/spare parts management
  Parts stockout rate                            <1%                    4–6%
  Stores value as a % of PRV*                    0.25–0.5%              1–2%
  Parts inventory turns                          >2                     1
  Line items/stores employee/hr                  10–12                  4–5
  Stores value/stores employee                   $1–1.5M                $0.5–1.0M
  Disbursements/stores employee/yr               $1.5–2M                $0.5–1.0M

Human resources
  Training ($/yr)                                $2–3K                  $1–1.5K
  Training (hrs/yr)                              40 hrs                 20 hrs
  Employee turnover rate                         <3%                    5%+
  Absentee rate                                  <1%                    3%+
  Injury rate                                    —See above data—
*PRV is plant replacement value, simple in concept, but more difficult in practice. For example, if you had to rebuild the plant today, what would it cost? Alternatively, if it is insured for replacement value, what is that? Or, what price would the plant bring on the open market? Or, what is its original capitalization, adjusted for inflation? Or, some combination of the above.
incur higher maintenance costs as a percent of replacement value than an electronics plant. Finally, maintenance costs at a given time may not be the key factor in overall business excellence, and extra money spent to produce higher uptimes may be appropriate in some circumstances.
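To make the labor-rate normalization discussed above concrete, here is a small sketch; the hours, rates, and plant replacement value are invented for illustration and merely chosen so the resulting ratios land near the 0.75% and roughly 1.25% quoted for the Asia-Pacific example.

```python
# Illustrative only: all figures below are assumed, chosen to mimic the Asia-Pacific
# example (about 0.75% of PRV at local labor rates, ~1.25% repriced at US-equivalent rates).
def maint_cost_pct_prv(labor_hours, labor_rate, materials_cost, prv):
    return (labor_hours * labor_rate + materials_cost) / prv * 100

hours, materials, prv = 120_000, 3_000_000, 800_000_000   # assumed values
print(f"{maint_cost_pct_prv(hours, 25, materials, prv):.2f}% of PRV at local labor rates")
print(f"{maint_cost_pct_prv(hours, 60, materials, prv):.2f}% of PRV at US-equivalent rates")
```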
The Denominator How was the plant replacement value determined? For example: 1. Was it the plant’s insured replacement value? At full value? 2. Was it the initial capital value, plus inflation; plus added capital plus inflation on the added capital? 3. Was it a professionally assessed value based on selling the asset? 4. Was it based on similar assets with similar output in similar products that have recently been sold? 5. Were there other unique circumstances that resulted in a higher than normal asset valuation? For example, at one of Beta’s plants, the effluent treatment requirements substantially increased the plant replacement value, but did not add a commensurate amount to the maintenance costs. At another, the reverse was true. There are probably other factors in the numerator and denominator that may be considered to ensure that “like-to-like” comparisons are made, but these are ones that come to mind immediately. Benchmarking is a good process to help begin to understand how to improve performance, but don’t use any single benchmark measure to make any decisions or reach any conclusions about anything. The data must be used in the aggregate, and the processes and practices that yield improved performance must drive behavior and business decisions. For example, it could be that a plant has low maintenance costs and low uptime, possibly meaning that the high downtimes are because of poor maintenance practices—not enough maintenance— or that the plant has high uptimes and high maintenance costs as a
percent of PRV because it has lots of in-line spares and lots of breakdown maintenance, contributing to higher unit costs of production; in the best cases, it has high uptimes and low maintenance costs because it's doing the right things right. All these must be considered. In the case of Beta's Asian plant, it was actually recommended that more money be spent on maintenance, not less, because that was likely to produce higher uptime, as well as slightly higher unit costs of production. But the gross margin on its product was so high that the incremental financial gain made on the incremental production was well worth the additional expense, in spite of the slightly higher unit cost of production! Much depends on your business objective. As you might expect from this discussion, asset utilization rate is, if not the most important measure to a manufacturer, among the most important measures. Two measures that help capture this are overall equipment effectiveness (OEE),4 and uptime, a measure generally attributed to DuPont in its creation and understood to have been developed from the OEE concept. At Beta, OEE is generally used in discrete and batch manufacturers and is discussed further in following sections. Uptime, or a variant thereto, is generally used in continuous process industries, e.g., refinery, chemical and petrochemical, pulp and paper, primary metals, etc. Beta International ultimately decided to use the following model as an integral part of the improvement process:
What is the plant’s uptime or asset utilization rate?
What are the causes of lost utilization, e.g., no demand, production changeovers, planned downtime, unplanned downtime, reduced rate, reduced yield, reduced quality, poor utilities, etc.? Note: This means you must measure the causes of every hour of lost utilization. A sample of the causes of losses is provided in Table 2-2.
What is being done to eliminate the causes of these losses?
What is the plant’s unit cost of production?
What is my personal responsibility (at all levels) in eliminating these causes?
Table 2-2  Sample OEE Measurement and Related Losses for a Batch Manufacturer.

Peak Sustainable Rate            100%
Minus losses:
  No demand                       10%
  Changeovers                     12%
  Reduced rate/cycle times         5%
  Quality rejects                  1%
  Planned downtime                 5%
  Unplanned downtime               5%
  Other losses                     2%
Total losses                      40%
Net output                        60%
Additional discussion is provided later in this chapter under “Manufacturing Uptime Improvement Model,” regarding methods for creating a common sense of purpose and for determining and measuring the cause of major losses. One interesting statistic that Beta found in doing some comparative analysis is provided in Figure 2-1. This correlates uptime as a function of reactive maintenance. What was found was that on average, for every 10% observed increase in reactive maintenance, an approximate 2% reduction in uptime was realized. Reactive maintenance is defined as the portion that is run to failure, breakdown, or emergency. In other words, it’s all the work that you do during a week that wasn’t anticipated at the beginning of the week. These data imply that highly reactive organizations experience loss of output because of the reactive processes they have in place. It could also be further implied that high reactive maintenance levels also result in high maintenance costs, because the general literature reports that reactive maintenance typically costs twice or more that of planned maintenance. Beta also put forth considerable effort to determine what factors would affect the improvement process as it strove for world-class performance. These efforts included perceptions within rank and file, as well as management, on issues such as management support
Figure 2-1. Uptime vs. reactive maintenance.
and plant culture, organization and communication, performance measures, training, operations and maintenance practices, stores practices, and overhaul practices. These self-assessments indicated that one of the critical parameters in Beta’s case, when compared with plants with substantially better performance, was the issue of management support and plant culture. Management support and plant culture reflect the degree to which management is perceived to be supportive of manufacturing excellence, basic reliability principles and practices, and the extent to which management has created a proactive, team-based, process-oriented culture. Beta was generally within the typical range in its scores in most areas, except for this one, where it ranked some 5% to 10% below average. Coincidentally, Beta’s average uptime or OEE was below an average industrial manufacturing plant. Beta was apparently so focused on cost-cutting it had created a workforce afraid to take the risks associated with changing its processes and that had greater focus on cost-cutting and delivering a budget than on delivering manufacturing excellence. Table 2-3 is a summary of typical and world-class measures from these self-assessments. Appendix A provides additional detail, including statistical correlations of the key success factors, or “drivers,” for best practices and improved performance. This analysis shows uptime is positively correlated with all leadership/management practices and with all operating/maintenance practices, that is, the higher
Table 2-3
Reliability and Maintenance Practices Assessment.
Management Practices (Characteristic/Score)     Max     Typical: Batch    Typical: Continuous    World Class
Mgmt support/plant culture                      100%    51%               57%                    >75%
Organization/communication                      100%    50%               59%                    >75%
Performance measures                            100%    44%               60%                    >75%
Training                                        100%    48%               50%                    >75%
Operational practices                           100%    50%               57%                    >75%
Reactive maintenance                            100%    53%               46%                    <10%
Preventive maintenance                          100%    47%               57%                    >75%
Stores practices                                100%    36%               47%                    >75%
Shutdown/overhaul practices                     100%    50%               68%                    >75%
Predictive maintenance (vibration)              100%    18%               60%                    >75%
Predictive maintenance (oil, IR, UT, etc.)      100%    35%               55%                    >75%
Proactive maintenance                           —       25%               40%                    >75%
Notes: 1. These data have been compiled from a database of some 300 manufacturing plants that participated in the self-audit. A detailed discussion of these parameters is provided in Appendix A. The scores have been normalized to percentages and do not reflect the weighting given to different areas, which is also discussed in Appendix A. 2. A world-class plant is viewed as one that sustains an OEE or uptime of 85% to 95%, and concurrently reports scores in the upper quartile in essentially all categories shown, with one crucial exception, and that is having a reactive maintenance of less than 10%.
the score in best practice, the higher the uptime. Conversely, in this same study, when reactive maintenance levels were high, all leadership/management practices, as well as all operating and maintenance practices, were poorer. Nor was any one practice, or success factor, the “magic bullet,” but rather all had to be done in an integrated and comprehensive way. The conclusion is that in a reactive culture, practices tend to be poorer, uptime lower, and costs higher—processes are not in control and practices are less than optimal. In a reliability-
focused culture, uptime is higher, costs lower, and process and practices are in control. As David Ormandy said, “If you do all the little things right, the big bad things don’t happen.” The concept of uptime, typically used for continuous manufacturing plants, and OEE, typically used for batch and discrete plants, while a fairly simple one, was difficult for the management of Beta to embrace initially. It had become so accustomed to managing to budgets, and to manufacturing a targeted quantity of product based on historical experience, that some fairly fundamental principles of manufacturing were being ignored at many plants. These are reviewed in the following section in a different light. From total productive maintenance (TPM4) principles, we understand that all production processes are limited (1) by the amount of time the equipment can actually run, its availability; (2) by the yields it provides when it is available, its process rate and efficiency; and (3) by the sellable product produced, its quality. The product of these parameters defines maximum output through so-called overall equipment effectiveness, or OEE, as previously noted. This overall equipment effectiveness parameter will in large measure be driven by its reliability, which can, in turn, drive asset utilization, unit cost, and ultimately market share, profits, and return on assets. Consider the data in Table 2-4. Even with the level of performance shown, the plant can only produce 85% of its maximum capacity. But, plant overall equipment effectiveness (OEE) levels of 85% are generally accepted as “world class” for discrete manufacturers.5 Additional discussion of TPM is provided in Chapter 13. This example illustrates the compounding effect of availability, production rate/yield, and quality, which is further compounded when multiple production steps are involved in the production process, creating dynamically variable bottlenecks in the daily production process. This is discussed in the following section.
Table 2-4  Overall Equipment Effectiveness.

Measure                                      Definition                                           Nominal World Class
% Availability                               Available time / max available time                  >95%
% Performance efficiency                     Actual yield or throughput / max throughput          >95%
% Quality rate                               Quality production / total production                >95%
% Overall equipment effectiveness (OEE)      Availability × performance efficiency × quality      >85%

Bottlenecks—A Dynamic View In The Goal,6 Goldratt explains bottlenecks in a manufacturing process and how to manage bottlenecks, which will always exist in any manufacturing plant. However, Goldratt doesn't emphasize the fact that bottlenecks are quite dynamic and can change daily.
For example, on a given day a lack of raw material may be the production bottleneck, on another day unplanned downtime in one area, on another day rate reduction in another area, and so on. Beta generally understood where its design bottlenecks were. Indeed, at any given plant, plant management could almost always identify the bottleneck in the production process. Often, however, this was not the limiting factor of production on a given day. It was dependent on which equipment was down, which process was difficult to control, the quality of the raw material, the most recent production requirement from sales, or any number of factors. In this section, we will review some exemplary data concerning how equipment and process reliability, or the lack thereof, can dramatically affect production capacity and therefore profitability, and by inference the dynamic nature of bottlenecks. We've seen the TPM principle of overall equipment effectiveness (OEE = Availability × Process Rate/Yield/Efficiency × Quality) and how it can lead to a measure of a plant's actual output compared with its theoretical output. We've also seen how these effects are cumulative and compounding in their ability to substantially reduce output. This effect is only further compounded when one process feeds another in a manufacturing plant. Studies have shown that downtime events can dramatically reduce the capacity of a given manufacturing plant. Consider the following Beta plant where process A feeds process B, which feeds process C, which yields the finished product:
Current rated capacities:
  A @ 100 units/hr
  B @ 110 units/hr
  C @ 120 units/hr

If each step is operated at capacity, producing 100% quality product, then the maximum throughput would be 876,000 units per year and would be limited by process A, the bottleneck. Suppose further that we've collected data from each production area and found the processes typically operating as follows. Then, production rate (or OEE) for each manufacturing step will be the product of availability × production rate × quality, or:

Process    Planned Downtime    Unplanned Downtime    Process Rate    Quality Rate
A          10%                 5%                    95%             98%
B          10%                 10%                   90%             95%
C          10%                 15%                   90%             90%

Process    OEE                             Equivalent Capacity
A          85% × 95% × 98% = 79.1%         79.1% × 100 = 79.1 units/hr
B          80% × 90% × 95% = 68.4%         68.4% × 110 = 75.2 units/hr
C          75% × 90% × 90% = 60.7%         60.7% × 120 = 72.9 units/hr

Now, plant capacity is limited by process C, and equals

  Availability × Process rate × Quality = (75% × 8,760 hr) × (120 units/hr × 90%) × 90%
                                        = 638,604 units/yr (72.9% of max)
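The calculation above is mechanical enough to script. The sketch below is not from the book, but it uses the same example data to reproduce the per-process OEE, the equivalent capacities, and the conclusion that process C now limits the plant.

```python
# Reproduces the example above: OEE = availability x rate x quality for each step,
# and the step with the lowest equivalent capacity sets plant throughput.
HOURS_PER_YEAR = 8_760

processes = {   # rated units/hr, planned DT, unplanned DT, process rate, quality
    "A": (100, 0.10, 0.05, 0.95, 0.98),
    "B": (110, 0.10, 0.10, 0.90, 0.95),
    "C": (120, 0.10, 0.15, 0.90, 0.90),
}

equiv = {}
for name, (cap, planned, unplanned, rate, quality) in processes.items():
    availability = 1.0 - planned - unplanned
    oee = availability * rate * quality
    equiv[name] = cap * oee
    print(f"{name}: OEE {oee:.1%}, equivalent capacity {equiv[name]:.1f} units/hr")

bottleneck = min(equiv, key=equiv.get)
print(f"Bottleneck is {bottleneck}: about {equiv[bottleneck] * HOURS_PER_YEAR:,.0f} units/yr")
```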
Or does it? Suppose planned downtime occurs at the same time for each process, because we have considerable control of this factor. But, as luck would have it, unplanned downtime is such a random variable that it rarely occurs at the same time for each process. Suppose further
that we are in a manufacturing environment with a just-in-time philosophy, keeping inventory and work in process to an absolute minimum. Then, we have the following. Process A yields:

  Availability × Process rate × Quality = (8,760 × 85%) × (100 × 95%) × 98% = 693,223 units/yr

From this information, processes B and C have:

                            B      C
Noncoincident downtime      10%    15%
Quality                     95%    90%
Rates                       90%    90%

Process B's output is:

  = A's output × B's production rate
  = 693,223 × (100% – 10%) × [(110 ÷ 100) × 90%] × 95%
  = 586,779 units/yr
Similarly, process C’s output and, therefore, the plant’s annual output is = 485,797 units/yr
which is only 55.5% of maximum theoretical capacity. Simply eliminating unplanned downtime in each process, that is, maximizing process and equipment reliability and recalculating the output, yields an annual output of 627,570 units/yr, or 72% of maximum, a 30% increase in output. In effect, it could find a plant within its plant by eliminating unplanned downtime. Further, eliminating unplanned downtime and improving process reliability through a comprehensive improvement strategy also provides additional gains. 1. Product quality increases. Increased process reliability ensures equipment is running as it should and yielding maximum-quality product. Further, fewer breakdowns reduce
scrap, rework, and transition losses. In effect, quality is improved. 2. Process efficiencies and rates increase. Considerable process inefficiency can occur as equipment is failing or when being restarted after repair. This is particularly true when a commissioning process is not in place to ensure adequate and proper repair. For example, according to the Electric Power Research Institute, over 50% of the failures in fossil power plants occur within 1 week of startup and last for 1 week. This reduces availability and rates (yields, efficiency, etc.). 3. Planned downtime decreases. As process and equipment reliability increases, fewer overhauls are necessary. Further, more planning and work are done prior to the overhaul effort, so that minimum planned downtime is necessary for overhauls. Better application of resources is inevitable. Consider the potential effect of these three inherent improvements that result from a good reliability-based strategy. For example, suppose we could reduce planned downtime for each process from 10% to 8%, that we could improve quality by 2% in processes B and C, and that we could improve process yields by 2% in each process. These are relatively minor improvements. However, working through the math yields a total output of 683,674 units/yr, another 9% in increased output, and now at 81% of maximum, nearing a world-class level of >85%. Alternatively, if market demand is substantially below running the plant 24 hours a day, 7 days a week, then this same approach allows for reducing a 5-day–3-shift operation to a 5-day–2 1/4-shift operation, saving tens of thousands, if not millions, per year. We’ve covered several key benchmarks, touched briefly on the dynamic nature of bottlenecks, and looked at best practices and how they can bring us to a world-class level of performance. However, one of the most difficult issues is getting started. The next section describes how one of Beta’s plants initiated its improvement process and created a common sense of purpose and teamwork in the effort.
Manufacturing Uptime Improvement Model As Beta has found, simple cost-cutting may not be the most effective solution for improving manufacturing performance. Indeed, in some cases cost-cutting has resulted in a deterioration of assets such that manufacturing performance has suffered substantially. Nor are costcutting and arbitrary decision making likely to result in benchmark levels of performance. Improved productivity, improved output using existing assets, lower unit costs of production, etc., are a natural consequence of good processes and best practices applied to the manufacturing environment. But, where do we begin with the improvement process? We can’t focus on everything at once, so how do we apply our limited resources to provide the greatest improvement? How do we sustain the improvement process? The following is one model that has worked well at many of Beta’s plants. We’ll begin with the assumption that manufacturing uptime as previously defined is a key measure of the success of a manufacturing plant. Indeed, in the best plants in the world, OEE is typically in the range of 85%+ for batch and discrete plants, whereas an average batch plant operates in the range of 60%; uptime is typically in the range of 90% to 95% for continuous process plants, whereas the average plant operates in the range of 75% to 80%. If we ask: “What is my uptime or OEE? What are the causes of lost uptime?” This is sort of a “Where am I and why am I here?” set of questions. Next we must ask, “What am I doing to eliminate the causes of lost uptime?” Here’s where things begin to get more complicated. The first two questions may be difficult enough for many manufacturers, considering determination of their real and imagined bottlenecks, their product mix effects, process flow effects, etc. However, most can usually come up with a reasonable estimate of theoretical capacity and resultant uptime and of some key causes of lost uptime. Typically, these are at the strategical level, and perhaps a best guess, or both. So how do we refine this, and, more importantly, how do we decide what the root causes are and what to do next? The following outlines a process that several of Beta’s plants have used to “jump start” the improvement process. At the Beaver Creek plant, the technique of failure modes and effects analysis (FMEA) was used to begin the process for establishing priorities, with the plant being viewed as a business system, a system that frequently experienced business functional failures. A
functional failure was defined as anything that resulted in substantial loss of production capacity, or that resulted in extraordinary costs, or that created a major safety hazard. The first step was to have a team of people use a production process block diagram to verify (or determine) the specific peak production rates of each step of the production process. The team didn’t have the complicating effect of having to consider too many products, but if that happens, select a key one or typical one, use it, and then use the process to normalize other products relative to product demand. This process will allow you to determine your “design” bottleneck(s) in the production process. Recall that an improvement in the throughput at the bottleneck results in an improvement of the throughput of the entire plant. In any event, set up a block diagram of your production process. For example, the following one shows where process A feeds process B, which feeds process C. Other processes also feed into the final product, e.g., process D: D→→ ↓ A→B→C (finished product) Next, create a set of cross-functional teams, one from each major production step, to define and analyze “losses” at each step in the production process, and to offer suggestions regarding reducing these losses. As noted, for purpose of the review and analysis, a functional failure of the system is defined as anything that results in substantial loss of production capacity, substantial unnecessary costs, or a major safety hazard. Each cross-functional team from the various production steps should typically consist of an operator, a production supervisor, a mechanic and/or electrician, and a maintenance supervisor. These should be people who are peer leaders within their areas of the plant and who feel the freedom to express their considered opinions. Other people could be involved as part of these teams, and this decision should be left to those leading the effort. The key is to have a team of people who understand where the plant’s problems are and who are willing to work as a team to help resolve those problems. Finally, there should also be a group of support staff who represent another team for the review process. This is likely to consist of a plant engineer, a member of the purchasing staff, a member of the
human resources staff, a stores person, someone from utilities, and perhaps others who can contribute to problem resolution as the review evolves. This will result in teams of nominally four to five people from each of the production areas, people who are most familiar with the production and maintenance practices, plus one or two teams of people from other plant support areas, such as purchasing (spare parts and raw material), stores, capital projects, contractor support, project engineering, utilities, and human resources. Some may argue that all this is unnecessarily tying up too many people, and some of that criticism may be warranted, but you just can’t tell a priori whether it is or not. It’s been Beta’s experience that once you get a group like this together—all focused on plant success through solving problems—it’s surprising, even fascinating, what is revealed in the review process. Team building takes place without focusing on what team building is, because everyone is focused on problem solving for the good of the plant. More examples of that follow. As noted, once the group is together, we define a functional failure of the production process as anything happening in the plant that results in lost uptime, extraordinary cost, or a safety hazard. Once a functional failure has been identified in a given area, we also ask how often these failures occur, and what their effects are (principally financial as to lost uptime or extra costs). Next, using a facilitator (a must) we “walk through” the production process with the team assembled, defining functional failures associated with each step in the production process. We begin with the first step in the production process, e.g., process A, but once we’ve finished with finding all the functional failures, their frequencies and effects, in process A, we also look downstream and ask the questions: “Are failures in process B causing any failures in process A? Are failures in any of the support functions, e.g., utilities, purchasing, human resources, capital projects, etc., causing any failures in process A?” And so on. Next we go to the team in process B and ask the same series of questions, looking upstream and downstream and at the support team to identify functional failures of the system. Using this same method, we walk through each step in the production process looking for functional failures in the system. Finally, we also make sure that all the support functions are encouraged to communicate with the production functions regarding how production could help the support functions more effectively per-
This process is not used per se as a problem-solving exercise, only as a problem-identification exercise, including a general order of magnitude of the problems' relative size and importance. For example, at one of Beta's plants, in step A of the production process, we found that:

1. One particular piece of equipment was frequently failing, resulting in most production losses. Further review of this equipment was held in abeyance until root cause analysis could be applied.

2. Raw material quality was a major contributor to lost uptime, lost quality, poor process yields, etc. (As opposed to the ability of the operator to run, or the mechanic to repair, a given set of production equipment.)

3. Gearbox (pump, motor, or compressor, etc.) failures were a major contributor to mechanical failures. (However, the gearbox was purchased without the proper service factor and had been run at higher than design rates, so it's not likely a "maintenance" problem per se, but rather more likely a design/procurement problem.)

4. Operator inexperience and lack of training were major contributors to poor process yield. (Operators had been asking for additional training for some time.)

5. Market demand was highly variable, both in product mix and quantity, resulting in frequent downtime. (Marketing had tried to "reduce overhead allocation" by selling anything the plant could make, resulting in 200 different products, but only 5 made up 75% of production demand; the opportunity cost of downtime for changeovers and the cost of equipment reliability had not been considered; marketing must target its niches more effectively.)

6. Spare parts were frequently not available, or were of poor suitability or quality. (Purchasing had no real specifications or understanding of the need—low bid was the criterion, lacking specifications.)

7. Inherent design features (or lack thereof) made maintenance a difficult and time-consuming effort, e.g., insufficient isolation valves, insufficient lay-down space, skid-mounted pumps, etc.
Lowest installed cost was the only real criterion for capital projects (vs. lowest life-cycle cost).

8. Poor power quality was resulting in frequent electronic problems and was believed to be causing reduced electrical equipment life. (Power quality hadn't been considered by the engineers as a factor in equipment and process reliability.)

9. Lubrication practices for mechanical equipment needed substantial improvement. (The lubricators, who weren't well trained to begin with, had been let go some time ago in a cost-cutting move. Would you deliberately choose NOT to lubricate your automobile regularly, or have 10 different people with 10 different skill levels and backgrounds do your lubrication?)

10. Mechanics were in need of training on critical equipment and/or precision mechanical skills. A few needed a fresh (or refresher) course in bearing handling and installation. (Reducing training expenses was another cost-cutting move to "save" money. Someone once said: "You think education is expensive—try ignorance!")

And so on. Next, we repeated this process for each step in the Beaver Creek production process, gathering estimates of, and potential causes for, the losses in uptime and/or extraordinary costs. A point worth mentioning is that this may be a very imprecise process, bordering on controlled chaos. Further, in a forum such as this, we're not likely to be able to calculate the losses accurately; we're estimating, perhaps even "guesstimating." As such, these estimates will require validation at some later time. However, they are being made by those who should be in a position to know best. Perhaps more importantly, an additional benefit is that we have our staff working as a team to understand each other's issues, and we are using this information to focus on common goals—improving process and equipment reliability, reducing costs, improving uptime, and, in the final analysis, improving Beaver Creek's and Beta's financial performance. After we complete this process for each of the steps in the production process, we step away from the individual process analyses and begin to judge the overall plant operation, that is, how our bottlenecks might shift, depending on the nature of what we've found in
the review; how our previous impressions may now have changed; how cost reduction efforts may now focus on different issues; how we may be seeing systematic effects at every step; how one process can have a dramatic effect on downstream processes; etc. Further, at Beaver Creek we made an estimate of how each affected production capacity, costs, or safety, so that we could prioritize how to apply resources to areas where we could achieve the most benefit with the available resources. Before jumping to any premature conclusions, and by reviewing the causes and solutions to each, we considered carefully the areas where the most benefit was to be gained. For Beaver Creek, we found that most quantifiable production losses or major costs were caused by no more than three problems or issues, and sometimes by only one key problem. Such was the case in step A, wherein a single piece of equipment was causing most of the production losses, followed by poor raw material quality, followed by gearbox problems, the sum of which accounted for over 90% of production losses. But, we also found several systematic issues prevalent throughout the organization, e.g., lack of training for operators, mechanics, and electricians; spare parts quality and availability; product mix and production planning; lubrication practices; and design practices—particularly getting input from the plant on operational and maintenance issues, etc. These systematic issues had to be resolved at a plant level or higher, while in parallel solving equipment-specific issues at the area level. As we went through this analysis, we also began to determine where best to apply certain technologies and practices. For example, if the gearboxes were causing extraordinary downtime and costs, we could in the short term use vibration analysis (a so-called predictive technology) to anticipate problems and be prepared to respond to them in a planned, organized way. In the long term, the engineers had to look for more constructive solutions by improving the basic design (a more proactive approach). If raw material were a problem, in the short term we could monitor the quality of raw materials more frequently and mitigate these effects. In the long term, we could work more closely with suppliers to eliminate the root cause. If a given piece of equipment were the problem, we could set up a detailed root cause failure analysis process to eliminate this problem. Further, we considered how best to prioritize our production and maintenance planning efforts, anticipating where resources were best applied. What also came from the analysis was that we were
doing a great deal of preventive maintenance to little effect—either overdoing it on some equipment and achieving little uptime improvement, or underdoing it on other equipment and experiencing unplanned equipment downtime. We can begin to consider how to optimize our PM practices. More on that later in Chapter 10. We could go on, but the point is that if you don’t understand where the major opportunities are, then it is much more difficult to apply the appropriate technologies and methods to improve your performance in a rational way. This method facilitates defining major problems and issues and, perhaps more importantly, creates a team-based approach to resolve those problems. Some other obvious but easy-to-fix problems came up during the review. For example, the Beaver Creek plant had numerous steam, air, and gas leaks at its plant—an easy economical thing to correct with an almost immediate payback. At Beaver Creek, a typical manufacturing plant, some 30% of the steam traps were bad—a tremendous energy loss, unnecessarily costing some $50,000 per year. It also had four 250-horsepower compressors. Only two were needed for most production requirements. Three operated routinely. Conservatively speaking, the savings in getting the air leaks fixed was also $50,000 per year (250 hp × 6,000 hours/year × $0.05/kwh). Nitrogen leaks were also present and substantially more expensive. Finally, a point that should be emphasized—having lots of air and steam leaks at Beta’s Beaver Creek plant sent a clear message—these leaks aren’t important to management and how we view our plant. Getting them fixed sent another clear message—we care about our plant and we want it operated right. Further, the Beaver Creek plant did not do precision alignment and balancing of its rotating equipment, in spite of numerous studies that indicate equipment life and plant uptime can be substantially improved with precision alignment and balancing. It also found that when it began to truly measure equipment performance, some surprises resulted. At Beaver Creek, pump failures were “keeping them up at night,” and when it began to measure its mean time between repairs, the pumps were only running some 4 to 5 months between repairs. Thus, the plant began a pump improvement program that helped it focus on improving pump reliability and uptime. A bonus was the fact that it reduced costs and improved safety because it wasn’t routinely repairing things. Its new “model” for behavior was “fixed forever” as opposed to “forever fixing.”
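As a rough cross-check of those leak numbers (illustrative only; the $0.05/kWh rate and 6,000 operating hours are the figures quoted above, while the 0.746 kW-per-horsepower conversion is a standard assumption not stated in the text):

```python
# Back-of-the-envelope check on the Beaver Creek compressed-air savings.
HP_TO_KW = 0.746          # standard conversion factor (assumed, not from the text)
compressor_hp = 250       # one surplus 250-hp compressor kept online by air leaks
hours_per_year = 6_000    # operating hours quoted in the text
cost_per_kwh = 0.05       # $/kWh quoted in the text

annual_cost = compressor_hp * HP_TO_KW * hours_per_year * cost_per_kwh
print(f"Surplus compressor energy cost: ${annual_cost:,.0f} per year")
# -> roughly $56,000/year, consistent with the "conservatively $50,000" cited above.
```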
Beta also found that it was vital to its success to begin to do critical equipment histories, to plan and schedule maintenance, and to be far more proactive in eliminating defects from the operation, regardless of whether they were rooted in process or people issues. This was all done with the view of not seeking to place blame, but seeking to eliminate defects. All problems were viewed as opportunities for improvement, not searches for the guilty. Beta’s Beaver Creek plant had nothing but opportunities when we started the process for improvement. It is now well on its way to substantially improved performance. Finally, this same process is now being applied at other Beta plants for identifying major opportunities, creating a sense of teamwork, and creating a common sense of purpose related to uptime improvement. At the same time, however, this review process revealed several systematic issues about Beta International and its corporatewide practices, which required considerable improvement in the way Beta designed, bought, stored, installed, operated, and maintained its plants. These are discussed in the following chapters, but not before we address the critical issue of integrating the marketing and manufacturing function and a process for rationalizing product mix in the next chapter.
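The review process described above is, at its core, a simple loss-accounting exercise: list each functional failure, estimate how often it occurs and what each occurrence costs, and rank the results. A minimal sketch follows (the failure records shown are invented for illustration, not Beta's actual data):

```python
# Rank candidate problems by estimated annual impact (lost uptime plus extra cost).
# The records below are hypothetical examples of what a review team might capture.
from dataclasses import dataclass

@dataclass
class FunctionalFailure:
    area: str
    description: str
    events_per_year: float
    cost_per_event: float        # $ per occurrence (lost margin, repair, safety exposure)

    @property
    def annual_impact(self) -> float:
        return self.events_per_year * self.cost_per_event

failures = [
    FunctionalFailure("A", "Frequent trips of one critical machine", 24, 15_000),
    FunctionalFailure("A", "Off-spec raw material lots", 12, 20_000),
    FunctionalFailure("A", "Gearbox failures", 4, 40_000),
    FunctionalFailure("B", "Changeover losses from small orders", 50, 2_000),
]

# Largest estimated losses first; these become the priorities for root cause work.
for f in sorted(failures, key=lambda f: f.annual_impact, reverse=True):
    print(f"{f.area}: {f.description:40s} ~${f.annual_impact:>9,.0f}/yr")
```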
References

1. Grayson, J. National Center for Manufacturing Science Newsletter, Ann Arbor, MI, November 1991.
2. Maskell, B. H. Performance Measurement for World Class Manufacturing, Productivity Press, Cambridge, MA, 1991.
3. The Benchmarking Management Guide, Productivity Press, Cambridge, MA, 1993.
4. Nakajima, S. Total Productive Maintenance, Productivity Press, Cambridge, MA, 1993.
5. "Calculating Process Overall Equipment Effectiveness," Maintenance Technology, June 1994.
6. Goldratt, E. The Goal, North River Press, Croton-on-Hudson, NY, 1992.
Additional Reading

Ashton, J. "Key Maintenance Performance Measures," Dofasco Steel Industry Survey, 1993.
Hartoonian, G. "Maintenance Stores and Parts Management," Maintenance Journal, Vol. 8, No. 1, January/February 1995.
HR Benchmarking Study, European Process Industries Competitiveness Center Newsletter, Cleveland, Middlesbrough, United Kingdom, Autumn 1997.
Jones, E. "The Japanese Approach to Facilities Maintenance," Maintenance Technology, August 1991.
Jones, K. Maintenance Best Practices Manual, PE—Inc., Newark, DE.
Kelly, K., and Burrows, P. "Motorola: Training for the Millennium," Business Week, March 1994.
Liker, J. K. Becoming Lean, Productivity Press, Portland, OR, 1997.
Mendelbaum, G., and Mizuno, R. Maintenance Practices and Technologies, Deloitte & Touche, Toronto, Ontario, Canada, 1991.
Moore, R., Pardue, F., Pride, A., and Wilson, J. "The Reliability-Based Maintenance Strategy: A Vision for Improving Industrial Productivity," Industry Report, Computational Systems, Inc., Knoxville, TN, September 1993.
Wireman, T. World-Class Maintenance Management, Industrial Press, New York, 1990.
3
Aligning the Marketing and Manufacturing Strategies
Bein' all things to all people is like bein' nothin' to nobody.
Anonymous

Businesses should clearly understand their targeted market(s), and within those markets their profile customer(s). This allows a business to better focus on its strategic business strengths and target markets and customers more effectively. For example, an instrumentation and control company might define its targeted markets as those that represent heavy industrial manufacturers, e.g., chemical and petro-chemical, pulp and paper, refining, power generation, automotive, primary metals, textiles. These would typically be those industries where their products provide the greatest benefit for precision process control and, therefore, greater production output and reduced cost. Others, such as food and general manufacturing, might simply not be targeted and, therefore, would not receive any priority relative to resources for promotion, new product development, design modifications, etc. Of course, the company would sell its products to these industries, but wouldn't spend significant money to try to gain their business. Further, within these "targeted market" industries, for example, a profile customer might have sales of >$100M per year, operate in a continuous or 24-hour/day mode, and have production losses due to poor control valued at >$10K per hour. These plants would place a very high value on process control, and would have the financial resources to buy and implement the technology.
With this in mind, the company would focus its design and new product development, marketing and sales, advertising, manufacturing capability, distribution channels, customer support, etc., on the needs of these targeted markets and, more importantly, on making the profile customers within these markets knowledgeable of, and successful in, using its products. This customer success translates into the company’s success. As we’ll see, Beta generally has a very good sales organization, but is generally less mature in understanding its markets, particularly as it relates to fully integrating its marketing and manufacturing strategies to achieve a much more successful business strategy. Several exceptional publications have been written regarding the development of a manufacturing strategy and integrating this fully into the marketing and overall business strategies.1–3 Yet, these and similar publications have received little attention at Beta, where manufacturing has historically been viewed by the sales function as “the place where we get the stuff we sell,” a kind of spigot from which to fill our buckets for selling at the local market. Only recently has Beta begun to use the approaches described in these publications and this chapter to form its strategy.
Beta's Pracor Division

Beta understands that all businesses start with markets, some existing (basic chemicals, for example); some created from "whole cloth" (new drug technology, for example). This company is currently primarily in existing, mature markets, but of course is investing R&D into new product and process development. Beta has a strong desire for its Pracor Division, which sells into the industrial market, to expand into the moderate and rapid growth areas where its newer, higher-margin products have already established a modest position. However, Beta has not spent adequate effort to strategically position its products for gaining market share or for ensuring that its manufacturing capability and strategy could support those marketing targets and its strategic business objectives. For example, products and product mix have often been created through happenstance—a salesperson would promise something that the plants then had to deliver. The most difficult questions asked were, "Can you make it? On time? At a reasonable gross margin?" Product specifications have sometimes been created to meet a
specific sales order versus having done the market analysis to create the products that would serve key customers. Granted, there must always be the give and take necessary to understand the customer’s needs as they evolve, but these needs should be under regular review, and market analysis should adapt to those changing needs and anticipate them. It is understood that being flexible and agile is important, and that being consistent and rigorous is also important. The best companies balance both well, understanding that simply reacting moment to moment to every potential sales order is not likely to lead to superior performance. Additional discussion on optimizing product mix at one of Beta’s plants is provided later, so for the time being the discussion focuses on the development of Beta’s integrated marketing and manufacturing strategy for its Pracor Division. Where does it want to position itself in its markets? What market share does it want? What is its strategy for achieving this? Has this strategy been communicated to the shop floor level so that all have a clear sense about how to get there? Let’s have a look at one model for addressing some of these issues.
Market/Product Success Factors

The mission of Beta's Pracor Division was to create a common sense of purpose within the organization, "To be the preferred supplier" to its customers. Lots of issues are embodied in this simple statement, and properly "lived" it could (unlike most mission statements) provide a genuine common sense of purpose throughout the organization. Just a quick note—mission statements should create this common sense of purpose. Most mission statements are long, drawn out "corporate speak," which, albeit important for senior management to articulate, do not create a common sense of purpose that drives the organization. Few people at Beta International know its corporate mission statement. More on this issue is provided in Chapter 15. In any event, what does being the preferred supplier mean in practice, and how will we achieve this? One way to answer this question would be to answer Turner's question,3 "What wins orders?" If you can win the order, then you are the preferred supplier by definition. After considerable thought, noting its products applied to different markets, and that different markets and customers had different expectations, Beta developed the preliminary profile in Table 3-1 to answer the question, "What wins orders?" in its three markets (A, B, and C) for its five principal product lines (1–5).
Table 3-1  Rating of Market/Product Success Factors.

                     A/1   A/2   A/3   B/1   B/4   C/5
Price                 5     5     4     3     2     1
Quality specs         3     3     4     4     4     5
On-time delivery      3     3     4     5     4     5
Packaging             4     2     3     3     3     5
Technical support     1     2     3     2     3     4
In Table 3-1 a score of 1 means the customer is relatively indifferent to this factor, so long as it is within a reasonable range, while a score of 5 means the customer is highly sensitive. These sensitivity factors can, in turn, be translated into actual marketing and manufacturing requirements, thus facilitating a higher rate for "winning orders." It should also be noted that quality per a given specification is a requirement, that is, if you can't meet the customer's specification on the order 99% of the time or better, you're not going to get many orders. The sensitivity relates to the need to meet increasingly difficult customer specifications. For Beta, market A and products 1–3 (A/1–3) represent Beta's historical strength, but in a business that is changing substantially. The products are mature, e.g., commodities, showing slow steady growth, and becoming highly price sensitive, which is increasingly cutting into Beta's historically good margins. To Beta's chagrin, certain customers for A/3 are demanding an even higher quality at a lower price, though not as intensely price competitive as A/1–2. Even this is likely to change in time, as demands for steadily increasing quality and performance are common expectations in most all industries today. Likewise, these same customers are demanding more reliable delivery, e.g., just in time, 100% on time/in full, and increasing their packaging flexibility requirements. Fortunately, technical support requirements were only minimal in A/1–2, but show an increase in A/3. Sales in market A have historically been most of Beta's business (some 60%). Increasing price sensitivity has led to cost-cutting exercises over the past several years, reducing Beta's operating and maintenance budgets for its manufacturing infrastructure. This has resulted in deterioration of some of its assets and impaired its ability to deliver quality product in a timely manner—the plants
just aren’t as reliable, particularly in packaging capability. This has begun to erode customer satisfaction and market position, though it hasn’t become a critical issue, yet. Finally, A/3 represents the same product, but in several new market applications requiring additional technical support. For some years now, Beta has provided only minimal support in most markets, providing that effort on a case basis only for those who demanded it. Staffing and capability are, therefore, limited. This has also limited its ability to support those new customers in A/3 and, increasingly, in markets B and C. The customer support function will require a significant investment in the future. In market B the customer is less sensitive to price, generally more sensitive to quality and delivery, as well as packaging and technical support requirements. However, again because of the lack of reliability in Beta’s plants, quality and delivery performance have suffered, leaving it at risk in these markets, which make up some 25% of its business. This market is also expected to grow moderately in the coming years, and is a “must” part of Beta’s growth plans. Market C and particularly product 5 represent a major, new growth opportunity that will allow Beta to assert itself in an area with relatively high margins. However, though relatively price insensitive, stringent requirements in quality, delivery, packaging, and technical support, when combined with manufacturing problems in these areas, could result in Beta’s failure to capitalize on these markets. Also, these markets will require additional capital projects at the plants to fully support product manufacturing, including associated packaging requirements. And substantial technical support is required, something that Beta, as noted, is not accustomed to supplying. All this requires substantial investment and training for a market that short term has low volume and high margins and a net small return. Long term, Beta believes this is a market in which the company needs to be a major player. It could ultimately constitute over 30% of its business. Given Beta’s desire to increase its volume of business in markets B and C, it is essential that these issues be addressed as quickly as possible. Finally, there are many products, 6–30, that are typically sold into other markets, D–Z, as well as A–C, but which only constitute 5% of business volume at this division of Beta. In other words, over 80% of Beta’s products make up 5% of its sales in this division, while 3% of its products (product 1) make up over 35% of its sales.
These low-volume products are generally those sold to certain customers, with the sales force forecasting large potential, little of which has materialized—and/or with R&D coming up with the latest “gee-whiz” product, which likewise has not fully materialized. Make no mistake, these types of risks must be taken, but these same risks often result in offering too many products, especially when markets and product mix are not regularly scrutinized. How else will we truly know about a new product’s potential, unless we try it? That said, however, a rigorous process must be established so that it’s common practice to review markets, product mix, sales histories, anticipated demand, etc., for continuing modification and rationalization of products and mix.
Volume and Market Growth Analysis

Using this information, and other market and customer data, Beta has made the preliminary observations shown in Table 3-2.

Table 3-2  Markets/Products History and Forecast.

Market  Products & Current      Position      Current Volume/  5-Year History     Goal—Volume/   5-Year Forecast
        Avg. Price/Unit                       Market Share                        Market Share
A       1-$2,000, 2-$2,000,     Leader        58%/~30%         Slow, steady       ~45%/~35%      Slow, steady
        3-$2,050                                               growth @~2%/yr                    growth @~2%/yr
B       1-$2,100, 4-$2,200      Follower,     25%/~15%         Moderate growth    ~30%/~20%      Moderate growth
                                moderate                       @~5%/yr                           @~5%/yr
                                supplier
C       5-$2,350                Developing    11%/~10%         Rapid growth       ~25%/~20%      Rapid growth
                                                               @~15%/yr                          @~15%/yr
D–Z     6–30, Avg~$2,250        Limited       6%/NA            Uncertain          3%/—           Uncertain

Table 3-2 shows that in general Beta would like to change its business mix from being as dependent on its historical mature market to more specialized products, which will allow for greater product differentiation, growth, and profits. For example, in market A, it would like to reduce its percent of business volume from nearly 60% to 45%, and yet modestly increase market share, from 30% to 35%, in a market that is growing at only 2% per year. To do this will require an increase in total volume of 10% to 15% over a 3- to 5-year period, depending on market price and other factors. If this is to be, Beta must capture more market share in market A and simultaneously expand quickly into other markets that are growing more rapidly. In market B, which is expected to grow at 5% per year, Beta would like to modestly increase its percent of total business, as well as market share. This will require a growth in total volume of 40% to 60%, depending on price, supported by moderate improvements in quality and on-time deliveries, and very modest improvements in customer support and packaging. Finally, in market C, which is growing at 15% per year, Beta would like to substantially increase its business volume, as well as market share. Similarly, this will require a growth in total volume of 250% to 350%, or roughly triple its current volume.
Keeping in mind its drive "to be the preferred supplier" and these goals, additional analysis is now required. What production capabilities—volume, cost, quality—are necessary to support these goals? What is Beta's strategy for making all this happen? What wins orders? Beta must gain more than its historical share in all markets and must be able to support these gains with its manufacturing capability. Beta's strategy must also ensure meeting the following minimum goals, which, in turn, are required to support return on equity and earnings per share growth:

Sales growth:            10% per year, consistent with market growth objectives
Return on net assets:    20% per year minimum
Profit after tax:        Consistent with RoNA objectives
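As an illustrative check on how the volume-growth figures above follow from the market-share and market-growth goals in Table 3-2 (a minimal sketch; the planning horizons used here are assumptions, and real planning would also adjust for price and mix):

```python
# Required growth in Beta's own volume if volume scales with (market share) x (market size).
def required_volume_growth(current_share, target_share, market_growth, years):
    """Return required growth in own volume as a fraction (e.g., 0.5 = +50%)."""
    return (target_share / current_share) * (1 + market_growth) ** years - 1

# Market C: share ~10% -> ~20%, market growing ~15%/yr, ~5-year horizon (assumed)
growth_c = required_volume_growth(0.10, 0.20, 0.15, 5)
print(f"Market C: ~{growth_c:.0%} volume growth")
# ~300%: in line with the 250% to 350% growth cited above.

# Market B: share ~15% -> ~20%, market growing ~5%/yr, ~3-year horizon (assumed)
growth_b = required_volume_growth(0.15, 0.20, 0.05, 3)
print(f"Market B: ~{growth_b:.0%} volume growth")
# ~54%: within the 40% to 60% range cited above.
```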
Manufacturing Capability for Supporting Market/Volume Goals

Beta next undertook an analysis of its operational and manufacturing practices to determine whether they supported key marketing and corporate business objectives. Note: While all this is proceeding, the sales department continues to berate manufacturing, demanding that it "do more, and do it better," despite the fact that the production requirements it gives manufacturing are often late, wrong, and subject to frequent change during any month. Production planning is often an oxymoron. The marketing function has taken a back seat to the sales department in an effort to "Make money, now!" But this reactive approach to selling only undermines the ability of the manufacturing plant to deliver on its customer requirements. All things considered, the Pracor Division is in a bit of a conundrum.

In any event, let's consider its historical manufacturing capability and performance in meeting the demand of the marketing and sales departments, which is summarized in Table 3-3.

Table 3-3  Demonstrated Manufacturing Capability.

Location   Capacity Peak,   Production        Uptime         Principal Products   Average Unit Cost
           K units/yr       Output, Actual    (Figure 1-5)   Capability
Plant 1    40               27.2              68%            1–4                  $1,600/unit
Plant 2    30               24.6              82%            1–4                  $1,450/unit
Plant 3    25               18.2              73%            1–5, 6–30            $1,550/unit
Plant 4    25               19.2              77%            1–5, 6–30            $1,500/unit
Plant 5    29               15.0              75%            3–5, 6–30            $1,550/unit

All things considered, this performance is no better than average. Uptime is about five points below average for this type of plant, and unit costs are about $50/unit more than what is believed to be an industry average for these products. Other plant performance indicators, such as on-time deliveries, stock turns, first-pass quality, etc., are also in the average range. These parameters are important in an integrated analysis, but for present purposes they will not be reviewed. You are encouraged, however, to bring this or other information into your analysis as appropriate.
Finally, all plants can theoretically produce all products. However, because of operator training, plant configuration, market areas, etc., it is quite difficult as a practical matter to do so. The plants are currently configured to produce as shown. In an emergency any plant could be reconfigured and staffed to produce any product necessary, but at considerable additional cost.
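As a quick illustrative cross-check of the figures in Table 3-3 (a sketch only; weighting by peak capacity is an assumption about how the average was formed):

```python
# Capacity-weighted average uptime across the five plants in Table 3-3.
plants = {            # peak capacity (K units/yr), uptime
    "Plant 1": (40, 0.68),
    "Plant 2": (30, 0.82),
    "Plant 3": (25, 0.73),
    "Plant 4": (25, 0.77),
    "Plant 5": (29, 0.75),
}
total_capacity = sum(cap for cap, _ in plants.values())
weighted_uptime = sum(cap * up for cap, up in plants.values()) / total_capacity
print(f"Capacity-weighted average uptime: {weighted_uptime:.1%}")
# -> about 74.5%, i.e., the "weighted average of about 75%" discussed below.
```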
Performance by Plant and by Product Line

Beta further undertook an analysis by plant and by product line of each plant's actual performance as it relates to output and gross margins, using the unit prices and unit costs shown. The results of this are provided in Table 3-4. Summarizing this, Beta is achieving a gross margin of some $58.6M, which, after costs associated with R&D, marketing and sales, administration, interest, and taxes, leaves some $8.5M in profit after tax, or just under 4% profit on sales. This also translates to a return on net assets of about 8%, with net assets of $110M for this business. This performance, while not terrible, is not particularly good, and is troubling in light of the trend, which appears to be going in the wrong direction. Nor does it support the business volume and market share goals described in Table 3-2. Let's consider the plant's operating performance—uptime, unit costs, and gross margin contribution (Tables 3-3 and 3-4)—in light of its stated market direction, growth requirements, and success factors—"What wins orders" (Tables 3-1 and 3-2). First, as we've seen, Beta has done some manufacturing benchmarking and concluded that its uptime and cost performance are no better than average. Uptime for these plants ranges from 68% to 82%, with a weighted average of about 75%. According to the data available, this is 5% below average for this industry, and at least 15% below so-called world-class levels of 90%+. Further, unit costs are above average, and at least 10% above the best plants. Particularly troublesome is the fact that the largest plant, which should have the best performance, has the poorest. It is a large plant that produces mostly high-volume, commodity-type products, and should have the lowest unit cost of production. Quite the opposite is true. It has the highest cost of production, and its uptime and reliability are the poorest, making it increasingly difficult to ensure customer satisfaction.
Table 3-4  Plant Performance by Product/Market—Output and Gross Margin.
(each cell: K units / GM $K)

Plant No.            A/1            A/2            A/3            B/1            B/4            C/5            D–Z 6–30       Totals
1                    13.1/$5.24K    8.7/$3.48K     2.8/$1.26K     1.1/$0.55K     1.5/$0.90K     0/—            0/—            27.2/$11.43K
2                    9.1/$5.00K     5.7/$3.14K     4.8/$2.88K     3.1/$2.02K     1.9/$1.42K     0/—            0/—            24.6/$14.46K
3                    3.6/$1.25K     4.6/$2.07K     1.8/$0.90K     2.5/$1.38K     2.1/$1.36K     2.1/$1.68K     1.5/$1.05K     18.2/$10.06K
4                    2.5/$1.25K     1.3/$0.65K     1.2/$0.66K     5.8/$3.48K     3.4/$2.38K     3.8/$3.23K     1.2/$0.90K     19.2/$12.55K
5                    0/0            0/0            3.2/$1.6K      0/0            4.0/$2.60K     4.5/$3.60K     3.3/$2.31K     15.0/$10.11K
Totals               28.3/$13.11K   20.3/$9.34K    13.8/$7.30K    12.5/$7.43K    12.9/$8.66K    10.4/$8.51K    6.0/$4.26K     104.2/$58.61K
GM $/unit            0.46           0.46           0.53           0.59           0.67           0.82           0.71           0.56
Total Sales, $M      $56.60         $40.60         $28.29         $26.25         $28.38         $24.44         $13.50         $218.06
Total Costs, $M      $43.49         $31.26         $20.99         $18.82         $19.72         $15.93         $9.24          $159.45
Avg. Cost, $/unit    1,537          1,540          1,521          1,506          1,529          1,532          1,540          1,530
If Beta is to (1) gain market share in market A, (2) achieve higher profits, and (3) achieve other key corporate objectives, plant 1 must improve its performance. For example, price performance is critical to Beta's success in market A, as well as packaging in product No. 1. If Beta is to succeed in this market, pricing must come down, and costs must come down proportionally more to support Beta's goals for increased market share and concurrent improved financial performance. Plant 2 seems to be doing reasonably well, but could still support improved performance in markets A and B. Key to this are moderately improved uptime and unit costs, and improved technical support.
Plant 3 is only slightly better than plant 1, and plant 4 only moderately better in terms of uptime and unit costs. However, it appears that these plants are trying to be all things to all people, and apparently trying to act as “swing plants” in the event plant 1 or 2 fails to meet delivery schedules, as well as address markets B and C, and D– Z. While plant 4 seems to be doing a better job at this, a clearer focus needs to be established. Plant 5 is relatively well focused on higher-margin, higher-quality products, but is also producing lots of small-order products for markets D–Z. While these appear to have relatively high margins, these margins have not included the effect of downtime on principal products and markets, which appears to be disrupting production plans routinely, especially for key strategic markets. With this in mind, Beta must answer the question—“What value should be placed on the opportunity cost of lost production when making product for nonkey markets?”
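The overall financial picture sketched above can be reproduced from the Table 3-4 totals (an illustrative sketch; the $110M net-asset figure and the profit-after-tax figure are taken from the text, not derived here):

```python
# Cross-check of the aggregate figures quoted for the Pracor Division (Table 3-4).
total_sales_m = 218.06        # $M, totals row of Table 3-4
total_costs_m = 159.45        # $M, totals row of Table 3-4
gross_margin_m = total_sales_m - total_costs_m      # ~ $58.6M
profit_after_tax_m = 8.5      # $M, stated in the text after R&D, SG&A, interest, taxes
net_assets_m = 110.0          # $M, stated in the text

print(f"Gross margin:          ${gross_margin_m:.1f}M")                     # ~ $58.6M
print(f"Profit on sales:       {profit_after_tax_m / total_sales_m:.1%}")   # just under 4%
print(f"Return on net assets:  {profit_after_tax_m / net_assets_m:.1%}")    # about 8%
```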
The Plan

Beta's Pracor Division has no choice but to support its market share and business volume goals detailed in Table 3-2. The strategy for doing this begins with winning orders using the market/product success factors in Table 3-1. Specifically, it must lower prices in market A, improve quality and delivery overall, particularly in markets B and C, and improve packaging and technical support for markets A, B, and C. All this must be done while improving margins and return on net assets. Therefore, the following must be done:
Improve uptime and manufacturing performance to near world class. Using a reliability-driven strategy, described in Chapters 4–17, improve the uptime at all its plants, with specific targets near the world-class level of 90%. Lower uptimes would be acceptable in plants running a larger number of products. Further, quality and on-time delivery requirements in the newer, faster growing markets are critical to the success of the business. Mediocre performance in manufacturing reliability could be the death knell for the business in light of these requirements.
Review product mix and rationalize those products that offer little strategic return. The tentative conclusion was that business volume in markets D–Z for noncore products should be cut in half, and most of that remaining business should be focused at plant 4, which had demonstrated its ability to effectively manage changeovers and still maintain relatively high uptimes. It’s likely that there is considerable business in these markets that Beta just doesn’t want, at least not at the current price. A more detailed model and case history for product mix rationalization is provided in the next section.
Set higher standards for new products being promoted by R&D and/or marketing and sales. Link the manufacturing function more fully into the decision-making process concerning the impact of new products and/or “difficult” products.
Increase market share in the more mature businesses. Increasing market share requires substantially improved manufacturing performance, especially in mature businesses, and supporting concurrent financial performance is a must. This requires much higher uptimes and much better cost performance. Some of this cost performance would come about as a result of improved uptime, but additional reductions in fixed and variable costs also must be realized to support the marketing plan. These cost reductions should be a consequence of best practice, not arbitrary cost-cutting.
Reduce prices in some products/markets. To support certain pricing sensitivities, particularly in market A, Beta concluded that it had to reduce its pricing by at least 7%. Further, it also anticipated that increased competition, and increased pricing pressures, would result as others recognized the increased opportunity in markets B and C.
Expand its technical support capability. This is a must to ensure customer satisfaction, especially in the more attractive markets. This includes not just telephone support, but also training seminars in product applications and use, frequent joint sessions with customers to understand needs and complaints, etc. Annual surveys, with follow-up discussion, will ensure understanding of needs. Note that these training seminars would also be used internally for marketing, manufacturing, and R&D staff.
Improve packaging capability. Short term this is a must and may require modest capital expenditure. This may also result in excess packaging capability as reliability improvements are made. Beta is willing to make this investment to ensure customer satisfaction and market share, in spite of the potential for excess packaging capability later. While this will have a negative impact on RoNA short term, Beta may have little choice. A packaging team has been established to determine whether the need for additional packaging can be avoided through rapid deployment of reliability practices.
Revised Pricing Strategy

As a result, Beta has concluded that it will likely be necessary to modify its pricing structure, as shown in Table 3-5, consistent with success factors related to "what wins orders."

Table 3-5  Planned Pricing Structure.

Market/    Current Price,   Planned Price,   Success Factor Comments/Rationale
Product    $/unit           $/unit
A/1        2,000            1,850            Packaging premium on pricing
A/2        2,000            1,800            Rock bottom pricing
A/3        2,050            1,950            Quality/delivery premium
B/1        2,100            1,950            Quality/on-time delivery premium
B/4        2,200            2,100            Relative price insensitivity; quality/delivery premium
C/5        2,350            2,250            Relative price insensitivity, but increasing competition;
                                             premium on high standards for all requirements
D–Z/all*   2,200            2,200            No price concessions; selectively increase prices;
                                             reduce demand 50%

*Note that in markets D–Z, no price concessions will be offered, and prices will selectively be increased.

With all this in mind, Beta will now approach key customers in each market with the goal of establishing long-term contracts (3–5 years) for minimum volumes, in return for achieving their specific objectives, e.g., lower pricing and better packaging in market A; superior quality, delivery, and technical support in markets B and C; etc. These discussions will be targeted and backed by a specific plan of action that outlines how each customer's requirements (in each market) will be met to support the customer's business. For example, over the next 3–5 years, Beta might discount its product in market A by 5% to 10%, prorated over the time period, and allowing for raw material cost adjustments. In each case, Beta will strive for a strategic alliance for ensuring mutual success. However, Beta will still use pricing modifications as appropriate to ensure that all pricing is market sensitive, maximizing profits as needed and protecting market share as needed.
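To illustrate how the planned pricing in Table 3-5 can remain margin positive (a sketch only, using the average unit costs reported in Tables 3-4 and 3-7 for the A/1 product/market as a stand-in for a full product-line costing):

```python
# Per-unit gross margin for market/product A/1, before and after the plan.
current_price, current_cost = 2000, 1537   # $/unit, Table 3-5 and Table 3-4
planned_price, planned_cost = 1850, 1344   # $/unit, Table 3-5 and Table 3-7

current_margin = current_price - current_cost    # ~$463/unit
planned_margin = planned_price - planned_cost    # ~$506/unit
print(f"A/1 margin: ${current_margin}/unit today, ${planned_margin}/unit planned")
# A ~7.5% price reduction still improves per-unit margin because the average
# unit cost is planned to fall by roughly 12-13% through reliability improvements.
```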
Plant Performance Requirements

To support this plan, Beta also concluded that plant performance must improve dramatically, from just under 75% uptime to nearly 87%, and unit costs must come down by at least $180 per unit; a target value of $200 per unit will be established. Details are shown in Table 3-6 and are necessary as a minimum to support market share and financial objectives.

Table 3-6  Plant Performance Data—Unit Cost and Uptime.

Plant   Current Cost, $/unit   Planned Cost, $/unit   Current Uptime   Planned Uptime
1       1,600                  1,350                  68%              90%
2       1,450                  1,300                  82%              90%
3       1,550                  1,350                  73%              85%
4       1,500                  1,450                  77%              80%
5       1,550                  1,350                  75%              85%

Improved uptime will support achieving these unit cost objectives, but in and of itself is not sufficient. Best practices must be put in place to ensure that costs are not unnecessarily incurred, particularly in maintenance, where equipment downtime and repair costs have been a key contributor to excess costs and/or lost production. For example, plant 1, where raw material costs are 50% of the total cost of manufactured goods, must improve its uptime from 68% to 90%, and reduce operating and maintenance costs by an
additional $1M/yr to achieve a unit cost of less than $1,350/unit. The other plants found a similar situation, particularly plant 4, where unit costs were only anticipated to be reduced by $50/unit. However, it was also anticipated that additional costs would be incurred because Beta was making plant 4 the plant that would become good at product changes and at handling rapid responses to market and sales conditions. Note from Table 3-7 that plant 4 will handle 66% of products for markets D–Z and that the total volume for these markets has been cut in half. This approach will necessarily reduce its uptime to a maximum estimated at about 80%. Plants 3 and 5, likewise, would also be limited to about 85% for similar reasons and would act as backup for certain situations, but would not be anticipated to be as agile. Plants 1 and 2 would focus on being the low-cost, high-volume producer.
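One way to see how plant 1's $1,350/unit target could be approached (a simplified sketch, assuming raw material at roughly 50% of today's unit cost is the only fully variable cost and all other costs are essentially fixed; the real cost structure is certainly more nuanced):

```python
# Rough unit-cost bridge for plant 1: 68% -> 90% uptime, plus $1M/yr O&M reduction.
current_output_k = 27.2          # K units/yr (Table 3-4 row total for plant 1)
planned_output_k = 36.0          # K units/yr (Table 3-7 row total for plant 1)
current_unit_cost = 1600         # $/unit (Table 3-6)

variable_cost = 0.5 * current_unit_cost                                   # raw material, assumed fully variable
fixed_cost_total_k = (current_unit_cost - variable_cost) * current_output_k   # $K/yr of "fixed" cost

new_unit_cost = (variable_cost
                 + fixed_cost_total_k / planned_output_k   # fixed cost spread over more units
                 - 1_000 / planned_output_k)               # $1M/yr O&M reduction, per unit
print(f"Estimated new unit cost: ${new_unit_cost:,.0f}/unit")
# -> roughly $1,380/unit, most of the way to the $1,350 target; the remainder
#    has to come from the additional best-practice cost reductions described above.
```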
Other Actions Required

On further review, Beta found that its plants were not fully capable of determining their product costs by product line, or of doing so-called activity-based accounting. Hence, the unit cost of production had to be aggregated for each plant. In the future, each plant will formulate and implement the basis for tracking the cost of given product lines more specifically. This, in turn, will allow greater refinement in the cost and pricing strategy for given products. This effort will also include making a determination of the opportunity cost of lost production.
The Expected Results

Beta was convinced that applying a manufacturing reliability strategy for increasing uptimes and lowering production costs would also further reduce "out-of-pocket" costs. This would support its pricing strategy, while providing adequate margins to ensure meeting financial objectives. As shown in Table 3-7, gross margin contribution from the plants will increase from $58.6M to $71.2M, or some $12.6M. Beta anticipates that about $1M may be necessary to improve packaging capability, both in terms of equipment and training, and to do some minor debottlenecking in certain production areas. This will only be spent if necessary after the packaging team's review.
Table 3-7  Pro Forma Plant Performance by Product/Market—Output and Gross Margin.
(each cell: K units / gross margin)

Plant No.            A/1            A/2            A/3            B/1            B/4            C/5            D–Z 6–30       Totals
1                    17.3/$8.65K    11.5/$5.17K    3.7/$2.22K     1.5/$0.90K     2.0/$1.50K     0/0            0/0            36.0/$18.44K
2                    9.1/$5.00K     5.7/$2.85K     4.8/$3.12K     4.7/$3.06K     2.7/$2.16K     0/0            0/0            27.0/$16.19K
3                    3.6/$1.80K     4.6/$2.07K     1.8/$1.08K     3.6/$2.16K     3.1/$2.33K     4.2/$3.78K     0.3/$0.25K     21.2/$13.47K
4                    2.5/$1.00K     1.3/$0.45K     1.2/$0.60K     5.8/$2.90K     3.4/$2.21K     3.8/$3.04K     2.0/$1.50K     20.0/$11.70K
5                    0/0            0/0            3.2/$1.92K     0/0            4.0/$3.00K     6.5/$5.85K     0.7/$0.59K     17.0/$11.36K
Totals               32.5/$16.45    23.1/$10.54    14.7/$8.94     15.6/$9.02     15.2/$11.20    14.5/$12.67    3.0/$2.34      118.6/$71.16
GM $/unit            0.51           0.46           0.61           0.58           0.74           0.87           0.78           0.60
Total Sales, $M      $60.12         $41.58         $28.67         $30.42         $31.92         $32.62         $6.60          $231.93
Total Costs, $M      $43.67         $31.04         $19.73         $21.40         $20.72         $19.95         $4.26          $160.77
Avg. Cost, $/unit    1,344          1,344          1,342          1,372          1,363          1,376          1,420          1,356
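As a cross-check on the pro forma gross-margin figures (a sketch only; the additional spending items are the ones itemized in the surrounding text):

```python
# Gross margin uplift implied by Tables 3-4 (current) and 3-7 (pro forma).
current_gm_m = 58.61      # $M, Table 3-4 totals
pro_forma_gm_m = 71.16    # $M, Table 3-7 totals
gm_increase = pro_forma_gm_m - current_gm_m
print(f"Gross margin increase: ${gm_increase:.2f}M")   # ~ $12.6M

# Less the additional spending described in the text (packaging ~$1M,
# best-practice implementation ~$1M, administration/commissions ~$0.5M):
operating_income_increase = gm_increase - (1.0 + 1.0 + 0.5)
print(f"Net operating income increase: ~${operating_income_increase:.1f}M")  # ~ $10M
```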
It is also anticipated that an additional $1M will be spent to develop and implement practices at each of the plants to ensure best practice for achieving the desired objectives; it represents less than 1% of plant replacement value. Finally, it is anticipated that over $0.5M will be required for administration, additional commissions on sales, etc. This brings the net operating income increase to some $10M, which, in turn, results in about $7M in increased net income. This represents an 80% increase in profits and return on net assets. However, RoNA is still only 14%, and other goals related to business mix and market share are still not being achieved, even with this substantially improved performance. On further review, Beta also concluded that it must:
Achieve an additional unit cost reduction of some $50/unit. This will bring its average unit cost to about $1,300/unit, and/or additional equivalent output commensurate with financial goals. This will result in achieving a 20%+ RoNA.
Perform additional debottlenecking reviews of all its plants. Bottlenecks that could be increased in capacity without major capital investment represent additional opportunity for gaining market share with existing assets using the reliability strategy described herein. This may also support deferring capital investment to meet expected increases in volume. Note that bottlenecks are not just “design,” and tend to be dynamic and variable, depending on plant equipment performance on a given day. Indeed, the design bottleneck may rarely be the actual bottleneck, principally because it gets so much attention to ensure its operational reliability. The real bottleneck then tends to “bounce around,” depending on what isn’t operating properly on a given day. This dictates that the entire plant be highly reliable.
Build one or two additional plants, depending on plant size and location. These plants would be focused on producing products for markets B and C, because plants 1 and 2 can, with the improvements indicated, provide the additional capacity needed in market A. It is anticipated that 10–15K units per year of products 1 and 4 will be required for market B, and 20–25K units of product 5 will be required for market C to support anticipated growth. Assuming the new capacity performs at 85% uptime, this necessitates one plant rated at nominally 40K units per year, or two smaller units, each at about half this.
Approach its key suppliers to ensure they are applying manufacturing excellence principles. Raw material costs represent about half of Beta’s manufacturing costs. Beta will work with suppliers to ensure excellence such that at least some of the benefit flows through to Beta, allowing it to further reduce its manufacturing costs. Note, for example, that
a 5% reduction in raw material costs could be the equivalent of cutting energy costs in half. Beta intends to work with key suppliers to develop strategic relationships to ensure they have manufacturing excellence, which will, in turn, ensure it receives the lowest possible cost at the highest possible reliability of supply.
Reduce waste in raw material use. Because raw materials represent half of manufacturing costs, small improvements are equivalent to larger ones in other areas. Beta will investigate ways to improve processing yields. This will be done not so much in the process itself, as it will with the minimization of waste and scrap. An example of simple waste reduction that comes to mind is that of a refining company participating in the benchmarking with Beta. To make a long story short, this refiner was experiencing some 1%+ loss in yield conversion from crude stocks. This was reduced to 0.25% after a new sight glass design and appropriate PM were introduced to allow operators to view the water-purging process more effectively, so they did not do excessive purges. Beta will seek the help of its operators and maintenance to help reduce waste. R&D has also expressed that it is currently working on several process improvements that could improve yields and reduce raw material requirements. These will be incorporated into the strategic plan.
Establish a process for cross-training manufacturing and marketing/sales personnel. Beta had also found that in this business unit, its sales manager had previously worked in manufacturing as a production supervisor. This individual understood the production process and the plant’s capability. He also understood the general sensitivities as to costs, production capability, etc., and could readily relate his role as sales manager to his old position as production supervisor. This proved invaluable in terms of his ability to integrate customer requirements and manufacturing capability. Beta is currently considering making it a requirement to have key sales staff spend at least a year in a production supervisory role, facilitating the practical requirements of integrating manufacturing and marketing strategies.
Ensure that supply chain issues are integrated into the plan. At Beta much attention had been previously given to improving supply chain performance, yet without addressing the fundamental requirement for manufacturing excellence. Supply chain objectives will be very difficult to meet if manufacturing excellence, reliability of supply, etc., are not integrated into the supply chain analysis and plan. Reducing work in process and inventories, increasing stock turns, ensuring 99% on-time deliveries, ensuring 99%+ first-pass quality, etc., require manufacturing excellence.
Integrate other issues, such as capital spending and asset condition and expected life. While not addressed specifically as part of this analysis, Beta is also addressing and integrating other issues, such as capital spending, asset condition and expected life, specific operating and maintenance requirements for each plant, etc. This is being done using the reliability process discussed briefly in Chapters 1–3 and as detailed in the following chapters.
Beta will also be applying this methodology to its R&D efforts in the future. Using a “what wins orders” approach, Beta will more actively ensure that most R&D is tied to improving product specifications and manufacturing performance related to winning orders. Further, a review will be undertaken to determine which R&D conducted over the past 5 years or so has led to which sales volume and/or improved gross margin contribution. What did we think then, and have our expectations been realized? This analysis will be used to more actively manage the R&D activities. One note of caution on this, and that is that some R&D still needs to be truly cutting edge and experimental, even recognizing that there is a relatively high probability this will not result in a return on the investment. The next discovery will never be made unless some freedom is permitted in the research process.
With this effort, Beta’s Pracor Division is well on its way to more fully integrating its marketing and manufacturing strategy, but as with most things, this process is iterative and requires continuing attention and adaptation. Next we’ll consider how Beta more effectively rationalized product mix, and then we’ll get into the details of how manufacturing excellence is being established throughout Beta International.
Effect of Product Mix on Manufacturing Performance

Manufacturers, including Beta International, are increasingly called upon to be the low-cost producer of their products, and simultaneously to ensure maximum flexibility for meeting varying customer needs—the customer is king. Customer responsiveness, customer demands, and flexible (agile, cell, etc.) manufacturing are strategies often used to satisfy customer demands and to ensure maximum market share, market penetration, asset utilization, and, ultimately, profits. However, is this the proper strategy? Does it lead to higher profits? Treacy4 makes this point when suggesting you should "choose your customers, narrow your focus, dominate your market." More recently a major consumer products company is paring down its products to ensure greater profitability and market focus.5 By inference, one of the thrusts of the book is that we must strategically select our customers, target our markets, and not try to be everything to everyone. This section deals with one issue related to this strategy—product mix and its effect on the ability of a given manufacturer to achieve maximum market share and profits. It represents a case history for one of Beta International's plants.
Beta's Sparta Plant—Rationalizing Product Mix

Beyond the issues described at the beginning of this book, Beta's Sparta operation is a mid-sized manufacturing plant located in middle America. The plant employs over 400 people and is highly regarded as a supplier of high-quality products. It has recently seen substantive changes in its markets and in competitive pressures thereto. These changes are partly related to changes brought about by GATT and NAFTA, partly related to changes in customer demographics, and partly due to internal pressures forcing modernization of an older plant—in both equipment and work habits. Moreover, Beta has included flexible manufacturing as part of its basic manufacturing strategy, the intent being to ensure maximum responsiveness to meeting customer needs and, ultimately, maximum market share and profits.
Sparta’s plant management is feeling the pressure from Beta’s senior management to improve performance—merely maintaining the status quo will not ensure survival, let alone prosperity. Moreover, Bob Neurath sees the changes brought about by NAFTA and GATT as more of an opportunity than a threat—if Beta can improve performance and penetrate those new markets, particularly in foreign countries, improved growth and greater profits are clearly available. This is particularly true in light of the new banner product that Sparta is to manufacture. It wants to capitalize on these markets using existing manufacturing assets (no additional capital expense), of course, while maximizing continuing customer satisfaction. Beta has advised the management of the Sparta plant that based on its analysis, improvements are necessary to ensure the plant’s competitive position in current, as well as new, markets. Beta is intent on expanding its markets, investing its capital, and manufacturing its products where, all things considered, it could make the most profits.
Benchmarking Audit

The Sparta plant had recently gone through a benchmarking audit to determine where it stood relative to world-class performance. The results of the audit were both discouraging and encouraging—Sparta was typically in the average range for the major audit categories, considerably below world class. At the same time, the analysis showed that world-class performance should:

1. Allow the company to manufacture an additional 1,000,000 units per year of its products with the same assets (it had been struggling to make 4,000,000 per year).

2. Reduce operating and maintenance costs of some $1,000,000 per year.
The total value of this improved performance was estimated at $5,500,000 in increased net operating income. Strategically, the audit demonstrated that if it could accomplish the objective of being world class, product unit costs would improve, thereby allowing more strategic pricing and customer alliances and greater market share and profits. Further, it could apply
the same principles to other manufacturing facilities in the company to ensure long-term prosperity for the company. The specific results of the benchmarking audit indicated:

1. Asset utilization, as compared with theoretical maximum, was running near 70%, better than the average plant. However, world-class plants operated at 85% or better.

2. Plant unplanned downtime was quite high, running near 10%, and well above a world-class standard of less than 1% to 2%.

3. The process thought to be the production bottleneck was not. It was running at 98% availability. The process thought to have over 100% excess capacity was, in fact, the limiting factor in production because of unplanned downtime, raw material quality, and other operational issues.

4. Twenty products produced 90% of total sales. Another 180 products yielded the balance of sales and represented some 50% of the total number of purchase orders. Indeed, some 50 products yielded just 2% of sales.

While these issues are fairly complex, we'll simplify the first three by summarizing the plan of action.

1. The company began the process for improving equipment and process reliability by applying an integrated process for operations and maintenance excellence, including a balance of preventive, predictive, and proactive methodologies. This resulted in lower unplanned (and planned) downtime for maintenance, and in general making its maintenance department a reliability-focused function, as opposed to its historical role of rapid repair (and unfortunately often poor-quality work).

2. The company also set up work teams (maintenance, production, engineering, purchasing, and personnel as appropriate) for improving process and operator effectiveness.

3. The company spent a great deal of effort in getting operators much more active in equipment ownership, routine care and
minor PMs, and in precision process control. These practices are described in Chapters 4–17. And, to the point of this section, it began the process for “product rationalization,” discussed in the following text.
Product Mix

As previously noted, 20 of Beta's 200 products (or stock keeping units, SKUs) produced 90% of its sales; the balance of 180 products produced the remaining 10% of its sales; interestingly, 50 produced only 2% of sales. At the same time, all products were typically offered on the same delivery schedule. The result of this policy was frequent stops of a production run to insert a small order, in large measure because it was highly responsive to customer demands and had a commitment to on-time delivery of quality products. Historically, it had also developed a culture among the sales staff of "not losing an order." When combined with the current strategy of flexible or agile manufacturing, this led to the following:

1. Frequent changeovers
2. Reduced process stability (and statistical process capability)
3. Reduced equipment reliability
4. Increased downtime
5. Increased production and handling costs
6. Reduced production capacity

At this point the decision was made to critically analyze the key markets and key customer characteristics, e.g., those who brought 80% to 90% of its business, and to strategically assess which markets and, therefore, products, it truly wanted to pursue. Beta concluded that its key markets were in two major areas and that it did not dominate those markets—it had some 10% of worldwide business volume. It also concluded that portions of its business were not as valuable as previously considered. For example, historically it had assumed that a 50% gross margin (almost twice normal) was adequate for some of the low-volume products to ensure good profit margins. However, when it factored in the opportunity cost of
lost production for supplying key markets, the value of downtime, the handling costs, etc., it found that its "linear thinking" had led the company to the wrong conclusion, and that by using this set of assumptions it was actually losing money on the so-called "high-margin," but low-volume products. For example, upon analysis Beta found that on average 1 hour of downtime was worth nearly $3,000 in net operating income on its 20 key products. When considered in terms of gross sales volume, the figure was even more dramatic at $10,000. When it factored the lost production opportunity into the small order cost, it found that the company actually lost money on most of the orders for fewer than about 10,000 units of product. Further, when it factored in the cost of handling the larger quantities of small orders associated with the small volume, the costs were even higher. Effectively, Beta found that if it truly allocated costs to the production of the product, and opportunity cost to the lost production, it was losing substantial money on small-quantity orders.

And it gets worse. Equipment reliability was suffering substantially (resulting in unplanned downtime) during the frequent starts, stops, changeovers, setup modifications, etc. While the statistics were difficult to determine (because the company didn't have good equipment histories or adequate statistical process control data), it estimated that equipment reliability was poor at least in part because of starting and stopping on a frequent basis—much like starting and stopping a car on a mail route, as opposed to running it for hours on the interstate—the car just won't last as long or be as reliable on the mail route.

Process capability, that is, the statistical variation in product quality, was also suffering substantially. When shutdowns occurred, process stability deteriorated, resulting in a poorer-quality product. Cpk was less than 1, a poor showing. The company had few "commissioning" standards or tests for verifying that the new setups were properly done before it restarted the equipment. Beta ran the product, checked it after the fact, and then made adjustments until the quality was acceptable. The frequent starts, stops, and changeovers led to greater scrap and reject rates for the core products, as well as the many small-quantity products.
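The arithmetic behind this conclusion can be sketched briefly. In the sketch below, only the roughly $3,000-per-hour downtime value comes from the analysis above; the order size, unit price, margin, changeover time, and handling cost are hypothetical figures chosen to illustrate how the opportunity cost of a changeover can turn an apparently profitable small order into a loss.

    # Hedged sketch, not Beta's actual costing model: shows how adding the
    # opportunity cost of changeover downtime to a "high-margin" small order
    # can flip its apparent profit to a loss. Only the ~$3,000/hour downtime
    # value comes from the text; every other figure is hypothetical.

    DOWNTIME_VALUE_PER_HR = 3_000  # net operating income lost per hour of downtime

    def small_order_profit(units, unit_price, gross_margin, changeover_hrs, handling_cost):
        """Apparent vs. fully loaded profit for one small production order."""
        apparent_profit = units * unit_price * gross_margin
        opportunity_cost = changeover_hrs * DOWNTIME_VALUE_PER_HR
        loaded_profit = apparent_profit - opportunity_cost - handling_cost
        return apparent_profit, loaded_profit

    # Hypothetical 5,000-unit order at $1.00/unit and a 50% gross margin,
    # requiring 1.5 hours of changeover and $500 of order handling.
    apparent, loaded = small_order_profit(5_000, 1.00, 0.50, 1.5, 500)
    print(f"Apparent profit: ${apparent:,.0f}")  # $2,500, looks attractive
    print(f"Fully loaded:    ${loaded:,.0f}")    # -$2,500, actually a loss

The point is not the particular numbers but the structure of the calculation: once downtime carries the value of the key products it displaces, the "high-margin" label on small orders no longer holds.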
Product Mix Optimization Process

The next step in the process was to review all current products, quantities sold, setup times, changeover downtime, unplanned equipment downtime, planned downtime, etc. Beta also reviewed the design specifications of its small-volume products and found that many had very small differences between them, leading to an effort to consolidate products with very similar characteristics. In some cases this was a marketing opportunity—it could tout a higher-quality product for the same price and, with reduced overall costs of production, actually make more money. Considering the value of reduced downtime and reduced order handling, it found improved profits (or reduced losses) even on the smaller-volume orders. Other issues also had to be considered. For example, making small-quantity orders for key customers as part of a trial run was key to product and market development. Similarly, small runs had to be made for R&D and engineering to test new products, to advance production process capabilities, etc. Out of all this came a process for rationalizing the product mix. Consideration was given to:

1. Consolidating products with very similar characteristics, with a focus on ensuring that the highest-value product was offered.

2. Subcontracting production for some small orders to companies that were more suited to small quantities, and then marking the product up to ensure making money on those orders. This should also give an indication of true market price and production costs.

3. Setting up an internal production line, which did nothing but small-order products, and then managing that line to meet customer needs for small orders—optimizing small production processes.

4. Very selectively taking those products that have no margin or strategic value, and transferring those products to a nominal competitor, in cooperation with the customer, so as to maintain the relationship with the customer.
5. Pressing the marketing department to make better forecasts for the small-order products, and then making larger quantities of those small orders in anticipation of the orders coming in later in the year. Note that because the orders were generally small, even making 10 times a normal historical quantity may not substantially affect inventory and related carrying costs.

6. Negotiating with customers for improved lead times for smaller quantities so that quantities could be consolidated, and better production planning could be accomplished.

7. Implementing rapid changeover methods and training of staff in rapid changeover techniques, including a commissioning process for both equipment and product reliability.

8. Dropping some products that did not meet the minimum profit requirement, or a strategic need for new product development or key customer alliances.

This is not suggesting that Beta should not make small-volume products, or that Beta shouldn't be flexible or agile, nor is it suggesting that Beta should arbitrarily eliminate certain products below a certain profit level. For example, it is understood that if your best customer requests a small-quantity order for a test market effort, you're very likely to take the order; if R&D asks for a small production order to test a new product, you're very likely to make the product; or if a good customer requires a product mix that includes low-margin and (mostly) high-margin products, you will make the low-margin ones. Rather, it is suggesting that we consider all things on balance and make decisions accordingly; that we have strategic business reasons for these products; and that we understand our key markets and customers and our real costs of production. If we do so, we will be more likely to rationalize and optimize our production efforts to our key markets, increasing the probability of our success. The combination of these methods leads the company to a better process for positioning itself for reduced production costs, better market and product positioning, greater market share, and greater profitability.
Beta’s Leets Division—Rationalizing Customers and Markets Beta’s Leets Division was having exceptional difficulty. On-time delivery was less than 75%. Production costs were high, leaving profits very low and sometimes negative. Most of this was perceived as due to poor manufacturing performance. As we’ll see, manufacturing improvement was critical, but the marketing strategy needed a major overhaul before the plant could be successful. The Leets Division had been in business for decades, beginning in the 1950s with some unique technology that was very difficult to replicate, and making the barriers to entry for the competition fairly high. Because of this, Leets enjoyed a high market share, with substitute products generally being a greater threat than direct competition. Moreover, the market was not very large for these products, making the risk/reward calculation for any new entrant more formidable. Leets was also a relatively small division, employing some 600 people, including the marketing and sales staff, as well as engineering and administration. This comfortable position had resulted in the people growing a bit complacent, with the usual consequence—that is, competitors with substitute products were becoming increasingly aggressive; their slogan: “It’s just as good as Leets, and cheaper!” This was having a substantial effect on the company, both tactically and strategically. Tactically, it has been cutting back on price to hold market share (and still losing some). Strategically, it has been evolving its products for several years into new applications and different markets. However, this strategy had not been well orchestrated thus far, and the business was suffering. As a result, a new managing director was hired to help with bringing Leets back to its former strength and to expand the size of the business in these new markets. On review of the business, he found that the sales force was “incentivized” to ensure sales in the new markets, but without clear criteria for what was an “acceptable order,” and that the sales force often perceived the situation as “sell what you can to accommodate the short-term softness in bookings.” Further, the company was strongly “customer focused,” and wanted to make sure that every customer was happy. The sales manager, or one of his lieutenants, would often call the production manager and insist on changing the production schedule to support the most recent “hot” order.
The result—hundreds of products without a unifying strategy in its business. This, in turn, was cascading into frequent plant disruptions and changeovers, lost production, and general chaos. The "squeaky wheel" was getting the oil, but the car was falling apart. Some fairly dramatic action had to be taken. The initial step, of course, was to stop the "bleeding" within the plant and set stringent criteria for preempting existing production schedules. Priority was given to so-called key customers and growth accounts, both of which had criteria associated with them. All others could wait for normal production schedules to flow through. In parallel, the plant was engaged in various improvement activities to eliminate downtime, improve process and equipment reliability, and generally ensure better manufacturing performance.

The next step was to analyze historical markets, including product sales by customer and market, review key accounts/customers, analyze gross profits by customer and product line, and analyze the amount of time and money spent managing the various accounts. The analysis consisted of answering and debating questions such as: How will we grow the business? How will we sustain existing key accounts? What business is core? Growth? Marginal? Losing money? And, if the market or product line is losing money, is this "investment" critical to our future?

Further, a more specific review was performed, asking does any given market opportunity:

Leverage our production and marketing capabilities?
Bring an application for similar or related products?
Differentiate us against our competitors?
Have technically demanding specifications?
Add significant production cost?
Add significant finished product value?
Come from foreign customers?
Enhance our leadership position in that market?
Leverage our areas of excellence?
Meet our size/growth/profit guidelines?
Table 3-8 Analysis of Customer/Market Revenue, Profit, and Time Spent.

Category—Description     Level   No. of Accounts   Revenue   Gross Profit   Time Spent
Strategic-core             1            8            40%          50%           10%
Growth-innovation          2           22            14%          14%           25%
Nonstrategic cash cow      3           17            25%          20%           20%
Marginal profit            4           18            12%           8%           15%
Questionable               5           68            12%           8%           30%
Each area was actually scored, mostly subjectively, on a scale ranging from –10 to +10, and then used for further analysis, the results of which are shown in Table 3-8. The results of the analysis were surprising to everyone who participated in the review. For example, the sales staff spent 45% of its time on 86 accounts (65%) with 20% of the total revenues and only 16% of gross profit. This was not a particularly good investment of its time. Further, what was perceived before the analysis as a cash cow was not a very big cow—only 20% gross profits from 25% of total revenues. Clearly, improving performance required redirecting resources to more strategic accounts, either core or, more importantly, growth. After much debate, which included considerable disagreement at times, the following actions were taken:

1. Leets notified the Level 5 accounts, which were low-margin, low-volume accounts that did not have strategic growth potential, that their accounts would be discontinued after a transition period, working with them to find alternate suppliers.

2. A policy was established to sell to Level 4 accounts, but only out of stock and only on Leets's terms.

3. Level 3 accounts would be continued, but with much lower priority for capital, production scheduling, and technical/sales support.
4. Levels 1 and 2 accounts were to receive the bulk of attention for account support, production planning, on-time delivery, quality, packaging, and so on.
The Results

These actions resulted in a much more focused business, both tactically in supporting core customers and strategically in developing the growth opportunities. This, in turn, allowed better manufacturing focus on cost, quality, and delivery, and provided much better gross profits. This ensured more time and money for innovation of new products in growth markets. When combined with the manufacturing improvement efforts, overall plant performance was improved markedly. And, most importantly, within 6 months gross profits were up 12% above stretch goals, notwithstanding a 5% reduction in revenue from the elimination of low-margin, high-"investment" accounts.
Summary

Figure 3-1. Aligning the marketing and manufacturing strategies.

Figure 3-1 depicts the process we are striving to establish within Beta. Begin with your marketing strategy, constantly working to
manage complexity, and optimize product mix and customers. This information feeds into the manufacturing strategy, where you're constantly striving to improve reliability and minimize variability, thereby increasing capability. This, in turn, feeds into your operating plan, where you are working to increase capacity and lower costs, such that you can go after more market share without incremental capital investment (or you can rationalize poorly performing assets). This feeds back into your marketing strategy, where the cycle continues. Beta has renewed its effort to be more effective in aligning its marketing, manufacturing, and R&D efforts to achieve its mission to "be the preferred supplier." The process for this, like most things, is relatively simple in concept, but more difficult in practice.

1. Define and rank critical success factors in key markets/products—what wins orders?

2. Establish key market/business goals for each market/product.

3. Review historical performance in marketing and manufacturing to determine how they can support the key success factors and business goals. Ensure communication and teamwork. The Pracor Division will use Figure 1-3, which links manufacturing performance to market pricing in achieving RoNA objectives, as a tool for ensuring common goals and expectations.

4. Understand what the gap is relative to achieving key business goals and establish a plan for eliminating those gaps.

5. Rationalize product mix, customers, and markets.

6. Execute the plan. Execute the plan. Execute the plan.

In physics we learned that for every action there is an equal and opposite reaction. Seemingly, for every business strategy, there is probably an equal and opposite strategy—economies of scale for mass production vs. cells for flexible manufacturing; outsourcing vs. loyal employees for improved productivity; niche markets vs. all markets and customers; centralized vs. decentralized management; and so on. These strategies are often contradictory. Therefore, Beta must consider the consequences of its marketing and manufacturing
strategies in a comprehensive and balanced way, and make every effort for strategic optimization in its markets, such that it will maximize profits and return on net assets. Aligning the marketing and manufacturing strategies and effectively rationalizing markets, customers, and product mix can have a substantial impact on business performance, market share, unit costs, production capacity, operating costs, equipment reliability, and, ultimately, on profits. Several options exist to position companies to optimize their strategic market and profit position in the marketplace. At Beta, these options are being reviewed and exercised as appropriate to ensure competitive position. Let’s explore next how Beta is putting the right practices in place to ensure world-class manufacturing performance for supporting marketing and sales performance. In particular, let’s review how it is ensuring manufacturing excellence in how it designs, buys, stores, installs, operates, and maintains its plants for optimal performance.
References

1. Skinner, W. "Manufacturing—Missing Link in Corporate Strategy," Harvard Business Review, May-June, 1969.
2. Hayes, R. H., and Pisano, G. "Beyond World-Class: The New Manufacturing Strategy," Harvard Business Review, January-February, 1994.
3. Turner, S. "An Investigation of the Explicit and Implicit Manufacturing Strategies of a South African Chemical Company," Master's thesis, The Graduate School of Business, University of Cape Town, South Africa, December 1994.
4. Treacy, M., and Wiersema, F. The Discipline of Market Leaders, Addison-Wesley Publishing, Reading, MA, 1994.
5. Schiller, Z. "Make It Simple, Procter & Gamble's New Marketing Mantra," Business Week, September 9, 1996.
4

Plant Design and Capital Project Practices

Your system is perfectly designed to give you the results that you get.
W. Edwards Deming

After going through an audit of design and capital project practices, it became clear that when implementing new capital projects, including the construction of new plants, Beta International typically did not adequately apply historical experience, particularly of the maintenance and operations departments, to help ensure highly reliable plant equipment. While not an excuse, Beta is fairly typical. According to the Society for Maintenance and Reliability Professionals (SMRP), 86% of manufacturers surveyed do not use a life-cycle cost model when designing new capital equipment projects.1 Indeed, at Beta, the design and installation philosophy appeared to be one of lowest installed cost and minimum adequate design (MAD). While minimum adequacy may be appropriate, the problem for Beta was that the basis for minimum adequacy was often poorly defined, and was often driven by constrained capital budgets, rather than lowest life-cycle cost. This approach to design and capital projects typically did not lead to the lowest life-cycle cost, or to the lowest unit cost of production for manufacturing. Further, the economic importance of addressing reliability and life-cycle cost during the planning, design, and procurement phases should not be underestimated, and a characteristic graph of the implications of life-cycle cost is shown in Figure 4-1.
Figure 4-1. Phases of the life-cycle cost commitment.

According to Blanchard,2,3 while expenditures for plant and equipment occur later in the acquisition process, most of the life-cycle cost (roughly two-thirds) is committed at preliminary stages of the design and acquisition process. The subsequent design and construction phases (which include the detailed specifications) determine an additional 29% of the life-cycle cost, leaving approximately 5% of the life-cycle cost to be determined by maintenance and operations. Further, it was also reported3 that some 60% to 75% of the life-cycle cost is associated with maintenance and support, with the balance of 25% to 40% being the initial acquisition of the asset. Moreover, this does not even include the value of lost production from poor design decisions. While this same pattern may not occur for specific operating plants, the point is well taken that design and installation efforts can have a dramatic effect on life-cycle cost, something known intuitively to most of us. What should this mean in terms of our practices? Clearly, preliminary design efforts should include all the project costs, not just the initial costs. More recent benchmarking efforts, which surveyed some 60 major corporations and 2,000 large capital projects, have reached similar conclusions4 regarding typical performance:
1. Startup and operability of new assets have not improved in the past 20 years, despite the ample scope for improvement.

2. More than two-thirds of major projects built by process industries in the US in the past 5 years failed to meet one or more of the key objectives laid out in the authorization.

3. Outsourcing for contractor-led projects has grown from less than 10% to nearly 40% between 1975 and 1995. The hope that outsourcing of engineering projects would result in lower engineering costs has not been met. Engineering costs as a percent of total installed costs have climbed from 12% to 21% in the same period.

4. All-contractor projects were the worst on every performance metric.

The survey goes on to say of the best projects:

1. Instead of viewing the capital projects as the line responsibility of the engineering department, they view projects as the principal means by which the corporation's capital asset base is created. They view technology and engineering as elements in the supply chain that result in competitive products, not as nonintegrated functions.

2. The importance of front-end loading (FEL) for business, facility, and project planning integration was hard to overstate. While fewer than one project in three meets all its authorization business objectives, 49 out of 50 projects that achieved a best practical FEL index score also met all objectives.

3. An excellent project system consists of business, technical, and manufacturing functions working together to create uniquely effective capital assets.

4. The best projects were functionally integrated teams, which consisted of owner functions such as engineering, business, operations and maintenance, and outside engineering and construction contractors. Even vendors were included in many of the most effective teams.
5. The best in class all maintained some form of central organization that was responsible for providing the organization of the work process for front-end loading, a skilled resource pool in several core competencies, and provided the organizational and interpersonal “glue” that bound operations, business, engineering, and outside resources into an effective project process. Clearly, the decision to address reliability and life-cycle cost is best made during the planning and design phase. When decisions regarding these issues are made later in the life cycle, the return on investment decreases as plant problems reduce equipment availability and product capacity. The other concept often underestimated during the purchasing process is the role that “infant mortality,” or early-life failures, plays in equipment life-cycle cost. According to some studies discussed in Chapter 9, these early-life failures can account for 68% of equipment failures. Because of this, specifications must be verified by performing postinstallation checks at a minimum. For major equipment, the vendor should test the equipment prior to shipment and provide the required documentation. The equipment should be retested following installation to ensure the equipment was properly installed and not damaged during shipment. These same requirements apply to contractors as well as employees. Beta is currently in the process of modifying its design and capital projects methods to help ensure maximum asset reliability and utilization, lowest life-cycle cost, and lowest unit cost of production. This is described in the following case study. As shown in Figure 4-2, Beta believes that spending a little more time and money up front will pay off over the long term in improved profits and return on investment (ROI).
The Design Process Beta’s Stone Coal Creek plant was experiencing significant maintenance downtime. As you would expect, maintenance costs were also extraordinarily high. Maintenance was considered “the bad guy”— why couldn’t they just fix the equipment right, and fast, and cheap?
Figure 4-2. Life-cycle cost and cash-flow considerations.
An idealistic objective perhaps, but not very realistic under the circumstances. On reviewing the plant, several issues arose. The piping and pressure vessels were made principally of carbon steel; several heat exchangers were graphite or other brittle material. Most pumps (typically configured as a primary and an in-line spare) had been skid mounted for ease of fabrication and reduced expense, but from a vibration perspective had not been isolated from one another. As a consequence the primary pump (because it had not been properly aligned or balanced—too expensive) typically “beat the backup pump to death.” Some pumps were even “free-standing” for ease of repair when failure occurred. Reviewing process control practices indicated that very often the plant experienced transients of varying forms—transients in moisture content of the process stream, which formed acids that attacked the carbon steel, fouled the reactor, and dissolved the piping itself; and physical transients in the form of “hydraulic hammer,” which wreaked havoc with brittle components. Finally, in the rush to get the plant back on line, maintenance would frequently not “dry” the piping it had replaced, or align the pumps, or fit gaskets properly, or do many of the things that they knew were the right things to do; but under the pressure for production, they did the best they could. Some or all of this may sound all too familiar. So where does the root cause for the problems of the Stone Coal Creek plant lie? Maintenance would advise that operators are running the plant into the ground. Operations would advise that
mechanics just never seem to do things right the first time. The design and capital projects staff would be long gone, not caring much about any operating problems—everything looked pretty good when they left. After all, they got the plant installed on time and within budget. Most of us standing on the outside would conclude from the limited information provided that all three were at the root cause of poor plant reliability. What follows is a process, a model, and case histories on how plant design and capital projects can work more effectively with operations and maintenance to ensure world-class manufacturing, short and long term.
Design Objectives

As any good design engineer would tell you, one of the first things you must do is determine your design objectives. All too often, this effort does not include adequate reliability or maintainability objectives; nor does the installation effort include a process for verifying the quality of the installation effort (beyond the typical verification of process capability at a brief moment in time). A fundamental objective of every manufacturer should be to become the low-cost producer of its products, or at least as low as reasonably achievable. To do so, life-cycle costing must be considered, and, as previously noted from the SMRP data,1 in the vast majority of cases, it is not. There's much more to plant design and world-class manufacturing than process flows, process chemistry, and standard design methods. This is not to diminish their importance, but rather to recognize other issues that are often ignored, but are of equal importance.

In the Stone Coal Creek plant, design engineers had come under considerable pressure to get the plant designed, installed, and running to meet expanded market demand as quickly as possible. Concurrently, capital budgets were very constrained. With these constraints, they used less expensive material. To justify this, they had to assume that few, if any, transients, either chemical or physical, would be experienced during the operation of the plant (a boss of many years ago taught me how to spell "assume" with three syllables, after I had made a major, and incorrect, assumption on a project). In other words, they assumed that the process would always be in complete control. This was difficult, however, because they also put in place inexpensive instruments for process control, which were frequently operated near their design limits. This was only exacerbated by operators who were not particularly well trained (not in the budget), and who rarely used control charts or statistical process control (not in the budget, or part of their culture). Even more difficult was the fact that feed stocks often contained water and other contaminants, and were not frequently verified as to quality, because the laboratory was poorly equipped (not in the budget), and understaffed for the task at hand (not in the budget).

All these constraints and problems were typically ignored until a series of crises literally shut down or severely curtailed operation, incurring extraordinary maintenance costs and production losses. These costs were not in the budget either, and yet were far greater than what it would have cost to properly design and install the plant. As pressures grew for "production" to meet growing market demand, maintenance requirements were almost always sacrificed for present production, at the expense of future production. In the end, the Stone Coal Creek plant reached a stage where the business was near abandonment. After much effort, pain, and anguish, it is just now returning to some level of control. As a footnote, it was remarkable that the chief of engineering and the project manager were actually praised for the "good job" they had done in bringing the plant into operation below budget and on schedule. Clearly, new standards were needed for determining excellence in a design or capital project effort.
Key Questions

First, all parties should have a common understanding of reliability and maintainability.5 Reliability relates to the level to which operational stability is achieved because equipment does not fail—the equipment is available at rated capacity whenever it is needed, and it yields the same results on repeated operation. It could also be defined as the probability that plant or equipment will perform a required function without failure under stated conditions for a stated period of time. Reliability is an inherent characteristic of the system and therefore very much a part of the design. Typical plant equipment includes:
Rotating equipment: Centrifuges, Compressors, Conveyors, Fans, Motors, Pumps, Turbines

Stationary equipment: Piping, Pressure vessels, Tanks, Heat exchangers, Valves, Ventilation

Instrumentation: Electrical, Electronic, Pneumatic, DCS/process control

Electrical equipment: Switchgear, Motor controllers, Transformers, Motors
Maintainability relates to those issues that facilitate equipment repair—minimal time, effort, and skill, and specified material to return it to reliable operation as quickly as possible. Some key questions related to reliability and maintainability should have been asked during the design, and before the capital authorization, to help define requirements for manufacturing excellence and ensure business success:
What are my reliability objectives for this plant related to availability and uptime, e.g., 95% availability, operating at 98% rate/yield, and 98% quality, i.e., 91%+ uptime?
What are similar plants experiencing? Why?
Are these similar plants measuring losses from ideal using uptime or OEE metrics?
What losses are being experienced due to poor equipment maintainability and reliability?
How will we acquire equipment histories and operating and maintenance experience prior to the design effort?
What are the key design parameters that will ensure high (or low) reliability?
What are the current procedures, practices, and standards for this type plant/equipment?
What are the assumptions being made regarding reliability of process control, e.g., zero process chemistry and physical transients?
What are the sensitivities of the equipment and processes to the foreseeable transients? What are the inherent control parameters, limits, and capabilities?
What kind of maintainability requirements should be included? For example:
Access and lay-down space
Handling—lifting, installing, removing
On-line and routine inspection capability, including condition monitoring and PDM
Tagging or labeling of valves, pumps, and components
Color coding of piping
Industrial hygiene requirements, e.g., lighting, toxic exposure limits, ventilation requirements, noise limitations
Protective coating requirements
Documentation, procedures, and drawings requirements
Special handling, tools, etc.
What are/should be the training requirements for operators, mechanics, and technicians? How and when will this be accomplished? Should operators and mechanics participate in the startup and commissioning process?
Are there specific ergonomic issues for this operation that must be addressed?
Are standard suppliers already in place? How will standardization of equipment design be ensured? What are the existing guidelines of supplier alliances?
Are suppliers of major equipment required to include a detailed bill of material for their equipment, including a magnetic disk in a specific format?
When is the design going to be presented in detail to operations and maintenance staff for their input? What is the methodology for this? Note: “Drawings over the wall on Thursday, pm, and expecting comments by Monday, am” is NOT a review process.
Will a senior operator and maintainer be on the project team?
How will spares and PM requirements be determined? Will the supplier(s) perform an RCM or FMEA analysis to support definitive requirements?
Will contractors be used for the installation effort? Have specific validation criteria been established for the quality of their work? Will they deliver “an effect,” e.g., reliability, or a set of equipment?
What commissioning standards have been set for the equipment (and the process)? For example:
Equipment:
Vibration levels at specific frequencies and bands are below certain levels.
Lubricants validated as to specification, quantity, and quality.
Electric power of high quality, e.g., no harmonics, surges, sags, or ground loops.
Motors have proper starting current, low cross-phase impedance, proper capacitance, and resistance to ground.
Infrared scan shows no hot spots, e.g., good electrical connections, motor controllers, transformers, quality lagging, quality steam traps, etc.
Leak detection shows no air, nitrogen, compressed gas, or vacuum leaks.

Process:
Proper temperatures, pressures, flows for key processes.
Process is sustainable for weeks at proper yields, conversion rates, cycle times, etc.
Control charts for key process variables are within defined limits during sustained periods.
Operators demonstrate ability to properly operate pumps, valves, and other equipment.
Limited transients occur in both physical process and/or chemical process.
Process control is readily apparent for all key process variables.
When and how will purchasing participate in the design effort? What is its role?
When and how will stores participate in the design or installation effort? Storing for reliability during the construction phase is very important, and ensuring stores capability for installation support is also very important.
What is the philosophy or methodology for resolving conflict—lowest installed cost or lowest life-cycle cost?
Are we willing to accept incrementally higher engineering costs for a longer, yet more comprehensive design process?
The old method for design used to develop the Stone Coal Creek plant is shown in Figure 4-3. A better model, which captures these issues, is shown in Figure 4-4. Granted, the proposed model will require more time, more communication, more teamwork, etc., but when most manufacturers typically operate a plant for 30+ years, wouldn't an extra 6 months be worth the effort and expense? Certainly, it would have been for the Stone Coal Creek plant, and for Beta International as a whole. Additional suggestions and case histories follow to illustrate these points.
Operations and Maintenance Input In the ideal world, comprehensive loss accounting would be routinely available to define the causes of major losses and to lend itself to better design efforts. Root cause failure analysis would be common and much would have been learned regarding how to better design future
equipment and processes. Comprehensive equipment histories would be routinely used to determine where the major problems (and losses) were and also to support better design. Unfortunately, this is rare, and equipment histories are typically limited.

Figure 4-3. Traditional design and capital projects process.

So, how do we get operations and maintenance input from the beginning of a design effort? Beta International has now implemented the process shown in Figure 4-4 and described in the following text. The project lead should contact leaders in the operations and maintenance departments and inquire about major losses and the principal reasons for them as they relate to design. Three or more case histories should be identified and briefly documented to help facilitate the thinking process for improvement. A focus group consisting of maintenance personnel (mechanics and engineers), operations (operators and production supervisors), purchasing and stores personnel, and design/capital project engineers should be convened. After an introduction as to the purpose of the focus group—getting operations and maintenance input for the coming design—these case histories should then be presented to the focus group to start the group thinking about where the potential opportunities are and what might be done to capture them. The goal is to identify major opportunities for improving plant reliability through "designing out" current problems in existing operating plants and thereby reducing life-cycle cost (as opposed to installed cost). As a
bonus, you also get higher uptime, lower unit cost of production, and the opportunity for higher market share and operating profits.

Figure 4-4. Design and capital project process flow model.

Once these opportunities are identified, then the group should work with design to help develop a process for resolving major issues. Input points should be identified by the group for continuing review of the design, linking key deficiency areas to solutions and various stages of the design and installation effort. A formal design review plan should be developed for each major opportunity (previously known as losses). Finally, a formal review process should be defined for providing the details that were only outlined by the plan. A good comprehensive design review process would not simply be dropping the drawings off to operations and maintenance on a Thursday afternoon, and expecting feedback on Monday morning. The project engineer should be required to explain in detail the overall design, as well as the basis for selecting each major piece of
equipment. Feedback should be provided by those impacted by the design, and this information should be used to resolve potential conflict and to develop detailed requirements for maintainability and reliability.
Estimating Life-Cycle Costs Often a tedious job, estimating life-cycle costs should not be viewed as a trivial task. Panko6 provides a model for this effort, which essentially requires developing a spreadsheet and estimates for the following parameters:
Project title
Project life, years
Depreciation life, years: Year 0, Year 1, Year 2, Year 3, Year 4, etc.
Initial investment cost:
Design and startup
Training
Equipment
Material
Installation
Commissioning
Total initial investment
Operating costs:
Operating cost
Maintenance cost
Utility cost (power, fuel, air, water, HVAC, other)
Spare parts inventory increase
Historical production losses (due to poor design)
Rebuild/replacement
Salvage/residual value
Other expenses
Tax savings on depreciation
Cash flow
Discount factor
Discounted cash flow
Cumulative present value
Present value

Estimates should not be limited to only these parameters, which demonstrate the principle and serve as a model for use. As is demonstrated in this chapter, the input of maintenance and operations in providing historical experience and information for developing these estimates is critical, especially regarding production losses. Additional examples of the impact of poor design practices on operating and maintenance costs at other Beta plants follow. Perhaps these will trigger ideas that will help in your design efforts for ensuring lowest life-cycle cost, maximum uptime, lowest unit cost of production, and maximum profits.
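Before turning to those case histories, the spreadsheet logic that Panko describes can also be sketched in a few lines of code. This is a minimal illustration, not Beta's model or Panko's published spreadsheet; the investment, yearly cash flows, project life, and discount rate below are hypothetical.

    # Minimal life-cycle cost / discounted cash-flow sketch. The structure
    # mirrors the parameter list above: year 0 carries the initial investment,
    # each later year carries the net of benefits and operating, maintenance,
    # and loss-related costs, and the flows are discounted and accumulated.

    def life_cycle_npv(initial_investment, yearly_net_cash_flows, discount_rate):
        """Net present value and cumulative present value by year (year 0 = investment)."""
        npv = -initial_investment
        cumulative = [npv]
        for year, cash_flow in enumerate(yearly_net_cash_flows, start=1):
            discount_factor = 1.0 / (1.0 + discount_rate) ** year
            npv += cash_flow * discount_factor
            cumulative.append(npv)
        return npv, cumulative

    # Hypothetical 10-year project: $1,000K invested up front, $300K per year
    # of benefit less $40K per year of operating and maintenance cost, at 10%.
    npv, running_total = life_cycle_npv(
        initial_investment=1_000,               # $K
        yearly_net_cash_flows=[300 - 40] * 10,  # $K per year
        discount_rate=0.10,
    )
    print(f"Net present value: ${npv:,.0f}K")
    print("Cumulative present value by year:", [round(v) for v in running_total])

Salvage value, tax savings on depreciation, spare parts inventory changes, and historical production losses from poor design fit the same pattern: each simply adds to, or subtracts from, the net cash flow of the year in which it occurs.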
Additional Case Histories Turkey Creek Plant. This plant had historically used carbon steel heat exchangers in a process that was fairly corrosive. At one point management had considered using titanium, but the initial cost was approximately six times that of the carbon steel. For as many heat exchangers as it had, replacing all heat exchangers would cost some $10M and was considered prohibitive. At the same time, the Turkey Creek plant seemed to be constantly plugging tubes, or replacing or repairing heat exchangers, in one way or another. One of the engineers had run some calculations on the high maintenance costs and had concluded that if the maintenance costs were added to the initial costs for the exchangers, titanium was still a better choice, but it would have taken several years
to reap the return. Given limited capital budgets, the plant struggled along with carbon steel. However, the engineer had neglected a key cost component in the analysis. On reviewing the system, it turned out that deterioration in the heat exchanger performance was routinely resulting in minor delays in the production process. These occurred when heat exchangers were taken off line briefly for tube plugging, when plugged tubes reached a level where full rate could no longer be sustained, and when exchangers were simply off line for replacement. These losses seemed relatively minor and tolerable, until a full accounting was taken over a year-long period. It turned out that when the value of these production losses accumulated over a year and were included, titanium heat exchangers would have paid for themselves within the first year of installation through improved production alone. The reduced maintenance costs would be a bonus, and the ability to reallocate resources to more productive efforts would also be a bonus. The Turkey Creek plant replaced the carbon steel heat exchangers with titanium ones. Warco Plant. Warco was a relatively new production plant, but after 2 years was still having difficulty achieving its output on which the plant was justified for investment purposes. In fact, total production losses from the plant were valued at some $5M, making its return on net assets poor. On reviewing the plant, it was found that insufficient isolation valves had been provided to be able to isolate specific portions of the production line without having to shut down the entire line. As a consequence, on more than one occasion, the entire line had to be shut down for maintenance, resulting in substantial loss of production. Further, very limited analysis was performed regarding equipment criticality and the development of a critical bill of material (maintenance and operations were not involved in this “analysis”), and, as a consequence, insufficient spares were purchased as part of the capital project. This resulted in the unavailability of spares and loss of production capacity, as well as incremental maintenance costs. Further, instrumentation was difficult to access for routine maintenance and calibration, and the instrumentation was considered to be marginally acceptable for the application. As a consequence of difficult calibration of marginally acceptable instruments, the process was often not in control, resulting in production losses associated with poor quality and, over the long term, accelerating
corrosion of stationary equipment. While no major losses have yet occurred from the corrosion, they can be expected in the future. A $1 million improvement project was initiated to successfully eliminate these design defects, but the $5M already lost will never be recovered. Stamper Branch Plant. This plant had been operating for approximately 1 year at a level well below expectations. Production losses from poor performance were estimated in excess of $3M, not to mention extraordinary maintenance costs, which were running over 6% of plant replacement value. On reviewing the plant, it was determined that the pumps were of a new design from a new company with no demonstrated reliability record, but were quite inexpensive from an initial cost perspective. The pumps “contributed” significantly to lost production, as well as increased maintenance costs. The plant compressor was also selected on the basis of low bid, over the objection of the corporate compressor specialist. Likewise its unreliability “contributed” to major maintenance costs and plant downtime. The bulk storage area for the finished product also had several wrong and/or incompatible materials, resulting in extraordinary maintenance costs and a higher injury rate from more frequent exposure to a hazardous environment. Isolation valves were also insufficient in number and design to allow for on-line inspections of critical equipment, resulting in a run-to-failure mode of operation for portions of the plant. The specifications for the pumps were changed, and a small upgrade project totaling some $60K eliminated or mitigated most of the problems.
Payback Analysis—Too Simple and Too Expensive—Payback Is Hell! Finally, some discussion on the use of the payback analysis is appropriate. It’s apparently a highly popular method, particularly among accountants and senior managers, to justify, or reject, any given project. While perhaps reasonable for a first-cut review, this approach may not be the best method for any given project, as the following example demonstrates. A more rigorous life-cycle cost analysis may be the best choice for the business.
As noted previously, according to the Society for Maintenance and Reliability Professionals (SMRP), 86% of the manufacturers surveyed do NOT use a life-cycle cost model when designing new capital equipment projects, generally preferring to use the more common approach of minimum adequate design (MAD), which is limited by the lack of accurate knowledge on minimum adequacy and is driven by constrained capital budgets and demanding schedules. As is illustrated in the following text, the use of "payback" may not be the best approach. For example, suppose we have a small project, outlined as follows:

Anticipated cost: $1,000K
Annual estimated benefit: $333K (from production debottlenecking)
Payback period: 3 years (our maximum)
However, among other policies related to capital projects, the company has recently implemented a project review process for all projects over $100K, which requires a fairly thorough review of the project by senior shop floor personnel, as presented by the project engineer. In this case, we’ve done this and our operators and mechanics have advised:
Some material should be stainless steel for much longer life.
Additional instrumentation is needed to better control the process.
Spares are inadequate.
Access and lay-down space are inadequate as shown.
The best (most reliable, easiest to maintain) pumps are not specified.
Unfortunately, taking their advice increases the cost of the project to $1,200K, without any obvious increase in benefit above the $333K already anticipated. If we follow their advice, we no longer meet the payback requirement, so we initially decline their suggestions, and the added cost.
Figure 4-5. Historical maintenance cost for new capital assets (% of asset replacement value).
As it turns out, however, we’ve also reviewed our historical costs for maintenance as a percent of plant replacement value (PRV) and found the data shown in Figure 4-5. We’ve also done further research and found that:
Maintenance costs at the best comparable plants are sustained at nearly 2.5% of PRV.
Our unplanned equipment downtime is running 5% or more, vs. a best in class of 1%.
Our production losses are running 5%, largely due to instrumentation, vs. a best of 1%.
If we could reduce those production rate losses and minimize the maintenance costs by taking the advice of the people who are in a reasonable position to know what the problems are, then perhaps we could reduce future costs or increase future benefit. So, we review the potential impact of their suggestions, and find that it's reasonable to expect the following:

1. Maintenance cost for the new equipment will be 2.5% of replacement value, from the beginning.

2. Production losses will actually be further reduced, and "reliability debottlenecking" will have a positive effect as a result of this "improved" capital project. We value this at a minimum of $50K per year.
Table 4-1 Discounted Cash-Flow/Life-Cycle Analysis—Base Case.

Year                          1     2     3     4     5 . . .    10
Value                       333   333   333   333   333 . . . $333K
Minus: Maintenance Costs     60    50    40    25    40 . . .  $40K
Net Cash Flow               273   283   293   308   293 . . . $293K

Project Net Present Value @10% DCF = $1,784K
Rather than do a payback analysis, we now do a life-cycle analysis, assuming a 10-year life, and using the discounted cash-flow (DCF) technique. In the base case, the “payback” is shown in Table 4-1. However, considering the improvements suggested results in an improved return from the base case, as shown in Table 4-2. Taking the advice of the shop floor in this case minimizes future losses and costs through the design and:
Yields a 22% greater ROI on the project
Returns an additional $385K, nearly double the $200K incremental initial investment
Table 4-2 Improved Design, Resulting in Reduced Costs and Improved Output.

Year                          1     2     3     4     5 . . .    10
Value                       333   333   333   333   333 . . . $333K
Minus: Maintenance Costs     30    30    30    30    30 . . .  $30K
Plus: Increased Output       50    50    50    50    50 . . .  $50K
Net Cash Flow               353   353   353   353   353 . . . $353K

Net Present Value @10% DCF = $2,169K (+22%)

Put a different way, it could also be said that taking into account the opportunity cost of the base case, the initial capital investment is
really $1,000K plus the opportunity cost of $385K, or $1,385K, when compared with the better scenario, which reduces maintenance costs and increases reliable production capacity. Of course, we’ve had to make a number of simplifying assumptions here, but none of these is as simple as the so-called payback analysis. Further, the data used for maintenance cost as a percent of plant replacement value are very common; production loss due to unreliable equipment is likewise very common; and the benefit to be achieved is common. Much of this loss and cost can be effectively eliminated in the design stage, if we look more fully at our equipment histories and our losses from ideal production, and why they are occurring. It is critical that we begin to think more in terms of life-cycle costs, and take more seriously the advice of those who are closest to the problems in our plants—the shop floor! To do otherwise could lead us to a situation where payback analysis, though simple, is expensive, and payback is hell!
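The comparison is easy to check. In the sketch below, the only inputs are the benefit, maintenance, and increased-output figures shown in Tables 4-1 and 4-2, discounted at 10% over the 10-year life; as in the tables, the present value is computed on the net cash flows, with the initial investment compared separately.

    # A check of the two tables: discounting the yearly net cash flows at 10%
    # reproduces the roughly $1,784K base-case value and the roughly $2,169K
    # value for the improved design, about 22% greater, or roughly $385K more.

    def present_value(cash_flows, rate=0.10):
        return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

    # Base case (Table 4-1): $333K benefit less maintenance of 60, 50, 40, 25,
    # then 40 per year through year 10 (all in $K).
    base_maintenance = [60, 50, 40, 25] + [40] * 6
    base_case = [333 - m for m in base_maintenance]

    # Improved design (Table 4-2): $333K benefit, $30K maintenance, plus $50K
    # of additional output each year.
    improved = [333 - 30 + 50] * 10

    pv_base = present_value(base_case)       # about $1,784K
    pv_improved = present_value(improved)    # about $2,169K
    print(f"Base case PV:       ${pv_base:,.0f}K")
    print(f"Improved design PV: ${pv_improved:,.0f}K")
    print(f"Difference:         ${pv_improved - pv_base:,.0f}K "
          f"({(pv_improved / pv_base - 1) * 100:.0f}% greater)")

Swapping in different maintenance or output assumptions immediately shows how sensitive the result is to the kind of input the shop floor can provide.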
Summary

These experiences are not unusual, and at Beta International pressures related to budget and schedule for major capital projects had historically outweighed issues related to designing for uptime, reliability, unit cost of production, and lowest life-cycle cost. Lowest installed cost and minimum adequate design (without effectively defining adequacy) reflected the dominant management method for capital projects. However, it did not provide the best business position for maximizing profits long term. New standards and a new culture are being implemented at Beta and must be nurtured so that the focus is on lowest life-cycle cost, maximum reliability, and uptime. Indeed, other organizations have reported that designing for life-cycle cost can add some 5% to the cost of up-front engineering and front-end loading. But, they also report that this cost is readily offset by reduced engineering costs at the back end of the project. The net effect that they report is no increase in engineering costs.7 These processes will help ensure maximum output, minimum unit cost of production, and maximum profits and return on net assets. However, simply improving the design and capital project process will not be sufficient. We still must have good procurement, stores, installation, operation, and maintenance practices. Those are discussed in the next chapters.
References

1. Society for Maintenance and Reliability Professionals, Spring '96 Newsletter, Barrington, IL.
2. Blanchard, B. S. Design and Manage to Life-Cycle Cost, MA Press, Forest Grove, OR, 1978.
3. Blanchard, B. S. Maintainability—A Key to Effective Serviceability and Maintenance Management, John Wiley & Sons, New York, 1995.
4. "The Business Stake in Effective Project Systems," Independent Project Analysis, Inc., Presentation to The Business Roundtable of Western Michigan, Grand Rapids, MI, 1997.
5. Society for Maintenance and Reliability Professionals, Fall '95 Newsletter, Barrington, IL.
6. Panko, D. "Life-Cycle Cost Analysis Determines Cost Benefits," Society for Maintenance and Reliability Professionals, Summer '97 Newsletter, Barrington, IL.
7. Johnson, W. "Designing for Reliability," Presentation to the Society for Maintenance and Reliability Professionals, Annual Conference, October 3–4, 2000.
Procurement Practices
5
The quality of our offerings will be the sum total of the excellence of each player on our supply team.
Robert W. Galvin

Some time ago, Beta International began an intensive effort to review and improve supply chain performance. This effort addressed the entire supply chain from raw material to delivery of finished product to the customer, but particular attention was given to consolidation of the number of suppliers and getting better pricing from those suppliers; improved production planning and inventory turns on finished product; and, to a lesser extent, turns on stores/spares. Beta's management had come to understand that good supply chain management required excellence in manufacturing. Plants had to be operated in a reliable, readily predictable manner to produce high-quality product that could be easily integrated into the supply chain and distribution strategy being developed. Plants that could not operate in this manner would make it extraordinarily difficult to meet the challenges of the global supply chain strategy being developed. The performance of most of Beta International's manufacturing plants, as measured by weekly production rates, uptime, unit costs, etc., was in need of substantial improvement. Some business units were even purchasing a considerable amount of product from competitors
to make up for plant production shortfalls and meet contracted customer demand. Further, as already noted, a benchmarking effort indicated that Beta's aggregate plant asset utilization rate was typically no better than the average range of 60% to 80%, as compared with a world-class level of 85% to 95%. On-time delivery performance for customer deliveries was typically 85% to 90%, as compared with a world-class level of 99%+. Reactive maintenance was typically over 50%, as compared with a world-class level of less than 10%, indicative of a highly reactive and often poorly functioning manufacturing operation. Many plants literally could not reliably anticipate from one week to the next how much product could be produced, or at what quality, because Cpk, a measure of the process's ability to hold product within specification, was typically at 1 or less, compared with a world-class level of greater than 2. Some plants had production variances from their plans of 50% or more, from week to week. Until the company achieved manufacturing excellence, global supply chain objectives would be very difficult to achieve. Clearly, manufacturing excellence is a necessity at Beta, not an option, if the company is to achieve its key strategic objectives for supply chain management.

Further, Bob Neurath had some concern that his message for improved supply chain performance was being misinterpreted by some people in procurement and some at the production plants. For example, he had heard through one of his staff that one of the plants had been advised by the stores manager that someone, apparently under the auspices of achieving supply chain objectives, had stated they were coming to remove half the spare parts in stores, because most of them had not "moved" in over a year. However, he understood that stores inventory levels (discussed in the next chapter) were secondary concerns when it came to supply chain management and, more importantly, that if the parts were not readily available for plant maintenance, it could result in additional downtime, further undermining broader supply chain objectives. This was particularly true for those plants that operated in a highly reactive mode. Granted, the best plants operate with a stores level of less than 0.5% of plant replacement value, compared with Beta's typical level of 1% to 1.5% or more. However, the best plants were typically operated much more reliably, had higher uptimes, lower reactive maintenance levels, and lower maintenance costs. They didn't need the level of spare parts required by most of Beta's plants, and, when they did, they could more reliably anticipate those needs, allowing them to
more effectively create supplier partnerships, use blanket orders, operate in a consignment mode, etc. Bob Neurath understood that, given the current level of performance in many Beta plants, one of the worst things to do at this point was to arbitrarily reduce spare parts, putting the plants at risk on uptime performance and compromising their ability to produce and deliver on customer obligations. After some discussion with his key staff, he concluded that the first order of business was to ensure manufacturing excellence, and as part of that to manage feedstock supplies more effectively. Management of stores, as described in the next chapter, will be a part of the overall improvement process but will have a lower overall priority.

To achieve the goals associated with supply chain management—low-cost producer for each product, on-time deliveries of 99% or better, finished product inventory turns of 20 or better, etc.—Bob Neurath knew there were several issues that had to be addressed, particularly those associated with manufacturing excellence. He also understood that purchasing had a key role to play, particularly in the area of strategic supplier alliances, and in ensuring better specifications for plant reliability and uptime when these alliances were developed. Going with the low bid simply because it was the low bid was apparently creating several hardships at the operating level, which he wanted to alleviate. Examples of Beta's efforts to address these two issues, in particular, are described in the following sections. Lendrum1 also provides additional information on the development of partnerships and alliances.
A Model for Development of Strategic Alliances—Goose Creek Plant

When discussing strategic alliances, many people immediately revert to "discounts for volume." While this may be part of a strategic alliance, the better alliances usually involve more than just a discount and volume formula for a given product. Indeed, at Beta's Goose Creek plant, a strategy was developed that worked exceptionally well, involving more than volume discounts. Beta's Goose Creek plant was generally operating at a more advanced level of performance than most other Beta plants, though not world class. Asset utilization rate was in the 85%+ range; product
unit costs were lower than most; advanced operating and maintenance practices (described elsewhere) were being implemented, resulting in lower operating and maintenance costs; on-time/in-full rates were at 90% and better in any given month. Overall the plant was reasonably well run. However, as noted, the plant was not world class. The purchasing manager, an engineer who had also worked in production and in maintenance (a rare combination of experience), had concluded that supporting the global supply chain initiative would require extraordinary effort. One area that concerned him was the actual process for creating these strategic alliances. After considerable thought, he developed and executed the plan of action described in the following text. If manufacturing excellence was the objective, a key issue for purchasing was how suppliers working in a strategic alliance could help achieve excellence. It then seemed critical to analyze operating results—uptime, product and process quality, equipment life (mean time between equipment failure and total life), process efficiency, and operating and maintenance costs for given sets of equipment. Further, it was also critical to understand key production losses that were resulting from problems with major critical equipment. Having participated in an uptime optimization improvement effort (described in Chapter 2), which includes identification of major losses, the purchasing manager understood that a certain type of pump used in production was causing major losses. These losses were characterized as losses in uptime due to equipment failures and as out-of-pocket losses due to extraordinary repair costs. He decided to use this as a "test case" to construct a supplier alliance with a major pump supplier to eliminate most, if not all, of these losses. Working with maintenance and production, he contacted the existing supplier and two other suppliers that had been suggested and set up a meeting with each. At this meeting, which also had representatives from production and maintenance, he outlined the current situation with the existing pumps—production losses and maintenance costs attributable to the pumps, their problems as they understood them, etc. He also clearly stated that the goal was to establish a vendor alliance with a single supplier that would eliminate these problems. He made it very clear that continuing to buy spare parts for repairs from the existing vendor and operating in "forever fixing" mode was not an option in the long term and that having these problems "fixed forever" was a requirement. Privately, he did have some misgivings about the current vendor, because it
seemed to be just as happy to sell spare parts for repair, rather than aggressively seek to resolve the problems. Vendors were encouraged to look at current operating and maintenance practices for the pumps and offer any improvement suggestions in the short term. Preliminary goals were put forth for vendor comment regarding:

Production losses due to pump failures—output, product quality, yields, etc.

Mean time between pump repairs

Annual operating cost

Annual maintenance cost—PM, overhaul, and repair

Overall pump life

Initial cost

With this in mind, he asked each supplier to offer a proposal on how Goose Creek could minimize its losses, minimize its unit cost of production, and maximize its uptime. In return, Goose Creek would commit essentially 100% of its business for these pumps to the supplier. Additional opportunities for increases in business in other areas would be contingent upon achieving specific performance criteria in this area, e.g., reduction in losses, improved average pump life, quality technical support, access to results of R&D effort, etc. Further, the Goose Creek plant required that the pump supplier perform a failure modes and effects analysis (FMEA)/reliability-centered maintenance (RCM) analysis on its pumps, in conjunction with failure history data, and use that analysis as a basis for its recommendations for spare parts and PM. It was no longer acceptable to recommend spare parts and PM on a take-it-or-leave-it basis—the recommendations had to be backed by solid engineering and statistical analysis. This was to be a true partnership effort. As an aside, the pump suppliers' review of the operating and maintenance practices resulted in several improvement recommendations in the areas of pump operating procedures (for minimizing cavitation), in pump lubricating practices, in alignment and balancing practices, in overhaul practices, etc. These were incorporated immediately into current operating and maintenance practices for the pumps, and are summarized in the following section.
Pump Reliability Best Practices

At the Goose Creek plant a pump improvement team was established, consisting of specialists from the supplier, two senior operators, the plant engineer, the machine shop supervisor, two senior mechanics, and a condition monitoring specialist. The group met to review best practices for pump operation and repair and to develop an improvement plan that they would lead in their areas. Over the course of several days, they developed several improvement actions:
Operator tours were established for each shift to check/log items such as pump suction and discharge pressures, leaks, lubrication levels, unusual noises such as cavitation, etc., and to record the information as required.
A detailed pump startup and operation procedure was developed. While it varied for each pump service, it typically included some 20 steps/checks, any one of which, if improperly done, could result in pump damage or unsafe startup. Operator understanding of basic pump operating principles was found to be in significant need of improvement.
An in-line spares policy was developed, which, for each application, included consideration of issues such as potential effects on production downtime, effect of buffer stocks, ease of changeover, ease of detecting developing problems, historical pump reliability and fitness for purpose, local environmental conditions, and ease of access.
When in-line spares were being used, a policy was developed to ensure their reliability when needed, e.g., alternate operation, or mechanical and process stream isolation; PM of the back-up spare; pump status card and history; spares availability; etc.
The pump repair shop was improved by getting work flows and tools better organized, cleaning and tidying the area, adopting better repair standards, establishing a "clean room" for precision work, and applying a general sense of pride and craftsmanship.
Precision standards were established for shaft precision, balancing, lubrication, seals, isolators, foundations, installation and startup, a precision alignment checklist, etc.
Additional information can be obtained from The Pump Handbook Series published by Pumps and Systems Magazine.
Supplier Selection

After reviewing all proposals and having several discussions, Goose Creek finally selected a supplier and signed a formal contract. It included key metrics for pump performance, i.e., pump life—mean time between repairs and overall life; monetary losses from pump failures; production losses and repair costs; and annual maintenance costs, including PM requirements, overhaul, and repair costs. These were factored into the RCM/FMEA analysis, which was acceptable to Goose Creek engineers for defining PM and spare parts requirements. The supplier also warranted the pumps to meet certain minimal requirements relative to these measures, so long as the Goose Creek staff performed certain tasks related to the operation and maintenance of the pumps. Of course, the supplier provided the standard warranty regarding the pumps being free of defects. Finally, the pump supplier also agreed to keep certain critical (and expensive) parts available in its facility and to make those available within 24 hours of notice to ship, and made certain other parts available on consignment in the Goose Creek stores. Included in the contract were data regarding baseline performance of the current operation developed using information that had already been developed and shared with the suppliers. Deadlines were included in the contract for new pump delivery and installation, including a small liquidated damages clause for failure to deliver on time. The pump supplier also included key training and technical assistance for the initial storage, installation, startup, operation, and maintenance methods. As part of on-going support and problem resolution, the supplier also included in the contract a process for continuing and regular communication, and for the methodology for failure analysis of any pump failures, e.g., root cause failure analysis and FMEA methodologies. The pump supplier detailed its process for quality assurance in
the pump manufacturing effort, and included time for Goose Creek staff to tour its manufacturing plant and review its production methods and training processes for its technicians. As noted, all these issues were formalized in a contract between the Goose Creek plant and the pump supplier. The process by which this agreement was reached is summarized as follows:
Analyze current operating results and losses from ideal as a result of equipment performance, e.g., downtime losses, operating and maintenance costs, product and process quality, equipment life, etc.
Set operating objectives and cost reduction targets, including operating and maintenance costs, purchase price, training and technical support, etc.
Baseline current operation regarding key measures for partnership.
Commit a minimum % of business to the partner. Tie increases in % of business to specific performance criteria, e.g., quality, cost, technical support, targeted improvements, access to results of R&D, etc.
Set performance standards and dates for achievement, referencing baseline data for improvements.
Agree upon metrics and techniques for their measurement.
Provide for warranty of performance, e.g., better than any of competitor products, better than average of prior years, free of defects, minimal performance requirements, etc.
Define technical assistance requirements and commitments and consequences of not meeting commitments.
Baseline specifics of processes/products/practices.
Define the schedule for improvement in performance, including any lead times, forecasts, etc.
Define basis for frequent, clear communication; performance reviews; and basis for failure analysis process for product quality faults, etc., including report formats and frequency for communication and reporting.
Define QA process to be used for products, and include supplier plant tour.
Define equipment for prototype or pilot run, as necessary.
Define procedure for resolving any disputes.
Document and sign a formal agreement.
The results of the alliance were exceptional. Total initial costs for the pumps were actually somewhat higher, considering the training, installation support, and continuing technical support, but then so was the initial value. Production losses due to pump failures were essentially eliminated. Mean time between repairs for the pumps more than tripled, and overall maintenance costs were drastically reduced. Finally, this approach stands in marked contrast to that of Beta's Grundy plant purchasing manager. At this plant, he had received several defective pump shafts, ones in which the pump impeller was not perpendicular to the shaft, visibly so, because it didn't take precision calipers to see the angular offset. The purchasing people accepted the pumps for a 20% credit on the invoice, without checking with the machine shop or maintenance foreman to determine if the pumps could be salvaged at all, and if so at what cost. It was not a good trade-off.
Process for Improved Specifications for Equipment Reliability—Mossy Bottom Plant

As previously described, purchasing and the entire procurement process must be an integral part of improved reliability, equipment life, uptime, etc. In the best plants, purchasing is an integral part of the production team, helping to ensure maximum reliability in production equipment. Usually, however, ineffective communication and insufficient attention to reliability issues compromise the procurement process and the purchasing function. This was the case at Beta's Mossy Bottom plant. In helping production, maintenance, and engineering ensure maximum production capacity, purchasing must consider reliability and life-cycle cost as an integral part of the purchasing process. Reliability and value should be the critical measures for supplier selection.
And, in fact, the purchasing manager at Mossy Bottom was quick to assert that he did consider life-cycle cost, reliability, and value. Experience had shown, however, that he was often missing key elements, both in his thinking and in his practices, for ensuring maximum equipment reliability, and typically had been conditioned to think in terms of lowest delivered cost, as opposed to equipment reliability and lowest life-cycle cost. The standards for reliability, life-cycle cost, and value, while simple conceptually, can be very difficult as a practical matter. The purchasing manager at Mossy Bottom very often necessarily assumed that the specifications were adequate to ensure meeting the plant's needs. Unfortunately, many times the specifications did not specify requirements for maximum reliability, and, as we all know, it is often simply not possible to put everything into a specification. Further, Mossy Bottom's purchasing department would often substitute alternate vendors that would, at least ostensibly, meet a specification, but in fact had a poor track record. For example, at a recent reliability improvement team meeting, it was announced that corporate materials management had signed a long-term supplier agreement with a particular pump seal manufacturer. Most of the people at the meeting were very unhappy with the decision, essentially echoing the comment, "They'll look good at our expense—those seals require much more frequent replacement, and their technical support is poor." The maintenance department had first-hand knowledge of those suppliers that provide the best quality, highest value, and best service for seals, motors, pumps, etc., but was not consulted before the decision was made. On another occasion, purchasing had gone to a different supplier for suction cups that were used in the packaging area for product handling. It turned out that the new suction cups weren't nearly as reliable as the old ones, and created considerable downtime, even though the vendor provided them under the same specification. In this case the savings in the cost of the suction cups were far less than the lost uptime and production losses incurred as a result. The purchasing manager tried to make the right decisions for the business unit, but poor communication, combined with autonomous decision making, had resulted in poor decisions. Further, the departments weren't all held to common measures such as maximum uptime and minimum unit cost of production. Such problems (or opportunities waiting) should not simply be placed at the feet of purchasing, but more properly should be viewed
as driven by the lack of good communication and the inability to put everything into a purchase specification. In some circumstances, even when everything is in a specification, many suppliers with a history of poor performance will bid to the specification, only to supply unreliable equipment. Equipment histories must be put in place to identify reliable equipment and, combined with a formal communications process between production, maintenance, engineering, and purchasing, used in a reliability evaluation process to screen out those suppliers whose equipment falls short in actual use. Such was not initially the case at the Mossy Bottom plant. Writing better specifications, testing supplier equipment (in the factory and at installation where appropriate), and documenting problems and equipment histories would also substantially improve reliability in manufacturing equipment. After considerable review and a major effort between production, engineering, maintenance, and purchasing, the following processes were put in place at the plant.1-4 These methods are intended to offer a model for improved specifications rather than represent a comprehensive set of standards, and should be adapted for specific plant processes, needs, etc. Note also that these requirements apply to contractors as well, because contractors are being used to a greater extent.
Vibration Standards for Rotating Machinery

For critical equipment, the Mossy Bottom plant required that vibration standards be included in the specification for the equipment, and the supplier would be required to certify that its equipment meets those standards. The vibration standard chosen was General Motors Specification No. V1.0-1993, used to set maximum vibration standards for particular types of equipment at specific rotational frequencies. The GM specification is applicable to motors, pumps, and fans, as well as production equipment such as machine tools. Typically, the specification is applied to equipment larger than about 10 horsepower, unless criticality of the equipment warrants testing at lower horsepower. An additional source of information for specifying acceptable vibration levels is Hewlett-Packard Application Note 243-1, Effective Machinery Measurements Using Dynamic Signal Analyzers. When applying these standards, the following issues were addressed.
Pride2 advises that the following procedure should be followed when developing a specification for vibration levels for a given machine:

1. Obtain nameplate data.
2. Define location of measurement points, typically two radial and one axial on either end of the machine train, and two radial on the inboard of the driver and driven machine.
3. Define instrumentation requirements for resolution (number of lines) and sensor types. Hand-held sensors are generally not acceptable.
4. Obtain narrowband vibration spectra on similar machines, and establish acceptance levels for each narrow band, e.g., GM V1.0-1993. Differences in baseplate stiffness and mass will affect the vibration signature. Overall measurements are only for general interest, and not part of the acceptance criteria.
5. Calculate all forcing frequencies, i.e., imbalance, misalignment, bearing defect, impeller and/or vane, electrical, gear, belt, etc.
6. Construct an average vibration signature for the similar machines.
7. Compare this average vibration signature with one of the guidelines provided, e.g., GM Specification V1.0-1993.
8. Note any deviations from the guidelines and determine if the unknown frequencies are system related, e.g., a resonance test may be required to identify those associated with piping supports.
9. Collect vibration data on new components at each of the bearing caps in the radial and axial directions.
10. Compare the vibration spectrum with the average spectrum developed in step 6 and with the selected guidelines.
11. Any new piece of equipment should have a vibration spectrum that is no worse than that of similar equipment that is operating satisfactorily.
Critical machinery would also require a factory acceptance test of the equipment, typically including:

1. Driver uncoupled, including vibration and phase data.
2. Driver coupled to machine, including vibration and phase data, fully loaded at normal operating temperature. Also, consider collecting data during startup and coast-down to look for critical speeds; and for unloaded/cold, during warm-up, unloaded at normal temperature, or for other special cases.

Any failure of the equipment to meet the specification results in rejection. In some instances a "bump" test could be performed on all piping, foundations, frames, etc., to identify potential resonance-induced problems, and take action as appropriate. Finally, all tests should be conducted by a vibration analyst who has been certified by the Vibration Institute as a Level I or II Vibration Analyst.
Balancing Specifications

For balancing new and rebuilt rotors, the specification should include (1) the speed(s) for balancing the rotor; (2) vibration acceptance limits; (3) single or multiplane balancing; and (4) trim balance requirements at installation. ISO DR1940 Grades 1.0, 2.5, 6.3, and 16 are routinely used, with ISO Grade 6.3 having been the historical de facto standard. More recently best practice, particularly for critical equipment, has come to be ISO Grade 2.5 for most industrial applications, with ISO Grade 6.3 being reserved for noncritical applications. For more precision requirements, such as machine tools, ISO Grade 1.0 has become a typical requirement.
Resonance Specifications

In recent years, in an effort to reduce costs, many manufacturers have reduced the mass of much of their equipment, particularly in high-efficiency motors. This reduction in mass can lead to an
increase in resonance problems. Therefore, a resonance specification may be appropriate in the purchase specification. The resonance frequency of any component or frame should be at least 30% greater than the maximum attainable speed of the machine. Note, however, that it is fairly common to see machinery run at speeds significantly greater than design.
Bearing Installation Specifications

The specification should include a requirement that the supplier publish and use good bearing selection, handling, and installation procedures during manufacture of the required equipment, and should provide the user with the brand and model number of bearings used in rotating machinery. Further, L10 life for bearings should be specified, with a typical industrial requirement being an L10 of 50, meaning that 90% of the bearings purchased should last some 50,000 hours of operation. (Note: That's 6 full years at 90% run time.) Other considerations, such as ABEC number for bearing finish, and bearing fit-up, e.g., C3 or C4, should be included. Requirements for use of an induction heater and oil bath for bearing installation should also be included.
Machinery Alignment

The specification should include a requirement that the supplier publish and use good alignment practices for the supply of machine trains. A standard specification for dynamic check of alignment using a standard vibration specification for validation should be required. Include any belt- and pulley-driven machinery, including tensioning specifications.
Motor Specifications

Beyond the normal specifications, it may also be prudent to require that three-phase motors not have a cross-phase resistance difference of greater than 3%, or a cross-phase inductance difference of greater than 5%, between any two phases.2,3 This will minimize one phase running hot, which leads to deterioration in motor life. Motors should be balanced within about 0.05 in./sec vibration level at normal
turning speed. Vibration levels at 120 Hz should be less than about 0.05 in./sec. Levels in excess of this indicate a nonuniform magnetic field, due to voids in the shaft, eccentric shaft, stator faults, imbalance in the stator phases, etc. The feet of the motor (or other machinery) should be coplanar (flat) within nominally 0.005 in. (cast iron) and 0.015 in. (steel). Each motor should have a unique nameplate serial number.
General Requirements

1. Defects should not be found at bearing-fault frequencies. If they appear at a factory test or at startup, the fault should be corrected. Similarly, for belt-driven equipment, belt frequency and harmonics of the belt frequency should not appear.
2. Pump suppliers should provide grouting instructions and alignment specifications.
3. When setting up databases in a CMMS or predictive maintenance system, each piece of equipment and data collection point should have a unique identifier.
4. L10 life for bearings and a requirement for identifying the bearing make and model number for each should be provided by the equipment supplier.
5. Service factors for gearboxes and motors should be specified.
6. Run-out measurements should be specified as follows:2 maximum bore run-out: ±0.002 in.; maximum shaft run-out: ±0.002 in.; maximum face run-out: 0.0005 in.; zero axial end play.
7. Jacking bolts should be requested for larger motors, pumps, etc., to facilitate precision alignment.
8. Threaded flange holes should be counter-bored to avoid material being compressed on tightening, resulting in a soft-foot effect.
9. Instruction should be given to staff and vendors to ensure they do NOT lift motors and pumps by their shaft.
10. As a minimum, the general equipment information sheet shown in Figure 5-1, or similar, should be completed for
each major piece of equipment purchased and provided to maintenance and engineering.

This information reflects an example of the process for integrating the procurement function into a world-class reliability program. The key to ensuring that purchasing is an integral part of the production team is that it must be advised and conditioned to consider reliability issues, and it must be treated as an integral part of the team, not just someone to call when you want something purchased. These two areas are considered critical for the future success of Beta, and in particular for the Mossy Bottom and Goose Creek plants, and are already showing the following benefits:

1. Increased communication and teamwork between purchasing, engineering, production, and maintenance—a shift in the culture of the plant.
2. Improved equipment reliability and life.
3. Improved uptime and unit cost of production.
4. A sense of purpose regarding world-class manufacturing.

Beta's Mossy Bottom and Goose Creek plants are currently implementing these practices and have already achieved substantial improvement in these areas. These will be used as models for implementation at other plants within Beta's business units. Purchasing and procurement practices must be made an integral part of plant reliability for ensuring manufacturing excellence.

One final tip may be of use in improving supplier performance. At one of Beta's plants, the purchasing manager prominently displayed two placards: one labeled "Top Dog," with a cartoon of a happy puppy; the other labeled "Dog House," with a sad hound hanging low in a dog house. The suppliers that routinely met the plant's requirements for price, quality, delivery, service, etc., were rewarded with their names displayed under the "Top Dog." Of course, the ones who were less than satisfactory in their performance were treated otherwise. None of the suppliers wanted to be in the "Dog House."
Equipment Identification: ____________________

General—For each machine, the following should be obtained:
  Manufacturer; ID/Serial No.
  Drive-end and opposite-end bearing model Nos.; all other bearing model Nos.; all bearing L10 data
  Lubrication requirements—type, frequency, sampling points
  Other requirements—jacking bolts, rigid base plates, vibration disks

Motor or Driver:
  Rated hp; rated voltage; rated amps; efficiency
  No. of phases; No. of poles; No. of rotor bars; No. of stator slots
  Frame; service factor; rated rpm; type of coupling
  Drive and opposite bearing L10; temperature detectors; electrical classification

Driven Equipment:
  Model No.; No. of stages; speed of each stage; input speed (constant or variable)
  No. of vanes each stage; No. of blades each stage
  Drive and opposite end bearings; L10

Belt Driven:
  Belt ID; diameter of 1st sheave; diameter of 2nd sheave; input shaft speed; distance between centers

Gear Driven:
  Gear IDs; input shaft speed; output shaft speed; No. of input gear teeth; No. of output gear teeth; service factor

Figure 5-1. General machine information specification summary sheet.
Raw Material Cost Improvement

Raw material and feedstock purchases often represent some 50% or more of total manufacturing costs. Much effort is put into reducing direct labor costs, which in many larger manufacturing plants only represent 5% to 20% of total costs. If we could achieve a 10% reduction in raw material costs, this would amount to the equivalent of a 25% to 100% reduction in direct labor costs (this arithmetic is checked in the short sketch at the end of this section). Yet, only recently has Beta begun to look for new methods for minimizing its raw material costs. Certainly, Beta has all along been doing what many others are doing, that is, reducing the number of suppliers in exchange for higher volume and lower pricing; searching worldwide for suppliers that appear to have good value, not just low price; improving manufacturing performance to increase yields and reduce waste and scrap; etc. Recently Beta has decided to try several new initiatives. The first, related to developing strategic alliances for suppliers of critical production equipment, was described earlier. The second, however, is a more novel approach. As noted, some 50% of Beta's manufacturing costs are for raw materials coming into its factories. Each percentage reduction here has much higher leverage on total manufacturing costs than any other cost component. Granted, consolidating suppliers and constructing alliances should be done. However, what if Beta takes what it has learned, and is learning, and shares that with its key raw material suppliers? What if Beta sets up a procedure wherein part of the procurement qualification process is to compare its suppliers' operations to world-class standards, and to work with those suppliers to make sure these practices and standards are being applied? Beta has recently begun a test effort with a few of its key suppliers. Essentially, the process involves auditing key suppliers against best manufacturing practices; making some estimate of the potential savings possible using better practices; sharing with those suppliers Beta's application of best practice; and then negotiating with suppliers relative to improved raw material costs, sharing in the gains to be achieved. Intense competitive pressures worldwide appear to be lending credibility to this approach. Finally, while the information in this chapter provides considerable detail about rotating machinery, the intent is not to educate about rotating machinery but to provide a model that can be expanded to other equipment, and to show how to apply specifics
for technical requirements and the development of alliances with suppliers for all plant equipment. For example, specific types of equipment, such as compressors, fans, etc., will obviously require additional detail to ensure good specifications; and, perhaps more importantly, analogous requirements could be developed for fixed equipment using this model. The use of strategic alliances for equipment suppliers is a key part of the improvement process, but it involves not just cost and volume; it also involves improving operating performance. Raw material costs are driven by supply and manufacturing costs. Beta's sharing of best practice with suppliers should improve raw material costs. This information, like much of Beta's experience, is being shared as part of a process for continuing change and improvement.
Improving Supplier Reliability—A Case Study

Beta's Nekmi plant was planning on adding a major new production line to one of its current plants. In order to be competitive, however, this line not only had to be installed for under $100M, it also had to perform much more reliably than the current production lines. So, Nekmi's plant and project management sent the following questions to the suppliers and typically received answers similar to those indicated. These answers are not especially responsive to the questions and were more or less "sales-speak." After considerable discussion, the questions were revised as indicated, applying the concept of "Total Cost of Ownership."

1. Question—What roles do reliability and maintainability play in the engineering of the equipment you are quoting?

Typical answer: We've designed this equipment many times and work with plants to make them reliable.

The question was resubmitted to the supplier as follows: We're currently experiencing some 7.2% maintenance downtime with equipment similar to that which you are quoting. This is considered to be due to poor reliability and maintainability. Likewise, the lack of ease of changeovers is also affecting our ability to provide for rapid setup and low transition losses. We estimate this at some 3.5%. This totals some 10.7% in production losses, which
we find unacceptable and could make the project untenable. We also consider your reply nonresponsive to our question. Please provide information for and/or answer the following:

A. The definition, criteria, and standards you use for reliability and maintainability of the equipment being supplied, e.g., those established by the Society of Automotive Engineers, by the Society for Maintenance and Reliability Professionals, or by some other recognized body.
B. What are the most common failure modes for this type of equipment?
C. What downtime is typically experienced for each of these failure modes?
D. What is the mean time to repair for each of these failure modes?
E. What are the key operating and maintenance practices required to mitigate and minimize these failure modes?
F. Which critical spares are needed in light of these common failure modes? What is the risk of production loss without these spares?
G. Describe the methodologies being used for analyzing the reliability of the equipment, e.g., failure modes, effects, criticality analysis, reliability-centered maintenance, root cause failure analysis, etc. Give a specific example of each method being used for the equipment under supply.

2. Question—How can future downtime be prevented during the engineering phase? And what can be done to facilitate future troubleshooting activities?

Typical answer: We'll let you review the drawings and we'll do good automation.

The question was resubmitted to the supplier as follows:

A. In light of the failure modes analysis described above, how will you modify the design to eliminate or mitigate these failure modes?
B. In light of the fact that a lot of time is spent in mills already operating, talking to operators to find out the difficulties or get their ideas on how we can improve the
design, describe three or four design changes that have been made as a result of this review, and why. What additional efforts are currently ongoing for improved reliability and maintainability in the equipment?
C. Attached is a listing of several problems that we have had in reliability and maintainability. Please provide us with a description as to how you will address and resolve these problems in the design to ensure high reliability and maintainability.
D. Will approval of the drawings be permitted in sufficient time to allow for equipment modifications and still meet the project schedule? Please describe how, and provide a sample schedule.

3. Question—What is your reliability experience with this type of equipment?

Typical answer: This system has been installed in many places and operated very reliably. This equipment was also installed at the Yada Yada plant several years ago and is operating reliably; and more recently has been installed and commissioned at the Buda Buda plant, where we have had no special difficulties during startup and commissioning.

The question was resubmitted to the supplier as follows. At the Yada Yada plant:

A. What has been the average % maintenance delay and/or unplanned downtime?
B. What is the mean time between repair?
C. What is the mean time to repair?
D. What are the five most common failure modes?
E. What is being done to mitigate or eliminate these reliability and maintainability issues?

At the Buda Buda plant:

A. How long did the commissioning process take to achieve full, sustainable production rates?
B. What were the major problems encountered during commissioning?
C. Which design changes were made to minimize these commissioning problems?
D. What is the standard commissioning process? Please provide a description and outline of the process. We desire that the equipment run at least one full or typical production cycle while maintaining full production requirements. We also desire that this same test be repeatable within 3 to 6 months of initial commissioning. Please describe your process for meeting these requirements.
E. What has been the average % maintenance delay and/or unplanned downtime?
F. What is the mean time between repair?
G. What is the mean time to repair?
H. What are the five most common failure modes?
I. What is being done to mitigate or eliminate these reliability and maintainability issues?
J. Is it possible for our personnel to visit these plants to review their practices and performance?

4. Question—How are spare parts recommendations developed?

Typical answer: They are based on our experience and on feedback from our customers, separated into spare and wear parts, with expected lifetimes.

The question was resubmitted to the supplier as follows:

A. Which statistical methods and other techniques were used to convert this experience and feedback into intervals?
B. Please provide one example of an analysis of each major component in the new production line, wherein techniques such as RCM, FMEA, and PM optimization were used in conjunction with statistical failure and wear data to determine spare parts requirements and PM intervals.

5. Question—What level of training is required for maintainers at a production line of this type?

Typical answer: We do good training, and much of the equipment is automated,
mitigating the need for training (Oh really? Shouldn't this require more comprehensive training?).

The question was resubmitted to the supplier as follows:

A. Define clearly the mechanical and electrical maintenance training and operator training requirements.
B. Please provide a detailed outline of all training and support requirements that will be necessary for supporting the reliable operation and maintenance of the equipment.
Supply Chain Principles—A Strategic Approach

Beta International's Bob Neurath also felt that his company had to go beyond simply demanding lower prices and using competitive bids and volume discounts to achieve better operating results. Certainly the experience described previously supports this proposition. He understood that the supply chain's performance was only as strong as each link of the chain, and that demanding lower prices from one link in the chain might actually make that link weaker, to the detriment of the chain, and, more particularly, to Beta International. Since Beta had some significant success at its Goose Creek and Mossy Bottom plants, the thought was that this experience could be used to further develop the principles of supply chain management. On researching this area, and with a bit of luck, Beta found a process, described in the following text, that outlined what it believed would provide an excellent supply chain management model.5 The methodology requires a minimum of three parties as participants in supply chain improvement. Focus is given to the performance of the entire chain, not just any individual participating member. As Deming might say, we have to look at the performance of the "system," where the system is defined as the aggregate performance of three or more companies that are part of the supply chain. The methodology is reported to have achieved the following results with several large Fortune 500 companies:
1. 20% to 70% quality improvement
2. 30% to 90% shorter cycle times
3. 15% to 30% waste reduction
4. Threefold or more technology gains
5. Diminished hazards through shared risks

Consensus in decisions is built through the members of the supply chain, internally first, and then externally through the other members of the chain. Key to this is creating a "mandate team" within each company in the supply chain, made up of a "champion" in each of the following areas:

1. Designability
2. Produceability
3. Affordability
4. Saleability
5. Threefold or more technology gains
6. Other appropriate capabilities

These champions on the mandate team develop the internal consensus for a given issue, and then work with the other members in the supply chain to develop external consensus among the members of the chain. They must also have sufficient knowledge of the issues to be able to make decisions. It is also required that each member of the team be at least "70% satisfied" with the team's decisions and direction. This is a subjective measure, which is literally voted upon by the team members. Otherwise, the decisions must be reevaluated until this 70% satisfaction is achieved for each member. Apparently this works well, even when some team members become stubborn about a particular issue—the forces of peer pressure, when combined with good logical arguments, are fairly compelling in achieving adequate agreement. An essential requirement for the mandate team is that it must always strive to identify the needs of the ultimate consumer of the products of the chain and to meet or exceed those needs. Using this
approach, the benefits from supply chain integration generally are split 1/3 to the ultimate customer, who is of course a member of the mandate team, 1/3 to the principal members of the chain, and 1/3 to the other members. Fundamental principles that are required in this model include having:

1. A shared specific focus on satisfying their common end consumer
2. An alignment of vision
3. A fundamental level of cooperation and performance to commitment (trust)
4. Open and effective communication
5. Decisions that maximize the use of the competencies and knowledge within the supply chain
6. All stakeholders committed to generating long-term mutual benefits
7. A common view of how success is measured
8. All members committed to continuous improvement and breakthrough advancements
9. Any competitive pressure that exists within any one company's environment being a competitive pressure within the extended enterprise

Whether the members were abiding by these principles was tested through internal surveys of each member of the mandate team, with each member grading the others on a scale of 1 to 10 on all 9 axioms. Areas of substantial agreement or disagreement were then used to reconcile differences and support the establishment of a common strategy and set of goals. To measure the business success of the process, a measurement system is established for measuring supply chain performance, both internal to each company and external to the entire supply chain. These include financial measures such as revenues and assets employed, costs, inventories, etc.; quality, including customer satisfaction and defect rates, scrap and rework; cycle time, including order to delivery,
product development time, internal raw material to finished goods, chain response rate, etc.; and technology, including enhancements to products and product/process enhancements for each link in the chain. As an incidental point, the Internet serves as an excellent medium for communications and consensus building for the supply chain members at both the company and champion levels. Beta will be requiring the application of these supply chain principles as an integral part of its process for improving business performance, and, of course, consolidation of suppliers will continue as an ongoing effort. But, neither of these is sufficient. Beta will also be asking key questions—"Are our suppliers focused on making us successful, or just selling us their goods? Are they helping us minimize total cost of ownership? Are they applying reliability principles in the design of their equipment? Do we have adequate specifications with sufficient information regarding process and equipment reliability requirements?" All these play a crucial role in all future procurement decisions.
References

1. Lendrum, T. The Strategic Partnering Handbook, McGraw-Hill, Sydney, Australia, 1998.
2. Pride, A. Pride Consulting, Inc., Knoxville, TN.
3. Talbot, C., Mars, M., Mars, M., and DiMezzo, B. "Reliability Improvement through Vendor Quality Specifications," Owens-Corning, presented at the Society for Maintenance and Reliability Professionals 1994 Conference, Barrington, IL.
4. Dunton, T. "Discovering/Providing/Correcting Vertical Pump Misalignment," Update International, presented at Enteract Conference, 1994, sponsored by Entek, Cincinnati, OH.
5. Parker, R. C. Director of the National Initiative for Supply Chain Integration, Presentation to University of Dayton's Center for Competitive Change, Manufacturing Executive Council, May 19, 1998.
Stores/Parts Management Practices
6
You never miss the water till the well runs dry.
Rowland Howard

Stores should be run like a store—clean, efficient, everything in its place, not too much or too little, run by a manager with a focus on customer (facility maintenance and operation) needs. Moreover, stores should be viewed as an asset, not a liability or cost. Maintaining a good, high-quality stores operation is in fact the low-cost approach to operating a facility. Yet, Beta International plants typically treated their stores function as if it were a necessary evil, a burdensome cost, a nonvalue-adding function. This was not an enlightened approach. As Beta found, IF properly managed, stores will help ensure a high-quality, low-cost operation. If not, stores will continue to be a "nonvalue-adding" and expensive "necessary evil." However, as we have seen, and will see in subsequent chapters, stores management must be viewed as part of an integrated reliability process. For example, if we could double the reliability and life of our equipment, we would need fewer spares. Maybe not half as many, but fewer. If we had good process and equipment condition monitoring in place to detect defects early, we could better manage the need for spares related to those defects. If we had excellence in maintenance planning and scheduling, which is integrated with the stores room, and which is "fed" by condition monitoring, then we could much more effectively manage our spares requirements.
If we do not have these, and simply reduce our spares holding because they have not "turned," then we run the risk of reduced maintenance efficiency and increased production losses. The accounting and purchasing manager tends to manage working capital by reducing spares holdings. On the other hand, the maintenance manager tends to manage and minimize the risk of not having spares when they are needed by holding more. These two competing interests must be balanced and must be integrated into a broader process for manufacturing excellence.
What Are Stores?

In most facilities, stores are typically viewed as spare parts. However, a broader and more accurate perspective is that all items not consumed directly or indirectly in production are included under the heading of stores. At Beta, stores are generally classified into five groups:1
Hardware and supplies, e.g., bolts, small tools, belts, pipe, valves, etc.
Materials, e.g., paint, lubricants, cement, refractory, etc.
Spare parts, e.g., bearings, gears, circuit boards, specific components, etc.
Spare equipment, e.g., complete assemblies and machines
Special items, e.g., lubricants, catalyst, steel banding, construction surplus, pilot, etc.
The stores function is to provide high-quality spare parts and other material as needed and where needed, primarily supporting the maintenance function. Ideally, stores would always have exactly what was needed, when it was needed, and be able to place it where it was needed at a moment’s notice. And it would have no more than what was needed—only that material and spare parts needed immediately, and only for a minimum period in stores. Unfortunately, few of us live in this ideal world, and at many of Beta’s plants “stock outs” were a frequent occurrence; reorders were often needed; delivery of spares was sporadic; etc. Beta, like most companies, however, found a “solution” to these problems—they carried lots of spare
parts in considerably larger quantities than would ordinarily be needed on a daily basis. This resulted in excess inventory, poor cashflow management, and, often, sloppy management practices. There is a better way.
The "Cost" of Stores

Spare parts and stores expenditures at Beta's plants typically ran 25% of a given maintenance budget. Further, annual carrying costs (labor, space, insurance, taxes, shrinkage, utilities, etc.) typically ran 15% or so of the value of stores. Perhaps more importantly, if the parts were not available, or were of poor quality because of poor suppliers or poor stores practices, then plant function was lost for extended periods of time. These losses were often larger than the carrying costs. After an in-depth review, the cost of stores was recharacterized as losses associated with:1
Working capital “losses”—overstocked, underused material, parts, etc., sitting in stores
Carrying cost “losses” for maintaining the stores operation
Plant capability “losses” due to lack of timely availability of parts
Maintenance inefficiency “losses” due to poor management—wait time, transit time, etc.
Expediting cost “losses” due to poor planning
Shrinkage “losses” due to poor control—waste, theft, deterioration, obsolescence, etc.
At the same time, if we could eliminate or at least minimize these losses, then the "value" of the stores function would be readily apparent. More enlightened managers understand the need to minimize these losses and therefore put in place practices to ensure a so-called world-class operation—minimal losses, maximum support capability. So, how do we set up a good stores operation?
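Before turning to that question, it may help to put rough numbers on the losses just listed. The short calculation below is only an illustration; the stores value, carrying rate, cost of capital, and downtime figures are hypothetical assumptions, not Beta data:

# Hypothetical illustration of annual "losses" from a stores operation.
stores_value = 5_000_000              # dollars of parts sitting in stores (assumed)
carrying_rate = 0.15                  # carrying cost, roughly 15% of stores value per year
cost_of_capital = 0.10                # assumed cost of the working capital tied up
stockout_downtime_hours = 100         # production hours lost for lack of parts (assumed)
margin_per_hour = 10_000              # contribution margin lost per hour of downtime (assumed)

carrying_loss = stores_value * carrying_rate
working_capital_loss = stores_value * cost_of_capital
capability_loss = stockout_downtime_hours * margin_per_hour

print(f"Carrying-cost loss:    ${carrying_loss:,.0f} per year")
print(f"Working-capital loss:  ${working_capital_loss:,.0f} per year")
print(f"Plant-capability loss: ${capability_loss:,.0f} per year")

With these assumptions the lost-production figure is the largest of the three, which echoes the point above that availability losses often exceed the carrying costs.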
What Stores Are Needed—Kind and Quantity

Beta International had a stores operation at all its plants, some portions centralized, other portions plant specific, but often it was not functioning like a modern store to routinely meet kind, quantity, and availability requirements. Suppose, for example, you went into a local department store in your home town and found it to be dirty, lacking the items you were seeking, and managed by people who didn't seem to care about the store, or you! Would you ever go back? At Beta, however, like most plants, there was only one store, and that's the one everyone had to use. Unfortunately, most of its stores were not being managed like a modern store.

To build a good stores operation, Beta had to reevaluate its entire understanding of customers' needs—of plant maintenance requirements. Therefore, kind, quantity, and availability needs must be driven by a keen understanding of maintenance needs. Further, to facilitate good communication and understanding, maintenance had to put in place a good maintenance management system for equipment identification, work orders, planning and scheduling systems, the link to stores, kitting and delivery of parts for given work orders, and a cost accounting link for effective management. So, where did Beta start?
Bill of Material and Catalog

Beta didn't have a good bill of material for every critical item in its plants and is now in the process of developing one. This is being done in cooperation with maintenance and using its definition of critical equipment and understanding of spares needs. This updated bill of material will be used in conjunction with other needs to develop a catalog of all stores and spare parts required in each plant. Critical to this process is the identification of simple things like belts, bearings, gaskets, etc. This is a dynamic document and must be updated two or three times per year, or as needed, e.g., for a major change in suppliers, for major equipment additions, etc.

Further, the catalog includes unique identifier codes, generally numerical, for each catalog item, and provides for a logical grouping of material for use in Pareto analyses, component use frequencies, equipment history analyses, etc. It also includes a standard nomenclature and descriptors for each item. This (catalog, standard nomenclature, grouping, descriptors) is not as simple as it sounds,
and must be developed in cooperation with others whom it may affect, e.g., maintenance, design, suppliers, construction, and capital projects. This will ensure buy-in, acceptance, common understanding, etc., and will ensure greater probability of success in the effort to run a world-class stores organization. Finally, while hard copies are routine, the catalog must also be “on-line” and staff must be trained in using the catalog, both manually and “on-line.”
Management Methods

Simple policies and procedures must be developed in cooperation with maintenance and engineering to ensure consistency of process and quality of operations. This should include a policy and/or procedure for:1
Development, use, and modification of the catalog system
Inventory classification, including process for obsolescence
Vendor stocking and consignment programs
Economic order practice—quantity, point, level, etc.—and dynamic adjustment (a sample calculation follows this list)
Consolidation of parts, suppliers
Repair/replace policy, including managing reconditioned (vs. new) equipment
Alternate sourcing (qualification, detailed drawings availability, etc.)
Communication to maintenance, engineering, purchasing of key policies
An audit process
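To illustrate the economic order practice item above, the conventional textbook calculation of an order quantity and reorder point is sketched below. It is only a starting point; the demand, ordering cost, carrying rate, lead time, and safety stock shown are hypothetical, and the resulting values should be adjusted dynamically as conditions change:

import math

# Hypothetical figures for a single stocked part (all assumptions).
annual_demand = 120        # units issued per year
order_cost = 75.0          # dollars of effort per purchase order
unit_cost = 40.0           # dollars per unit
carrying_rate = 0.15       # carrying cost as a fraction of unit cost per year
lead_time_days = 14        # supplier lead time
safety_stock = 4           # units held against demand and lead-time variability

holding_cost = unit_cost * carrying_rate
order_quantity = math.sqrt(2 * annual_demand * order_cost / holding_cost)  # classic EOQ formula
reorder_point = annual_demand / 365 * lead_time_days + safety_stock

print(f"Economic order quantity: {order_quantity:.0f} units")
print(f"Reorder point:           {reorder_point:.0f} units")

For critical spares, the stock-out consequence, not this formula, should drive the decision, which is the point of the criticality ranking model later in this chapter.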
Beta is developing policies and procedures in these areas dependent upon current practices and philosophy, the skill level of the staff, resources available, etc.

A quick tip: many companies have a repair/replace policy for their motors that essentially says that if the cost of the repair/rewind is less than, say, 50% of the cost of a new motor, then they will repair. This practice ignores the efficiency loss that often results from a rewind
(estimated at up to 3% by some experts for each rewind), which translates to increased power consumption and a shorter life for the rewound motor. These costs of increased power consumption and reduced motor life must be considered and built into the repair/replace policy. Beta will be following up on this and developing other examples.
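As a rough illustration of why the rewind penalty matters, the comparison below adds an assumed efficiency loss to the rewind price over the motor's remaining service. Every figure is hypothetical (motor size, hours, power cost, quoted prices, and the 2% penalty), and the shorter life of a rewound motor is not even included:

# Hypothetical repair-vs-replace comparison for a motor rewind (all figures are assumptions).
motor_kw = 75.0            # motor rating
hours_per_year = 7000      # running hours per year
power_cost = 0.07          # dollars per kWh
remaining_years = 5        # evaluation horizon
rewind_cost = 4_000.0      # quoted rewind price
new_motor_cost = 9_000.0   # price of a new motor of equal or better efficiency
efficiency_loss = 0.02     # assumed efficiency penalty from the rewind

# Approximate extra energy drawn because the rewound motor is less efficient.
extra_energy_cost = motor_kw * hours_per_year * efficiency_loss * power_cost * remaining_years

print(f"Rewind, including energy penalty: ${rewind_cost + extra_energy_cost:,.0f}")
print(f"New motor:                        ${new_motor_cost:,.0f}")

With these assumptions the rewind still looks cheaper on paper, but the gap closes quickly for larger motors, longer horizons, or repeated rewinds, which is exactly why a simple "50% of new" rule can mislead.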
Partnerships with Suppliers

As noted in the previous chapter, suppliers were consolidated and standardized as much as possible. When selecting suppliers, key performance indicators should be used, in conjunction with key performance objectives. Key issues and requirements being used at Beta for supplier relationships are:
Partnership agreements detailing the basis for the agreement
Supplier stocking and consignment terms and methods for reducing physical inventory
Blanket orders wherever possible
Maintainability and reliability requirements
Feedback process for resolving problem areas
Use of electronic communications and order entry wherever possible
Measurement (and minimization) of emergency or spot orders
Measurement of stock types
Further, suppliers are routinely asked to provide information regarding spare parts recommendations and frequency of preventive maintenance (PM) requirements. A typical response is a list of spares to be used for annual PM. Rather than simply accept this at face value, which may be related more to next year’s sales plan than to good maintenance practices, Beta is now requiring key suppliers to provide the statistical basis for their recommendations, including the basis related to the application of the equipment and its working environment. In certain cases, the supplier is even required to perform an RCM analysis, using failure modes and effects, to determine
proper PM and spare parts. For example, you should be asking questions such as, “What fails most often? What’s the primary failure mode? What spares or PM do I need to help me prevent or mitigate these failures?” Typically the most common spares needed are items such as belts, bearings, seals, gaskets, fuses, switches, and so on. Make sure you consider this when working with your suppliers to develop your so-called critical spares listing. It’s also important to remember that overhaul PMs essentially presume that all your equipment is average, a highly unlikely situation. It may also be that your application is very different from a typical application, and that you do not have consistency in your operation and maintenance practices. Suggestions for developing a strategic partnership with suppliers are provided in Chapter 5.
Standardization

Supplier partnerships will facilitate standardization, but these partnerships must also be combined with seeking opportunities for further standardization. For example, at Beta, methods for standardization will include: (1) using materials that fulfill common if not identical requirements, (2) reviewing new equipment and purchases for standardization opportunities, and (3) ensuring consistency with corporate supplier opportunities. The standardization process includes input from the maintenance, operations, design and capital projects staff, and purchasing. All appropriate parties must be trained in the standards that have been developed, and a process must be put in place that defines how the standards are changed and/or waived. It is critical that project engineers and purchasing agents work diligently in support of standardization.
The Store

The store itself should be well managed. There are several techniques to assist in managing the store:2
Layout
Procedures
Work environment and practices
Technology/methods
People
Layout

Routine work should be reviewed from a "time/motion" perspective. For example, Beta is currently having its plants review stores, asking:

1. Are material and spare parts conveniently located for minimizing the amount of time required to pick up and deliver to the counter?
2. Is the material received in a way to minimize the time and effort of stocking?
3. Are material and parts conveniently located, especially for frequently needed items?
4. Is the issue counter near the receiving counter?
5. Is management near the hub of the activity—issue and receipt?
6. Is a delivery system in place to provide the parts at the job location?
7. Has the overall layout been reviewed with the goal of minimizing the time required to provide the needed deliveries to maintenance, all things considered?
Procedures

Other issues being reviewed include procedures and processes:

1. Is bar coding in routine use?
2. Is receipt inspection routinely practiced?
3. Are automated reduction and inventory control points highlighted?
4. Are receipts recorded quickly and inspected?
5. Have you considered contracting miscellaneous material purchases, such as shoes, towels, jackets, coveralls, etc.?
6. Has the catalog been completed and input into a stores management system, which is linked to the maintenance management system and to accounting?
7. Is bar coding used to minimize clerical requirements?
8. Are carousels in use and controlled by a controller pad linked to the stores management system?
9. Will the carousel bring the material to the counter, charge the withdrawal to the customer, and mark the withdrawal against quantities for order point determination?
10. Is this process tied to maintenance planning and control?
11. Are material and/or parts kitted by work order and, as required, delivered to the job location?
12. Is a process in place for cycle counting (verifying inventory quantities on a periodic basis)?
13. Is an electronic data entry and order entry process in place, especially for key suppliers?
14. Is there a process for managing equipment to be repaired or overhauled and restocked, e.g., a separate area for "to be repaired and restocked," a cost accounting procedure, etc.?

Are the following in place relative to suppliers?

1. Are supplier stocking and consignment terms and methods for reducing physical inventory in place?
2. Are blanket orders used wherever possible?
3. Is there a feedback process for resolving problem areas?
4. Is there a process for measurement (and minimization) of emergency or spot orders?

It may be particularly important to do receipt inspection. At one of Beta's divisions consisting of six plants, during an improvement effort for their purchasing and spares functions, it was found that on average some 14% of their receipts did not match their purchase
orders, that is, the material received was the wrong material.3 The worst plant actually experienced a 22% mismatch. On investigation, it was found that about 3/4 of the problems were with the supplier, and 1/4 were because of poor internal processes for creating the purchase orders. After working for many months to reduce this problem, they achieved an average 4% mismatch, with work continuing. This suggests receipt inspection is essential for ensuring quality parts. Some have suggested they’re “saving money” by not doing receipt inspections. These data suggest otherwise.
Work Environment and Practices

As to work environment, Beta asks the following questions to determine if it is using best practices in its stores facility:

1. Are the floors and walls clean and painted (floors with epoxy, nonskid for reducing dust)?
2. Is the stores area clean, comfortable, and well lighted?
3. Is access controlled and/or managed, e.g., limited to users of a swipe card?
4. Is there an air-conditioned, nonstatic area for PC boards and other electronic equipment?
5. Are you using high-density storage for appropriate items?
6. Are bearings and gaskets protected, sealed, and stored to minimize damage or deterioration?
7. Are motors covered to minimize deterioration?
8. Do large, critical motors have heaters to eliminate moisture accumulation in the windings?
9. Are the shafts of rotating machinery rotated regularly to avoid false brinelling?
10. Are shafts for critical rotors stored vertically and in a nitrogen-sealed, pressurized enclosure?
11. Is carbon steel equipment that may corrode coated with a thin, protective film?
People

Employees represent perhaps the easiest, and simultaneously the most difficult, issue in the stores management function. Most people want to do a good job, given the proper training, tools, systems, procedures, and encouragement. In many stores operations, the stores person has been assigned to the function without much training in stores management, other than what might have been garnered through on-the-job training and the practices that have been handed down over the years. Yet, the cost and value considerations for effectively managing a store dictate much more than a casual and historical approach to stores management.

Having answered the questions regarding stores environment and practices, Beta must now develop an organization with the right people, in the right mix and quantity, and provide the tools, training, and encouragement to ensure superior performance. A matrix was created to outline the training its people need to accomplish the tasks identified. Further, measurements are routinely kept of the effectiveness of the stores operations. And customer satisfaction surveys are performed through the maintenance and operations functions, seeking to develop a strong supportive relationship with maintenance as well as purchasing, operations, and engineering.
Training

Training is being formulated based upon a series of strategic questions:

1. What are my key objectives (reference key performance indicators)?
2. What are the skills required to achieve those objectives? In what quantities?
3. What are the skills of my people today?
4. Which new equipment and/or systems are coming into use in the future?
5. What are my workplace demographics—age, ability, etc.?
6. What are my training requirements in light of my answers?
Beta is using the answers to these questions to develop a strategic training plan that ensures the skills are put in place on a priority basis to minimize losses and add value to the organization.
Contracting the Stores Function

While Beta does not share many others' enthusiasm for contracting maintenance (it reduces ownership and core competencies, increases loss of equipment and process knowledge, increases risk of downtime, etc.), contracting may have its place in the stores function. Beta is currently reviewing the characteristics and prospective value of a good stores function, but has yet to conclude that it would be hard pressed to achieve this level of competence in a short period of time without contractors. It may ultimately decide that, all things considered, a good, high-quality stores management function could be put in place quickly and effectively using contractors. This is the conclusion in reference 2, but in that situation there were also additional considerations related to the intransigence of the union and its work rules, etc. Apparently, the union was unwilling to work with management to improve productivity and performance and was replaced by a contractor, which is reported to be doing a very good job. Further, others have found that certain items may be better handled by a "roving trailer"-type contractor, which handles safety shoes, uniforms, coveralls, etc.

Ultimately, all unions must recognize that they are in fact competing for the same jobs as contractors. All things equal, companies will normally stay with their employees. If major differences are demonstrable and/or compelling, then contracting must be considered. Some suggestions on contractors and their best use are provided in Chapter 12.

Finally, though not specifically a contracting issue, in one area a supplier park4 was created through a cooperative effort of the purchasing managers of several large manufacturing operations. Essentially, the park was built on "spec," and space was leased to several suppliers, e.g., of routine bearings, lubricants, hose, piping, o-rings, belts, etc. These suppliers used a common same-day and next-day delivery system, electronic ordering, routine review of use and repair histories (from their records), and an integrated relationship with their own suppliers to achieve a superior level of performance and lower inventories than were otherwise achievable within an individual stores operation at the plants.
Key Performance Indicators

After review of the literature and doing some internal benchmark data collection, the following key performance indicators were developed at Beta. Above all, performance indicators were viewed as an indication of the success of the organizational objectives that were established.1,2

                                                   Best          Typical
Stores value—% of facility replacement value       0.25–0.50%    1–2%
Service level—stockouts                            <1%           2–4%
Inventory turns (see following discussion)         2–3           1
Line items processed per employee per hour         10–12         4–5
Stores value per stores employee                   $1–1.5M       $0.5–1M
Stores disbursements per stores employee           $1.5–2M       $0.5–1M

Further, measures are also being set up for the following:

Use of catalog items in stock—%, quantities
Use of preferred suppliers—%, quantities
Carrying costs—total and % of stores value
Receipt and issue backlog—% and delay days
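As a simple illustration of how several of these indicators are computed, the sketch below derives them from hypothetical plant figures (none of the numbers are Beta's):

# Hypothetical plant figures (all assumptions) used to compute a few stores KPIs.
plant_replacement_value = 400_000_000   # facility replacement value, dollars
stores_value = 3_000_000                # value of inventory on the shelves
annual_disbursements = 4_500_000        # value of parts issued per year
stores_employees = 4
requests = 12_000                       # stores requests in the year
stockouts = 240                         # requests that could not be filled from stock

print(f"Stores value, % of replacement value: {stores_value / plant_replacement_value:.2%}")
print(f"Stock-out rate:                       {stockouts / requests:.1%}")
print(f"Inventory turns:                      {annual_disbursements / stores_value:.2f}")
print(f"Stores value per employee:            ${stores_value / stores_employees:,.0f}")
print(f"Disbursements per employee:           ${annual_disbursements / stores_employees:,.0f}")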
Beta is currently establishing the processes previously described and beginning to measure performance; it anticipates substantial improvement. It may be instructive, however, to review a case history of how one of Beta's plants improved its inventory turns and managed its stores operation more effectively. Note that this plant already had most of the systems previously described in place, which is what made the improvement possible.
Minimizing Stock Levels—A Case Study

Beta currently has a major effort ongoing to reduce inventory levels, principally finished goods, but also including stores. Certainly it should, because inventory generally represents capital that is not generating a return. Indeed, Beta recognizes that it costs money to store and maintain inventory. It could be compared with stuffing money under a mattress, one that you're renting to store your money in. Reference 5 discusses minimizing stock levels.

The drive to reduce inventory is often intense at Beta. Some of its plants have attempted to issue decrees, e.g., "A world-class level for inventory turns of spare parts is 2. Therefore, we will be at 2 turns on our spares inventory in 2 years!" or some other equally arbitrary objective. Middle management has then been left with the goal of reducing inventory, usually with limited guidance from senior management about how to achieve the objective or whether it can reasonably be accomplished. Nonetheless, most will make a good-faith effort to do so. However, most will be caught between the proverbial rock and hard place. If they simply reduce inventory across the board, they could jeopardize their ability to repair failed equipment in a timely manner for lack of parts, risking unplanned downtime or incurring extra costs. Further, inventory often will be "reduced" only to find its way into a desk drawer, filing cabinet, storage closet, etc., for future needs. This is especially true in an organization that is highly reactive in its maintenance function, e.g., lots of breakdown maintenance, emergency work orders, run to failure, etc. So, what should Beta do?

It was ultimately the decision at Beta that inventory should be driven by reliability and capacity objectives and a systematic strategy, not necessarily by arbitrary decrees. The first step in establishing a basis for spares inventory management was to segregate inventory into categories that can be managed, such as:

1. Obsolete—to be disposed of as economically as possible
2. Surplus—quantity > economic order point; to be managed in cooperation with suppliers
3. Project—excess material to be returned at the end of the project
4. High volume/use items—most to be put on blanket order and delivered as needed
5. Low volume/use critical spares—most to be stored in-house using specific procedures
6. Low volume/use noncritical spares—most to be ordered when needed

Other companies may have other categories better suited to their purpose, but this should start the thinking process for improved inventory management, while still ensuring maximum equipment reliability. Integral to this, management should consider:

1. Equipment failure histories
2. Parts use histories
3. Lead times
4. Supplier reliability (responsiveness, quality, service)
5. Stock-out objectives
6. Inventory turns objectives

And, finally, all this should be integrated with the following:

1. Strategic objectives as to reliable production capacity
2. Application of a reliability-based strategy for knowing equipment condition

There are other issues that may come into play at any given manufacturing facility, but these points should illustrate the principles involved.
Beta's Wheelwright Plant

At Beta's Wheelwright plant, the management team had been directed by Beta's senior management to reduce spares inventory, such that
turns were 2.0 by year end. On assessing its current position, it found the following:

Current inventory level: $10M
Current annual parts expenditure: $10M

The plant was presently turning its spares inventory at 1.0 times per year, which essentially meant that inventory had to be cut in half. This was perceived to be a major challenge, particularly in light of the current mode for maintenance, which was highly reactive. Beta's senior management had further determined that unplanned downtime and maintenance costs were excessive as compared with industry benchmarks. Reactive, run-to-failure maintenance was resulting in substantial incremental costs due to ancillary damage, overtime, the unavailability of spares (in spite of high inventory levels), and, most importantly, lost production capacity from the downtime.

A team of the staff determined that application of a reliability-based strategy would ensure a reduction in downtime and maintenance costs. A major part of this strategy was the application of equipment condition monitoring technologies (vibration, infrared, oil, motor current, etc.) to facilitate knowing equipment condition and therefore:

1. Avoid emergency maintenance and unplanned downtime.
2. Optimize PMs—do PMs only when necessary, because few machines fail at precisely their average life.
3. Optimize stores—plan spares needs based on equipment condition, move closer to JIT.
4. Assist in root cause failure analysis to eliminate failures and extend equipment life.
5. Commission equipment to ensure its "like-new" condition at installation, extending its life.
6. Systematically plan overhaul work requirements, based on equipment condition.
7. Foster teamwork among production, engineering, and maintenance.
In doing so, the team felt they could extend equipment life, lower spares requirements, and plan spares requirements more effectively for both routine and overhaul needs. On reviewing their inventory, they found that the $10M in current inventory could be broken down as follows:

Obsolete                  $1.0M
Surplus                   $2.0M
Project                   $1.0M
High-volume/use           $4.0M
Low-volume critical       $2.0M
Low-volume noncritical    $1.0M
Note that the total adds up to more than $10M because of overlapping categories. The team ignored this in the short term for ease of demonstrating the principles, but it could be handled through a matrix for multiple categories. The next steps are described as follows.

Obsolete equipment was identified and brought to the attention of the plant manager. Some $1M, or 10% of the total (not an uncommon number), was represented by obsolete equipment. It was anticipated that, after liquidation income from a salvage operation, an $800K charge would be incurred. The plant manager was initially very reluctant to take such a charge on his statement, but after much negotiation at the vice-president level, it was agreed to amortize the charge over an extended period, as opposed to a one-time "hit." An inventory control process was also established to dispose of inventory as it became obsolete, and not to continue to hold it in storage for the next plant manager to handle. The $800K charge on inventory disposal, even though spread over several months, was a painful expense, but one that shouldn't occur again with the new process in place. Inventory level would now be about $9M, and turns improved to 1.11, representing modest progress.

Next, capital project inventory was reviewed. It was anticipated that half would be consumed by the project before year end. Another 25% would be returned to the supplier with a 20% return penalty. This was accomplished only after considerable pressure was applied to the supplier. In the future they will have a policy built into major capital projects for returns, and will make this policy a part of
the contract with their suppliers. The final 25% was deemed necessary as critical or important spares for future use, because the equipment for this project was not common to current equipment. The result was a $750K reduction in inventory, although note that a $50K return charge was incurred. Note also that in the future, project inventory and the timing of its use, disposal, or return must be considered when developing inventory turn goals. Inventory turns were now anticipated at 1.21, allowing more progress. When combined with a commissioning process to test and verify proper installation based on standards for vibration, IR, oil, motor current, and other process parameters, management felt it could substantially improve equipment life and avoid rework. Its experience had been that half of equipment failures occurred within 2 weeks of installation. It also applied these same standards to its suppliers.

Next, surplus inventory was reviewed for each of the designated stores categories. Economic order points were reviewed, and it was determined that a total of $200K of what was considered to be surplus could be used before year end. More stringent policies were put into place regarding order points, including consideration of stock-out objectives. More importantly, the Wheelwright plant put in place a comprehensive condition monitoring program, which allowed the company to anticipate parts requirements more definitively and to reduce the quantities at its order points. It now planned to provide suppliers with at least 5 days' notice on certain spare parts needs. With good support, good planning, and ease of shipping to the plant, inventories could be reduced even further in the future. For the time being, with this nominal $200K improvement, inventory turns were expected to increase to 1.24—more progress.

Next, a detailed review was performed of critical equipment (equipment whose failure leads to lost production). The equipment was listed in detail, down to the component level kept in inventory. Lead times were reviewed. A 0% stock-out policy was established for this equipment. After a team from production, purchasing, maintenance, and engineering reviewed the listing, lead times, and inventory in stock, the conclusion was reached that "excess" inventory amounted to $500K, but that fortunately most of that would be used during an outage planned later in the year. All things considered, the team expected to reduce critical, low-turning inventory by $300K by year end. With regard to the longer term, the stores manager for the Wheelwright plant met with other plants concerning their inventory, identified common equipment and spares,
and anticipated that an additional $500K in spares would be defined as critical/common, meaning several plants would be able to use this to reduce their slow-moving, critical inventory. This inventory, while stored at this plant, was placed under the control of the division general manager. Inventory turns were now at 1.29.

Next, high-volume, high-use-rate inventory was reviewed. This inventory typically turned at two or more times per year, but the total dollar value of this inventory was not sufficient to substantially increase total inventory turns. Concurrently, the inventory was reviewed against equipment histories, identifying critical equipment overlaps. The inventory was categorized by vendor dollars and reviewed as to lead times in major categories. Reorder points were reviewed and in some cases trimmed. Condition monitoring was used as a basis for trending equipment condition and determining and planning needed spares. Suppliers were contacted and asked to maintain, under blanket order and/or consignment agreement, appropriate quantities of spares for delivery within specified time periods. A stock-out objective not to exceed 5% was determined to be acceptable for most routine spares. All things considered, it was expected that high-use spares could be reduced by $1M by year end, and probably even more over the next 2 to 3 years. Inventory turns were now estimated to be at 1.48.

Next, low-volume, noncritical spares were reviewed, again in light of lead times, equipment history, use history, stock-out objectives, etc. Suppliers were contacted concerning maintaining guaranteed spares under a blanket order and/or consignment agreement, with minimum quantities, etc. All told, it was felt that half of the low-turning, noncritical spares could be eliminated by year end, with most of the balance eliminated by the end of the following year. With this, inventory turns were now estimated to come in at 1.6, substantially below the decree of 2.0, but substantially above the historical level of 1.0. The company had positioned itself to increase available cash by $3.75M, though about half of this would come through after year end. A reduction of an additional $1.25M or more (considered well within its grasp) over the next 1 to 2 years should yield the objective of inventory turns of 2. The team was now ready to present its findings to senior management.

After its presentation to management, the objective of 2 turns on inventory was considered not to be achievable before year end, but in light of capacity objectives, a good plan had been
established to achieve that objective within 24 months. The plan was approved, with the proviso that the team would report quarterly on its progress, including any updated plans for additional improvement. Consistent with this, the company now put in place a strategic plan for reducing inventory, which included the approach described and had the following characteristics:

1. Production targets considered achievable were:
   a. 95% production availability, including 4% planned maintenance and 1% unplanned maintenance downtime.
   b. 92% asset utilization rate, including 2% for process yield losses, 1% for transition losses, and 0% losses for lack of market demand. It could sell all it could make.
2. Inventory targets were set as:
   a. 0% stock-outs for critical spares
   b. 4%—fast turning, noncritical
   c. 6%—slow turning, noncritical
3. Supplier agreements under blanket order were effected for inventory storage and JIT shipment.
4. Interplant sharing of common critical spares was placed under control of the division general manager.
5. The plant set up the systematic application of condition monitoring to:
   a. Baseline (commission) newly installed equipment
   b. Trend equipment condition to anticipate and plan spare parts needs
   c. Comprehensively review equipment condition prior to planned overhauls
   d. Engage in root cause failure analysis to eliminate failures
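The turns arithmetic walked through in this case study is easy to reproduce. The short sketch below uses the dollar figures stated above, with turns computed as annual parts expenditure divided by remaining inventory value; it is only a recap of the case, not a planning tool:

# Recap of the Wheelwright turns arithmetic: $10M annual spend, $10M starting inventory.
annual_spend = 10.0     # $M per year
inventory = 10.0        # $M starting inventory (turns = 10/10 = 1.0)

reductions = [
    ("Obsolete inventory disposed of", 1.00),
    ("Project inventory consumed or returned", 0.75),
    ("Surplus worked off against order points", 0.20),
    ("Critical slow-moving spares trimmed", 0.30),
    ("High-volume spares to blanket order/consignment", 1.00),
    ("Half of low-volume noncritical spares eliminated", 0.50),
]

for step, amount in reductions:
    inventory -= amount
    print(f"{step:<50s} inventory ${inventory:5.2f}M   turns {annual_spend / inventory:.2f}")

Running it reproduces the progression in the text: 1.11, 1.21, 1.24, 1.29, 1.48, and finally 1.6 turns, with $3.75M of inventory removed in total.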
Spare Parts Criticality Ranking Model

A question frequently asked is, "Which spares do we actually need to keep in stores?" Depending on who is asked, you might get answers ranging from "One of everything we have in the plant" from the maintenance manager to "Nothing, it's too much working capital tied up doing nothing" from the accounting manager. While these are said only slightly in jest, they do highlight a particular problem with deciding what to actually carry in the stores room.

Let's begin by asking, "What fails most often?" Often these failures are related to spares such as bearings, belts, seals, gaskets, o-rings, small motors, and so on. Whatever fails most often, we should keep on hand, or have ready and inexpensive access to through local suppliers. And we should ask, "How do we stop these failures, or at least stop them from happening so often?" Our next question might be, "What's the consequence of the failure?" If the consequence is zero or minimal, then we may not need to stock the item, and so on. The greater the risk of loss, the greater the probability that we need the item. Our next question might be, "What's the frequency of failure?" since frequent failures might dictate greater quantities in the short term and might suggest a root cause analysis to address longer-term reliability. Our next question is, "What's the lead time?" Longer lead times incur greater risk to our efficiency and productivity. Our next question might be, "How easy is it to detect problems early, and then manage them?" Ease of early detection reduces the risk to our efficiency and productivity.

With all this in mind, refer to Table 6-1 for a Spare Parts Criticality Ranking Model, which should help with establishing those spares that are most critical to the operation. The criticality score from Table 6-1 is the sum of the scores in five areas. So, for example, if a part scored a 2 in each category, it would have a ranking of 10, a fairly high number. If we established the criticality cutoff there, then we would make sure we kept spares for everything ranking greater than 10. Other issues may come into consideration, which would bypass or alter this ranking or could be added to it. For example, we might decide that anything that scores a 4 in any category requires stocking, or that a score of 3 in any two categories requires stocking. We might want to add other categories, or make some categories multiplicative. We might make the scoring different. There are endless possibilities.
Table 6-1. Spare Parts Criticality Ranking Model

Part Name _________________________________________
Part Manufacturer __________________________________
Part No. __________________________________________
Stock Code No. ____________________________________

Below is a model for ranking the criticality of your spares. This is just a model and is intended to get you thinking about criticality in light of certain criteria. You are encouraged to develop your own model, adapted to your business circumstance.

Criteria                                                    Score
1. Common Failure Mode
   Similar equipment w/similar failure modes                  1
   Similar equipment in plant but w/o this mode               2
   No other like equipment with same failure mode             3
2. Frequency of Failure
   > 3 years                                                  1
   > 1/2 year, but < 3 years                                  2
   < 6 months                                                 3
   < 1 month                                                  4
3. Ease of Detectability
   Easily detected by operators                               1
   Readily detected by inspection, PdM                        2
   Undetectable until failure is imminent                     3
4. Lead Time for Part
   < 24 hours                                                 1
   < 7 days                                                   2
   > 7 days                                                   3
   > 30 days                                                  4
5. Consequence of Failure
   No effect and/or redundant systems                         1
   Production efficiency reduced < 20%                        2
   Production stops                                           3
   Safety jeopardized                                         4
Like most of the models in this book, this one is intended to get your thinking started about what is critical to the success of your business, so you can act on that; it is not necessarily intended to be specific to your situation.
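For readers who want to automate this kind of screening across a catalog, the ranking logic can be reduced to a few lines. The sketch below uses the example cutoffs discussed above (a sum greater than 10, any single category scored 4, or any two categories scored 3 or more); the part and its scores are hypothetical, and the rules should be adapted just as the table itself should be:

# Minimal sketch of the Table 6-1 ranking logic with example stocking rules.
def criticality(scores):
    """scores: dict with one entry per Table 6-1 criterion, scored 1-4."""
    total = sum(scores.values())
    stock = (
        total > 10                                          # example cutoff from the text
        or any(s >= 4 for s in scores.values())             # any single category scored 4
        or sum(1 for s in scores.values() if s >= 3) >= 2   # a 3 or higher in two categories
    )
    return total, stock

example_part = {
    "common_failure_mode": 2,
    "frequency_of_failure": 3,
    "ease_of_detectability": 2,
    "lead_time_for_part": 3,
    "consequence_of_failure": 2,
}

total, stock = criticality(example_part)
print(f"Criticality score: {total}; stock this part: {stock}")

Here the hypothetical part scores 12, exceeding the cutoff, so it would be stocked; in practice the output would feed the critical spares listing developed with suppliers earlier in the chapter.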
Summary

By putting these processes in place, the Wheelwright plant was well on its way to improved plant reliability and, concurrently, reduced costs and inventory. Good stores management at Beta first required that management recognize the value-adding capability of a good stores function—increased available working capital, increased plant capability, reduced carrying costs, reduced shrinkage, and improved maintenance efficiency. The losses associated with poor stores management practices should be intolerable to senior management. In setting up a quality stores management function, several issues must be addressed:
Development of a comprehensive catalog for stores requirements
Development of policies and procedures that facilitate effective management of stores
Establishment of procedures and practices that facilitate maintenance excellence
Establishment of supplier partnerships, blanket orders, consignment, etc.
Establishment of a stores layout that ensures efficient operation
Comprehensive training of all appropriate staff
Consideration of contracting the stores function to accomplish these tasks
Comprehensive management of stores and inventory turns by classification
Comprehensive performance measurements for ensuring a superior stores capability
Doing all this in a comprehensive, integrated way will ensure that Beta has a world-class stores operation at the Wheelwright plant, and over the long term for all of Beta International. Manufacturing excellence requires an excellent stores operation, as well as excellence in maintenance and production. All are an integral part of a financially successful operation.
References

1. Jones, E. K. Maintenance Best Practices Training, PE–Inc., Newark, DE, 1971.
2. Hartoonian, G. "Maintenance Stores and Parts Management," Maintenance Journal, Vol. 8, No. 1, January/February 1995.
3. Conversation with Neville Sachs, Sachs, Salveterra & Associates, Syracuse, NY.
4. Schaulberger, S. "What Is Supplier Park?" Society for Maintenance and Reliability Professionals, Barrington, IL, Annual Conference, Chicago, IL, October 1994.
5. Moore, R. "Establishing an Inventory Management Program," Plant Engineering, March 1996.
Installation Practices
7
You have the right not to do things wrong.
Winston Ledet, Sr.

Knowing that a job has been done right in a manufacturing plant, consistent with your expectations and standards, is essential for ensuring manufacturing excellence. Installation and startup practices that set high standards and are then verified through a process for validating the quality of the work done, or commissioning, are an essential element of this process. Indeed, we'll see in Chapter 9 that up to 68% of equipment failures occur in the "infant mortality" mode. This strongly implies the need for better design, procurement, and installation practices to avoid many of these failures. This point is reinforced by recent studies, which indicate that you are between 7 and 17 times more likely to experience safety and environmental incidents during startup and shutdown1 (and by inference 7 to 17 times more likely to introduce defects into your processes and equipment during startup and shutdown). Further, at a recent conference, it was reported that during startup and commissioning of certain rotating machinery:2

Even though functional performance was met, there were defects discovered in over 90% of all pumps and fans that would have caused premature failure. This equipment was
either misaligned, improperly balanced, used improper sheaves, or had excess vibration. There were similar results in most other equipment tested.

Let's suppose, just for the sake of argument, that the individual was incorrect for some reason, and that the number was only half of 92%, or even a fourth of 92%. Would this make you feel any better? This dictates the need for excellence in installation, commissioning, and startup practices. Practices that influence and are related to installation and commissioning were reviewed in previous chapters and serve to highlight several opportunities for improvement. Additional information and recommendations, particularly as installation practices relate to contractors, are also provided in Chapter 12. Further, this chapter is not intended to substitute for the extensive information developed by architectural engineers for major construction contracts when constructing and starting up a new plant. Rather, it is targeted at particular problems and shortcomings observed at many of Beta's plants related to the smaller capital projects, short shutdowns and turnarounds, and routine maintenance efforts occurring every day. These same principles, however, can be applied and extended to major projects, such as the construction and startup of a new production line or plant.

Beta has been like most manufacturers when installing new capital equipment. It does a reasonably good job with the installation and commissioning of the production process. It would generally use considerable care to ensure that process pressures, temperatures, flows, critical process chemicals, or other operating parameters were within specification and that the production equipment was capable of producing quality product, at least at startup. In its batch and discrete parts plants, likewise, it always had the supplier do a demonstration run to ensure that, for example, machine tools were statistically capable of holding tolerances for a certain number of parts or that a few batches could be processed and meet specifications. It frequently reminded people that quality (of its products) was a top priority. The care and attention to detail given to process control quality, while still in need of improvement, as we'll see in Chapter 8, were significantly better than those given to equipment quality at installation. If the equipment or process ran, did not make any unusual noises or smells, and made quality product, then it must have been a good installation and startup; that was the apparent rationale.
Beta's plants rarely commissioned the equipment to specific mechanical standards to ensure its quality and its "like-new" condition. This was true of small capital projects, as well as overhauls, shutdowns, or turnarounds, and routine maintenance and repairs. Some tips Beta found useful are provided in this chapter. When combined with other chapters, this should provide most manufacturers with a model for improvement in this area.
Follow Existing Standards and Procedures

The first suggestion was perhaps the most obvious: follow existing procedures for installation and startup. These had been previously developed at great expense, and occasionally following a major failure caused by poor procedures or practices. In many situations Beta's plants simply weren't following these established procedures to ensure equipment reliability. With new engineers coming in, with the pressure to "get the plant back on line," with the drive to cut costs (at times notwithstanding consequences), engineers and supervisors were often ignoring established procedures for best practice. A point to highlight is that they were very good about following safety procedures and practices and had achieved a better-than-average safety performance. What they didn't follow very well was standard engineering practice. They are now working hard to establish a new culture of doing things right, all the time—a daily struggle—in light of the pressures to improve performance.
Verify and Use Appropriate Manufacturer Recommendations

The second suggestion is also simple: follow manufacturers' recommendations. While their guidelines may not be 100% correct for every application, they are a good starting point. Read the manufacturers' instructions and follow them unless there is a compelling reason not to do so. At many of Beta's plants this was not being done, or was only being given half-hearted support, principally for the same reasons previously mentioned—not enough time to do the job right, but plenty of time to do it over. Enough said on that point. The use of manufacturers' recommendations can be further
enhanced using the strategic alliance process described previously, which some of Beta’s plants are now putting in place.
Installation and Commissioning of Rotating Machinery

One of the bigger opportunities for improvement is the installation and commissioning of rotating machinery. As many of Beta's plants have found, rotating machinery should generally be installed on a solid foundation, with rigid baseplates and no looseness; should be precision balanced and aligned; and should be tested for meeting a specific standard of vibration at the time of installation, 1 to 4 weeks later, and thereafter consistent with historical experience and/or specialist or manufacturer recommendations. GM Specification V1.0-1993 is an excellent starting point for setting vibration standards for rotating machinery. It provides standards for various types of equipment, but you must decide what, if any, modifications should be used for your application. Following these standards, and adapting them as appropriate to other types of equipment, should ensure high quality of the installation and long life of the equipment. As always, these should be reviewed and modified as appropriate to suit your particular applications.

For example, Figure 7-1 shows that Beta's standard for commissioning of rotating machinery is being established such that, in general, vibration due to imbalance should be below 0.1 in./sec. and vibration due to misalignment should be below 0.05 in./sec.3 Further, there should be no vibration due to bearing faults at specific ball pass frequencies, there should be no vibration due to looseness, and there should be no significant "other" faults beyond 0.05 in./sec., e.g., electrical vibration faults due to a nonuniformity in the magnetic field of the motor, resulting in vibration at 120 Hz (or 100 Hz in most countries).

Figure 7-1. Vibration spectra.

These standards, when combined with GM Specification V1.0-1993 and adapted for any unique applications specific to Beta's requirements, should serve Beta well for ensuring long-lasting equipment. Unique applications might include reciprocating compressors, shaker tables, etc. These same standards can also be used to set up alarm limits for defining any particular action to be taken for a given level of alarm. After a critical machine train has been overhauled or repaired, it will be tested at startup, and corrective action taken as appropriate to ensure the equipment meets these standards.

Other requirements being established include: (1) a check at startup of motor current to ensure that it is below a certain level of the running current and, in some cases, a check of the cross-phase impedance to ensure a quality motor has been installed—normally, this should be done at the motor repair facility; (2) verification that proper lubricants are being used, consistent with manufacturer recommendations, at proper levels, and with no significant opportunities for contamination of the lubricants before being put into the equipment; and (3) an infrared thermography scan in certain more limited circumstances, e.g., to ensure that heat is properly distributed across reactor vessels, to ensure motor connections are properly tightened, etc.

Lubrication practices at many of Beta's plants were abysmal. Indeed, in many cases lubricators had been laid off. This is a bit of a mystery, because it's clear that most managers, even very senior ones, understand the need to lubricate their automobiles. In any event, Beta's lubricators (where it had them) often used small handheld pumps to fill reservoirs for certain equipment. The nozzles on these pumps were often left lying on the ground (dirty); the pumping handle and rod readily accumulated dirt in a "well" where they entered the container. Beta soon found that practices had to ensure that clean lubricant was going into the equipment. Otherwise, the dirt that came with the lubricant acted as a "lapping compound" and substantially reduced the life of the equipment. Granted, some of its practices may have been better than the open buckets of oil that were also sometimes used. Beta has also consolidated its
lubricants and validated them for application, but through painful experience found that additives in lubricants from different suppliers were not always compatible. Mixing oils of presumably the same specification, but from different vendors, often resulted in "frothy oil" and destroyed lubricating properties, requiring equipment to be shut down and the lubricant changed, again. The consolidated and standardized lubrication policy should prevent mixing of incompatible lubricants and advise purchasing not to change vendors without approval of maintenance and engineering. More on this issue is provided in the predictive maintenance section of Chapter 9.
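Before leaving this subsection, note that the commissioning limits described above can be reduced to a simple acceptance check. The sketch below encodes the example thresholds from the text (0.1 in./sec. for imbalance, 0.05 in./sec. for misalignment and "other" faults, and no detectable bearing-fault or looseness peaks); the survey readings shown are hypothetical, and real limits should come from your own adaptation of GM V1.0-1993 or similar standards:

# Acceptance check of a startup vibration survey against example commissioning limits.
# Limits are in inches/second; "no vibration" is treated here as no peak above a small floor.
NOISE_FLOOR = 0.01

limits = {
    "imbalance": 0.10,               # at 1x running speed
    "misalignment": 0.05,            # typically 1x/2x running speed
    "bearing_faults": NOISE_FLOOR,   # ball-pass / defect frequencies
    "looseness": NOISE_FLOOR,        # running-speed harmonics
    "other": 0.05,                   # e.g., electrical faults at 2x line frequency
}

readings = {     # hypothetical survey results for one machine train
    "imbalance": 0.07,
    "misalignment": 0.06,
    "bearing_faults": 0.00,
    "looseness": 0.00,
    "other": 0.02,
}

failures = [fault for fault, level in readings.items() if level > limits[fault]]
print("Commissioning PASSED" if not failures else f"Rework required for: {', '.join(failures)}")

In this hypothetical survey the misalignment reading of 0.06 in./sec. exceeds the 0.05 limit, so the machine would be realigned and retested before being accepted.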
Flange and Joint Installation

A standard for the design, specification, and installation of flanges and gaskets had been developed at Beta, but was generally ignored. One of the easier ways to ensure long-term equipment reliability is to eliminate leaks, in particular through good flange and joint installation. Leaks in piping, particularly at flanges and valves, are generally inexcusable, and applying its own standards would ensure exceptional mechanical integrity of piping and vessel systems. Issues such as adequate pipe hangers, flange face configuration and finish, parallelism, torquing, thickness, handling, centerline-to-centerline offset allowance, gasket types, etc., for various applications were specified. More importantly, mechanics and technicians had to be retrained to follow the standards, like craftsmen who take great pride in their work.

At one Beta plant, a mechanic was observed installing a gasket into a flanged connection. The gasket was lying on a concrete floor (dirty—lots of grit for scratching, adhering to the flange face, etc.). The mechanic was "struggling"—he didn't have the part fixed in place for the attachment, was not using a torque wrench, and was generally attempting to make the installation "free handed." This was not good practice, and Beta has found that if you don't want leaks at flanges and in valves, you must use precision standards and practices, and mechanics and other trades people must be allowed that freedom and encouraged to do the job with great care and precision, not fast. "Fast" will come with experience. Finally, a similar scenario could be described for cleaning piping and tubing at installation.
Workshop Support

As Beta has found, workshops that support installation efforts and repairs should be more than a lay-down, assembly, and fabrication area. They should be very clean, and certainly not dirty and disorganized. They should be operated in a controlled manner, not open to everyone 24 hours a day, particularly if those who frequent the workshop show little ownership and low expectations of its contribution. The workshop manager is more than a custodian. His or her job is to manage a clean, efficient workshop, fully responsible and accountable for its performance, and to ensure that all work is performed with great care and precision and validated for its quality using commissioning methods.
Housekeeping

On many occasions after a turnaround or major overhaul effort, or in some cases even a routine repair, Beta's plants were found to be cluttered—boxes, parts, paper, debris everywhere, scaffolding still up, etc. And, more importantly, there were leaks too, one at the flange of a critical reactor vessel, immediately after a shutdown. This spoke volumes about the care and precision of the installation effort (or lack thereof) and should not be tolerated. It sent a very clear message that standards and expectations were poor, that people running the plant did not care, and therefore no one else should be expected to exercise care and precision. Beta is working hard today to change this culture and attitude and is having considerable success. If you set high expectations, people will work hard to meet them, in spite of some occasional "grousing." If you set low expectations, people will meet those too, though not working as hard to do so. Setting expectations for precision in the installation effort and validating its quality through commissioning is a must.
Use of the Predestruction Authorization

At several of Beta's plants, it was found that the use of a "predestruction authorization" was exceptionally valuable. At these plants, the maintenance manager or other supervisors were often called upon by production managers to "just fix it!" and considerable pressure was placed on them to "get the plant back on line." This, in turn, often resulted in poor craftsmanship and precision being applied to the maintenance or repair effort. Often, critical work for ensuring equipment reliability would not be done in an effort to quickly get the plant back on line. One ingenious supervisor decided to try to put a stop to this and created the predestruction authorization (see Figure 7-2). When called upon to fix things quickly, as opposed to properly and with craftsmanship and precision, he would ask the requesting supervisor to sign this form. In effect, by signing it, the supervisor or manager would be authorizing the performance of bad work, something which few, if any, were willing to do. So long as the maintenance manager was willing to be held responsible for poor work, the production manager was willing to allow it to proceed, and to berate future failures with the refrain of "just fix it!" When he had to authorize it himself, the tide turned, and greater craftsmanship was allowed, yielding longer equipment life. As Ledet said, "You have the right not to do things wrong."

Predestruction Authorization

I, (insert name), do hereby authorize premature destruction of the equipment being repaired under Work Order No. (insert number), because I have not allowed adequate time for the performance of certain maintenance and/or startup and commissioning tasks (insert task numbers). I understand that not doing these tasks is likely to reduce the life of this equipment and result in its premature failure. I also understand that it may also increase the overall maintenance costs and reduce the quality of the product being produced.

__________________________ Signature
__________________________ Title

Figure 7-2. Predestruction authorization.
Installation and Startup Checklists

Almost all installation and startup efforts have some form of checklist to help verify that the appropriate work has been done properly.
This is the case whether the installation and startup is part of a capital project, the result of a major repair, or the result of a simple repair or changeover. We'll look at changeovers a bit more later. Just as a reminder, following is a listing of key areas to address during a typical installation and startup effort. Clearly, this list will require lots of additional detail to be developed to support specific equipment and process requirements for any given plant, and each will vary significantly from process to process. In many cases, Beta had not even bothered to make sure these basic areas were covered and hence had not developed each area's associated details and the standards for acceptance during the commissioning period.
Equipment

Verify the following for critical equipment. Verification requires that all equipment be operating to the appropriate standard, consistent with its relative consequence of failure. For example, an equipment failure that has a high consequence to the business would likely require greater precision and testing for meeting those standards during installation and startup.
All vibration levels are below specified levels at designated frequencies.
All lubricants meet specifications as to quantity and quality, specifically verifying viscosity and viscosity index, additives, and contamination, e.g., moisture and dirt. Apply ISO, SAE, and other standards as appropriate.
Electric power is of high quality—no harmonics, surges, sags, or ground loops.
All critical motors have proper starting current; low cross-phase impedance (e.g., <3% resistance, <5% inductance); proper capacitance and resistance to ground; less than 0.05 inches per second (1.25 millimeters per second) vibration at two times line frequency; and total indicated runouts on the shaft and bore of no more than 0.002 inches (0.05 millimeters).
An infrared thermography survey shows no hot spots, e.g., good electrical connections, motor controllers, transformers, quality lagging, quality steam traps, etc.
A leak detection survey shows no air, nitrogen, compressed gas, or vacuum leaks.
Process

Verify the following for critical processes. Verification requires that each process be operating to the appropriate standard, consistent with its relative consequence of failure.
All temperatures, pressures, rates, flows for key processes are in the appropriate range.
The process is sustainable for weeks or months at proper yields, conversion rates, cycle times, etc.
Control charts for key process variables are within defined limits during sustained periods, i.e., the equipment is "capable." Check initially, then verify routinely, e.g., with a comprehensive review at 6 months and again at 1 year.
Operators demonstrate ability to properly operate all critical equipment.
Few or no transients occur in either the physical or the chemical process.
The process is in control for all key process variables.
A couple of points are noteworthy regarding Beta's experience. First, this is the beginning of the effort, and, once achieved, you must have a program in place to maintain excellence and continuous improvement in each of these areas. Second, startup procedures need to be periodically verified using a step-by-step audit of the process. Often, the process and/or procedure would change at Beta, sometimes "de facto," without formal review or authorization, and sometimes not all operators "got the word" or got adequate training in these new startup procedures.
Commissioning Test Periods As we’ve discussed, comprehensive and precision installation and startup are critical for the reliability of the plant and success of the business. Eliminate the defects early and you’ll minimize their consequence later. That said, however, how do we know the commissioning period we should use for a given process verifying that a given equipment or process is capable. Many of Beta’s plants would run a new or overhauled process until “it worked,” and presume completion. However, this may not be an enlightened approach. Certainly it should go beyond the one day when the system does finally “work.” Hansen4 advises that commissioning test periods should be determined as follows:
where:
MTBF is the desired mean time between failure.
T is the test period required given certain confidence limits and degrees of freedom.
χ² is a statistical value from a standard table, also available in Microsoft Excel.
α is the confidence limit for the desired MTBF – α = 0.2 for 80% confidence, α = 0.1 for 90%, etc.

r is the degrees of freedom, or in this case the number of failures permitted in the test period.
For example, suppose we decide that an acceptable mean time between failure for a given process is 120 hours, and we want to have an 80% confidence of being able to achieve this; then we can determine the number of failures that are allowed during a given period and still meet our requirement. In this case the calculation yields:

R (failures)    T (hours)
2               193
4               359
6               513
With these results, we then know that we can have no more than R failures in T hours, and so we select the time period, T, that we like, during which we can have no more than the indicated number of failures, R. A couple of points are worth noting. This calculation assumes that your system will behave according to a standard statistical distribution, e.g., normal. So if your system doesn’t meet this distribution, that is, for some reason it is skewed, then the result will also be skewed. You may find this out with time, but this approach is a good starting point and is certainly better than not doing anything at all. Second, a “failure” of the system is not just an equipment failure. It should be defined as any time the process or system does not meet any of its functional requirements.
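As a rough check, the table above can be reproduced with a short calculation. This is a sketch only: it assumes the chi-square test-time relationship given earlier, treats the R column as the degrees of freedom, and uses the scipy statistics library.

```python
# Sketch: commissioning test period for a desired MTBF and confidence level.
# Assumes T = MTBF * X2 / 2, where X2 is the chi-square value at the chosen
# confidence level with r degrees of freedom (r taken from the R column above).
from scipy.stats import chi2

mtbf = 120.0        # desired mean time between failures, hours
confidence = 0.80   # i.e., alpha = 0.2

for r in (2, 4, 6):                           # values from the R column
    t = mtbf * chi2.ppf(confidence, r) / 2.0
    print(f"R = {r}: test period of about {t:.0f} hours")
# Prints roughly 193, 359, and 513 hours, matching the table above.
```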
Quick Changeover Principles

Hansen4 also advises of the need to review and improve changeover procedures, especially where changeovers result in significant losses to production capability. He provides a 10-Point Checklist for facilitating improvement in changeover practices. The approach uses a team of people making, and then reviewing, a videotape of a typical changeover.

1. Videotape a typical changeover event—you may need several synchronized cameras. Include preparation and follow-up actions.

2. Review the video as a team and document:
Each action
Its elapsed time
The reason for each action
3. Analyze each action—define whether it is internal to the process or external.
If internal, it must be done within the changeover process and timeline.
If external, it could be done off-line in preparation for the changeover effort.
Review all internal efforts for those that can be performed in parallel.
4. Review internal critical path items for those that can be reduced in time, e.g.:
Using quick-lock connections vs. screw on
Redesigning fixtures and dies to mount to prefitted guideposts or points
Preheating outside the changeover, etc.
5. Review resources needed for reducing the timeline—people, parts, materials, etc.

6. Justify the economic or other benefit of applying additional resources.

7. Plan the revised changeover process, and train all involved in the proposed changes.

8. Do the changeover in the new format two to four times.

9. Review the new changeover for additional improvements, and modify accordingly, that is, repeat the process as necessary.

10. Train everyone in the methodology and apply it in a disciplined way.

One final point that should be made is to apply the outlined installation, startup, and commissioning principles, as appropriate, following the changeover. After all, a changeover is like a "mini" installation and startup effort. It will not likely require the comprehensive effort of a major installation, but it should follow similar principles, adapted for the business circumstance.
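The internal/external split in steps 3 and 4 lends itself to a simple tabulation. The sketch below uses purely hypothetical actions and times (not Beta's) to show the kind of before/after comparison such an analysis produces:

```python
# Sketch: classify videotaped changeover actions as internal or external,
# then estimate the timeline left after external work is moved off-line.
# The action list and times are hypothetical, for illustration only.
actions = [
    # (description, minutes, internal to the changeover?)
    ("shut down and lock out the line", 10, True),
    ("fetch tooling from the crib",     15, False),  # could be staged beforehand
    ("unbolt and remove the old die",   25, True),
    ("preheat the new die",             30, False),  # could be preheated off-line
    ("mount and align the new die",     35, True),
    ("first-article quality check",     20, True),
]

total_time = sum(minutes for _, minutes, _ in actions)
internal_time = sum(minutes for _, minutes, internal in actions if internal)

print(f"Current changeover time:           {total_time} min")
print(f"After externalizing off-line work: {internal_time} min")
print(f"Capacity recovered per changeover: {total_time - internal_time} min")
```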
Summary

Installation and commissioning practices will not correct a poor design. However, they may identify design deficiencies. Therefore, a process must be established to provide feedback to plant personnel regarding potential improvements that have been identified through the commissioning process. Commissioning should be done as part of a partnership among plant staff. The goal is not to search for the guilty, but rather to learn from our potential shortcomings and put procedures in place that ensure continuing excellence. When problems are found, they
should be used as a positive learning experience, not a condemnation exercise. For projects and shutdowns, the commissioning process begins with the design of the project, or the planning of the shutdown. Specific equipment should be identified, procedures and standards defined, and time built into the plan, to verify the quality of the installation effort. A study by the Electric Power Research Institute in the late 1980s found that half of all equipment failures occurred in the first week after startup and lasted less than 1 week.5 This begs for a better installation and commissioning process, and, as Beta has found, particularly for the equipment. Let's consider Beta's operational and maintenance practices next. As you're reviewing these, consider where they may apply to the installation, startup, and commissioning process.
References

1. Post, R. L. "HAZROP: The Approach to Combining HAZOP and RCM at Rohm and Haas," Reliability Magazine, February 2001.
2. Dundics, D. "Use RCM for Acceptance Testing to Detect Installation Defects," Proceedings of the Machinery Reliability Conference, Phoenix, AZ, April 2001.
3. Pride, A. Pride Consulting, internal document, Knoxville, TN, 1997.
4. Hansen, R. C. Overall Equipment Effectiveness, Industrial Press, New York, 2001.
5. Smith, A. Reliability-Centered Maintenance, McGraw-Hill, New York, 1993.
Operational Practices
8
Reliability cannot be driven by the maintenance organization. It must be driven by the operating units . . . and led from the top.
Charles Bailey

Beta reviewed the operating practices at several of its manufacturing plants and concluded that poor practices in plant operation, process control, and production planning were often at the root of poor plant reliability and uptime performance, resulting in increased operating and maintenance costs, poor product quality, and poor delivery performance, among other things. Indeed, a preliminary analysis indicated that while maintenance costs were well above world class, over half of these costs were being driven by poor process control and operational practices. Maintenance had historically been "blamed" for equipment downtime and high maintenance costs, but on closer review, most of the maintenance costs were the result of poor operational practices. The conclusion was reached that Beta had to have much better consistency of process, greater precision in plant control, and much better operator training and expertise to eliminate or minimize the root cause of many problems.
Improving Operating Practices to Reduce Maintenance Costs

At Beta's Wayland plant, an expert had been called in to help reduce maintenance downtime. It seemed that the plant was having numerous equipment failures, resulting in lots of downtime, lost production, and out-of-pocket costs. In the course of the review, an inquiry was made as to key process control parameters, at which time the process engineer brought forth several graphs. In reviewing these graphs, it was determined that the plant should be run with moisture content of the process stream below 30 ppm; that an occasional spike above 30 ppm was acceptable, but even those should last no more than a few hours, and be no more than 60 ppm. This was because excess moisture (>30 ppm) reacted with the process stream, creating acid in the stream, which dissolved the carbon steel piping, vessels, and valves. Further, compounding this effect was the fact that when the acid dissolved the carbon steel, it also released free iron into the process stream, which reacted with other process constituents, creating sludge that fouled the reactor, severely impairing its ability to operate. So, it was very important that moisture levels be controlled very tightly to avoid this problem.

Indeed, the design engineers had assumed that the plant could be operated below 30 ppm with ease, and all material specifications were placed accordingly. Apparently, they gave more credit to the process control capability of the plant than warranted, or ignored the potential for upsets in the process stream. Because moisture was an inherent byproduct in the production process, it may have served them well to consider better control or alternative materials, or both. Because no system can be perfectly controlled, process transients (both physical and chemical) should always be considered in the design process.

However, on reviewing the actual data, which took some time to find because they were not actually used in operating the plant, it appeared that the plant rarely operated below 60 ppm, which was the maximum at which it should have been operated. In fact, the plant typically ran above 100 ppm, and more than occasionally spiked the instrument scale above 200 ppm. (See Figure 8-1.) Further, on a more detailed analysis to determine whether these data were correct (in general they were), it was also discovered that:
Figure 8-1. Wayland plant moisture control chart.
1. The measurement systems were relatively poor—frequent instrument failure, infrequent calibrations, etc. The instruments were sufficiently accurate to verify that moisture content was way too high, but not sufficiently accurate for the precision process control being planned.

2. The control capability was insufficient, both from a process perspective and from a feed-stock perspective. Feed-stocks, which came from an upstream plant at an integrated site, typically had moisture content well beyond specification. In addition to this, however, process control capability was limited, partly because of poor instrumentation and also because of inadequate design and business expectations. Moisture carry-over from a feedback process loop had limited control capability.

3. The plant was being expected to run beyond its design limits to increase yields, which also increased moisture carry-over into the process stream. This effort to make up the difference in production shortfalls proved less than successful and even short-sighted.

4. The laboratory analysis capability for measuring process stream parameters was inadequate. Measurements of the process stream were only taken twice a day, but feed-stocks would often change three to four times in a given day. And, when they were taken, the laboratory instruments were not
capable of providing accurate information on the process condition in a timely manner.

After a short operating period, a given reactor (there were several on stream) would have to be shut down and cleaned or, in some cases where the damage was too severe, replaced. Asset utilization rates were running at 50% or less, well below world class, and maintenance costs were very high—more than 12% of asset replacement value, as compared with an average level of 3% to 6%, and a world-class level of nominally 2%. Breakdown production losses were running over 20%, compared with an average level of some 5%, and a world-class level of less than 1% to 2%.

Was this a maintenance problem? Clearly, maintenance costs were excessive, and equipment downtime was large. Was maintenance at the root of the problem? For the most part, it was not. The results of an audit comparing maintenance practices against a world-class standard did indicate that the maintenance department had things it could improve upon. For example, the plant needed better planning and scheduling, better condition monitoring, better training, better equipment history records for Pareto and RCM analysis, better installation and alignment of pumps, etc., but given the constant state of crisis in the organization, maintenance had little time to improve the processes until the root cause of the failures was addressed through better process chemistry control.

Were the operations people at fault? Perhaps, but it turned out that they had no choice but to accept feed-stocks of varying quality from upstream plants that were part of an integrated site production process for the entire site where Wayland was located. They had several things they could do to minimize moisture content, and mitigate its effects, but until the question was asked, they did not fully appreciate the implications of current operating practices. They just continued to accept the feed-stocks they were given without challenge.

To make a long story short, feed-stock quality was a serious problem, exacerbated by site operations practices and expectations, by poor plant process control, and by poor maintenance practices, much of which could have been mitigated by better design standards. Everyone had a part to play in this drama, and until they all sat in a room together as a team to recognize that the parts had to be played in cooperation, the solution to improving performance was elusive.
Applying better process control practices, maintenance practices, and working with site management to mitigate feed-stock effects, the Wayland plant is now well on its way to substantially improved performance. For example, breakdown losses are now down to 5% or so, a tremendous improvement over the dismal levels of 20%+. Wayland recently also recorded its best production month ever, some 30% better than any previous month in the history of the plant. And yet, it still has much opportunity for improvement. With continued leadership, no doubt Beta’s Wayland plant will be among its best performers in the months and years to come.
Consistency of Process Control Beta’s Auxier plant was struggling to meet customer demand at one of its plants. It seemed it could sell everything it could make, but just couldn’t make enough. A new plant manager had been hired to help bring the plant to a better level of performance. After spending just a few weeks in the plant, he found, among many things, that each shift within the plant operated the plant using different control points for key process parameters. It seemed that empowerment had been taken to the lowest level possible, and each operator had the power to change control points based on his or her view of where the process ran best, so long as each kept the process within a control band for key parameters. Good in theory, but bad in practice, without some overall guidance on the process. The new plant manager gathered his team together and advised them that the plant had to have consistency of process, but rather than dictate control parameters, he asked that each shift select what it thought were the control points that should be used. Likewise, he and the process engineer selected a set of control points. They concluded the exercise with four separate sets of specific control points. Then he advised all the operations staff that for the next 4 weeks they would run the plant each week strictly using one of each of the set of control points developed during the previous exercise—no deviations during that week from the control points selected without his specific approval. Every shift during a given week had to use the same control points, no exceptions. At the end of the 4-week test period the plant showed better performance for each week with each of the different sets of control points than it had in several months. It also turned out that the
points that the plant manager and process engineer had selected ranked third out of four in the overall performance. As it turned out, every time a shift supervisor or operator would change the operating conditions without just cause, he or she would destabilize the system and reduce plant performance. Consistency of process yielded better results in every case and was much better than the inconsistency between shifts that had previously been in place.
Control Loops

Surveys1 have indicated that in a typical process manufacturing plant only 20% of control loops perform as desired, and, conversely, some 80% of control loops can actually increase process variability when operated in automatic mode. Control loops are also reported to account for 90% of the variability in product quality. With this in mind, and assuming these figures are even roughly representative, is it any wonder that operators are reported to routinely disable process control systems? Control loops and systems must be properly designed and must be maintained, that is, routinely serviced and calibrated to be effective. Otherwise, ". . . operators will leave a controller on closed loop control, if he does not have to worry about the controller getting him into trouble. If he has to worry about what the controller is doing as well as the unit, he will turn the controller off. If operators find they can trust the controller to keep them out of trouble and it does not increase their work load, they will readily accept it."2 Further, "Process operators are incapable of coping with the multidimensional aspects of (changing operating conditions) when the plant is operated at its constraints," necessitating precision process control.3

Beta's experience has been quite consistent with these situations. For example, at Beta's McDowell plant, the process control system engineer was asked how often the operators bypass or disable the control system, to which he replied that they rarely did this. One of
the senior operations supervisors was then asked to check the digital control system (DCS) to see how often in the past month the system had been disabled or bypassed. It turned out that in the previous month this had occurred some 300 times, on average every 2 to 3 hours. Clearly, the control system was not being used as intended, for whatever reason. Further emphasizing the point, one of Beta's statistical process control (SPC) specialists, who has only recently come into higher respect and appreciation, has indicated that if process control systems are not maintained, some 50% lose their effectiveness after 6 months of operation.

Let's consider some fundamental issues related to statistical process control.4,5 It is generally agreed that one must:

1. Understand your key process variables, those used to control the process.

2. Plot each key process variable for a routine period and determine the mean and standard deviation. Verify that the data are normally distributed. Establish the upper control limit (UCL) as the mean plus 3 standard deviations, or x̄ + 3σ, and the lower control limit (LCL) as x̄ – 3σ. The natural tolerance is then 6σ.

The process is not in control if:

1. There are any points outside the control limits.
2. There are seven consecutive points trending up.
3. There are seven consecutive points trending down.
4. There are seven consecutive points above the mean.
5. There are seven consecutive points below the mean.

We should not confuse upper and lower control limits with upper and lower specification limits (USL and LSL). Control limits are inherent in the system's natural variability. Specification limits represent our expectation for the process or the product, which may or may not be met within the inherent capability of the system. When expectations are not met, then the process may not be capable of achieving production requirements.
Control performance capability, or Cpk, is a measure of the ability to deliver quality product through quality processes within a plant. It demonstrates what the process can best achieve and should be the objective of any control improvements. It is defined as the lesser of (USL – x̄) / 3σ or (x̄ – LSL) / 3σ. For example, if Cpk is greater than 1.5, you could expect that just over 0.1% of your product would be off specification; whereas if Cpk = 1, you could expect that over 2% of your product will be off specification. The higher the value of Cpk, the less off-spec product, and vice versa. Many believe that a Cpk of 2 is about optimal. Higher values probably mean that the specification limits are set too high. Lower values mean the process is not in sufficient control. Beta is presently spending considerable effort to ensure that processes are in control and that the Cpk of key variables of the production process is measured, not just the end quality of the products. Controlling the process effectively leads to higher-quality products.

In summary, basic process control in a process plant requires:

1. Knowledge of key process variables and measurement methods
2. A process control philosophy
3. Understanding of control limits for each variable
4. Measurement capability—accurate, well-maintained instrumentation for key variables
5. Control capability—controller/operational set points and biases
6. Operation within control limits (and, if not, an understanding of why not for each event)
7. Understanding of process variable lag/lead times and interrelationships
8. Logic and sequence interlocks
9. Alarm points and philosophy
10. Hazard operations analysis, if necessary
11. Timely, appropriate, and accurate laboratory/QC analysis
12. An understanding and rigorous application of SPC principles
A similar model could be developed for nonprocess applications. Finally, specific products should have very definitive "recipes" for running the production process to ensure meeting product specifications. For most operators these recipes, and the basis for running the plant to each "recipe," should be second nature through training and practice. Ensuring the quality of the production process will ensure the quality of the product.
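As a rough illustration of the calculations in this section (the ±3σ control limits, the run rules, and Cpk), the sketch below uses hypothetical readings and specification limits, not Beta's:

```python
# Sketch: control limits, two simple run rules, and Cpk for one key process
# variable. The readings, USL, and LSL are hypothetical.
import statistics

readings = [29.8, 30.2, 30.1, 29.7, 30.4, 30.0, 29.9, 30.3, 30.1, 29.6,
            30.2, 29.9, 30.0, 30.5, 29.8, 30.1, 29.7, 30.2, 30.0, 29.9]
usl, lsl = 31.0, 29.0            # specification limits (our expectations)

mean = statistics.mean(readings)
sigma = statistics.stdev(readings)              # sample standard deviation
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma   # natural process variability

points_outside = [x for x in readings if x > ucl or x < lcl]

def run_of_seven(points, predicate):
    """True if seven consecutive points satisfy the predicate."""
    streak = 0
    for p in points:
        streak = streak + 1 if predicate(p) else 0
        if streak >= 7:
            return True
    return False

seven_above = run_of_seven(readings, lambda x: x > mean)
seven_below = run_of_seven(readings, lambda x: x < mean)

cpk = min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

print(f"UCL = {ucl:.2f}, LCL = {lcl:.2f}, Cpk = {cpk:.2f}")
print("In control?", not points_outside and not seven_above and not seven_below)
```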
Process Conformance for Operational Excellence

Beta's Rock Fork plant was having considerable difficulty with the quality of its product. First-pass quality was only 90% at its best, and sometimes dipped to below 80%. This was compared with typical performance of 92% to 95%, and to a world-class level of 98%+. This was resulting in product rework, off-grade product selling at much lower margins, the loss of customer confidence and orders, and overall poorer business performance. The situation was developing into a crisis.

In reviewing the production process for conformance to standards, a number of problems were noted. These were reviewed and categorized, and compared with standard process conformance methods for ensuring statistical process capability. Following this, a plan was developed for improvement, which had to be rapidly implemented. This is described in the following text.

Fundamentally, excellence in process conformance is essential for good control and ensuring production statistical "capability." It requires that we understand all our key process variables and their upper and lower control limits; that we understand our demonstrated performance for operating within these limits; and, finally, it requires a shiftly review of our conformance to these limits. The model chosen at the Rock Fork plant assumed that if the key process variables were not in conformance, then it was likely that there was a problem in one of four areas:
Any nonconformance had to be identified as to magnitude and duration, and its cause assigned to one of these four areas. These data were recorded and then used to resolve the root cause of the nonconformance. This process is shown in Figure 8-2.

Figure 8-2. Process model for eliminating nonconformances.

Using Figure 8-2 for our discussion, for example, the process is in conformance until we experience a nonconformance due to poor use of standard operating procedures (SOPs) or standard operating conditions (SOCs). We measure the magnitude and duration of this nonconformance and record that information in a database as we get the process back in control. Next, we have another nonconformance event due to poor quality raw material. Again, we measure the magnitude and duration of the nonconformance as we get the process back in control. And we do the same thing for other nonconformance events, which could be driven by poor instrumentation or equipment reliability.

The Rock Fork plant began collecting the data as to the magnitude and duration of these events for each key process variable, recording them shiftly and compiling them in a database for further analysis. The initial analysis was done using a simple Pareto Chart, as shown in Figure 8-3.

Figure 8-3. Pareto of nonconformances.

The plant found that nearly half of its nonconformance was due to poor standard operating procedures and conditions. Further analysis indicated that this was due to a variety of causes. In some cases, inadequate training had been provided for operators. In other cases, the procedures were simply wrong. Several changes had occurred to the process and the procedures had not been updated; procedures had been modified in the field by the operators to reflect actual experience but were not reflected in the official procedures; or different operators were interpreting the same procedures in different ways. One of the more common situations that was occurring was that different shifts operated the plant differently, even with the same procedures. So, every shift was "tweaking" the plant not long after each shift change. A specific plan was established to ensure proper training, accurate procedures, and consistency of operating practice across the shifts.

The next biggest cause of nonconformance was poor equipment reliability. Considerable inconsistency in the process was being introduced due to equipment failures. In some cases operators were working around the failures but had much greater difficulty in controlling the plant effectively. In some cases they were actually inducing equipment failures through poor operating practice. In other cases, equipment downtime was also inducing lots of transient events during shutdown and startup or during large production rate swings, which were of course more difficult to manage. Likewise, a specific plan was established to improve equipment reliability, one including both operations and maintenance in the improvement process.
Next, of course, was instrumentation. The problems here were related to two basic issues—poor design/location and inadequate calibration/care. The issue of calibration and care was basically straightforward once an instrument engineer was put in place. The design and location problems were longer term and required capital.

Finally, raw material quality was driving the fewest nonconformance problems. However, this was the easiest one to resolve. It seems that purchasing had bought material to the required specification but from a different supplier, who was cheaper, at least on price. After review of the situation with the supplier, the decision was made to use it on an exception basis only, that is, only when the primary vendor was unable to deliver.

One point that should be emphasized here is that the problems were fairly complex and came from a variety of sources, such as design, purchasing, operations, and maintenance practices. Resolving them required a team-based approach. To its credit, the staff at the Rock Fork plant was very good in addressing these problems. One other technique it used to prioritize resources, which is also fairly simple and common, is the use of a benefit/effort matrix. Each major nonconformance issue was plotted in the matrix, and, for example, the problems that had a high benefit and low effort would be given higher priority. Those that reflected the opposite condition, low benefit and high effort, of course had lower priority. And there were variations in-between, but it was a fairly simple tool to help the staff arrange priorities.

The results of applying the process conformance model were dramatic. First-pass quality went from the 80% to 90% level to 95% within 6 months, and efforts are continuing to bring the plant to a world-class level of performance.
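A minimal sketch of the shiftly nonconformance log and Pareto tally described above; the event records are hypothetical, but the four cause categories are those used at Rock Fork:

```python
# Sketch: tally nonconformance events by cause category, Pareto-style,
# weighting each event by its duration. The event records are hypothetical.
from collections import Counter

# (key process variable, cause category, duration of nonconformance in hours)
events = [
    ("reactor temperature", "SOPs/SOCs",             3.0),
    ("feed moisture",       "Raw material quality",  1.5),
    ("reactor temperature", "Equipment reliability", 4.0),
    ("line speed",          "SOPs/SOCs",             2.5),
    ("feed moisture",       "Instrumentation",       1.0),
    ("line speed",          "SOPs/SOCs",             2.0),
]

hours_by_cause = Counter()
for _, cause, hours in events:
    hours_by_cause[cause] += hours

total_hours = sum(hours_by_cause.values())
for cause, hours in hours_by_cause.most_common():    # largest contributor first
    print(f"{cause:22s} {hours:5.1f} h  ({100 * hours / total_hours:4.1f}%)")
```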
Operator Basic Care

Notwithstanding application of good basic process control, operator basic care (some might even call it PM, depending on the circumstance) is one of the simplest, most powerful, and yet most scarcely used tools in most manufacturing plants today. If we operated our manufacturing plants with the same care and diligence that we do our cars, at least those of us who care about our cars, plant reliability, uptime, and costs would immediately improve substantially, if not dramatically! Only the most callous of individuals would ignore the
basics of maintaining their cars—lubrication, fuel, water, routine PM, looking, listening, smelling, feeling, taking care of "funny" noises before they become catastrophic failures, etc. If we could combine basic care with good process control, we would be well on our way to world class. Yet in most manufacturing plants, and Beta's plants are no exception, the sense of care and ownership is more the exception than the rule.

Why is this? The answers are not clear. Part of the answer, no doubt, is the sense of disenfranchisement that cost-cutting and downsizing create among employees, particularly down in the ranks. For example, one of Beta's mechanics asked in despair, "Where's the dignity?" after learning of his friend being abruptly laid off, having served the company some 20+ years. He went on to remark that the shop floor people understood that the company had to be competitive, but the process being used for becoming competitive (downsizing) left most scared and powerless, like waiting for a death sentence. This is not an environment that creates a sense of loyalty and ownership.

Another part lies in the fact that management has not stated its expectations of the shop floor. Beta has encouraged its senior management to state that they expect the operators and mechanics to treat the equipment in the plant with care, like they would their own cars. Not stating the obvious leaves it in doubt or, at the very least, does not reinforce the expectation. At one of Beta's plants someone remarked, "This is the place where you make your living. If you don't take care of the place where you make your living, it may not be here to take care of you." They went on to remark that while the employees did not literally own the equipment, they did own their jobs related to the equipment and should take that ownership as part of their craftsmanship. Sometimes ownership is created by stating that it is expected. Finally, another part (and there may be more not discussed here) lies in not creating an environment that fosters ownership and care.
Establishing Operator Care/Ownership At Beta’s Bosco Plant, the question was raised at to whether or not Bosco operators were doing any basic care or PM of the equipment. The answer was no, not really. The reasons were numerous, and related to union work rules, historical behavior, culture, etc. The general conclusion was reached by management that operators
should do some PM, but how to go about this was another question. Many thought the idea was good; most thought that operators, and the union bosses, would not go along with this. After much thought, it was decided to conduct a reliability workshop with the shop floor—operators, mechanics, union stewards, and plant engineers. The goal of this workshop would be to explore expanding operator basic care and PM, but operators would decide, in cooperation with union stewards, plant engineers, and mechanics, electricians, etc., exactly what would be done.

Teams of people from each area of the plant were assembled, and first a half-day reliability workshop was presented, stressing the need to stay competitive, outlining best practices, and stressing their opportunity to have some say in the future of their plant. It was also stressed that in this effort we must:

1. Ensure that no safety hazards were created.
2. Provide for adequate training in any new skills required.
3. Resolve and/or negotiate any union work rule issues.
4. Not have a negative impact on existing job requirements.
5. Work as a team for the good of all employees.

Most people, when offered the opportunity to do a good job, will take it enthusiastically. These employees did. After the reliability workshop, the larger group was broken into smaller groups, and examples were provided of what operators were doing at other plants. This was not offered as an example of what was expected of them, but rather to stimulate their thinking about what was possible or reasonable. Examples included:

1. Lubricating of equipment—understanding levels, frequencies, types
2. Minor adjustments, e.g., checking/tightening belts, conveyors, parts, interlocks, etc.
3. Cleaning of equipment
4. Minor PM, e.g., changing packing, filters, etc.
5. Minor instrument calibrations
6. Preparatory work for the following day's maintenance, e.g., draining, loosening, etc.
7. Log sheets for noise, air pressure, temperatures, steam/air/gas leaks, housekeeping, etc.
8. Log sheets for specific process conditions
9. Above all, look, listen, smell, feel, and think about the plant and its care
10. Other ideas that the group developed

After considerable discussion and effort, each team recommended what was reasonable to expect operators to do, without affecting existing job requirements, and including requirements for training, safety, union rules, etc. Each group had a little different view, but in the end all the issues were resolved and operators were soon involved in greater basic care and PM and, more importantly, had created a sense of ownership among everyone. Eventually what evolved with operations input was actually a maintenance hierarchy:

1. Operator basic care and PM, ownership
2. Area crafts/trades, support functions
3. Central maintenance support functions
4. Contract maintenance
Operator-Owner and Maintainer-Improver Principles

In essence, by applying this approach at Beta, the operators became operator-owners,6 that is, they became responsible for:
Observing equipment performance variations in process and abnormalities in condition
Operating within agreed process limits; measuring any variations outside those limits
Maintaining good equipment condition, e.g., tighten, lubricate, clean, and manage environmental factors
Daily monitoring and inspection, and eliminating obvious defects—identifying deficiencies in the operating procedures and basic design
Prioritizing improvement work for the support team and related resources
Measuring OEE and categorizing losses from ideal for each shift
Working with the “maintainer-improver” to balance operator care and PM
In turn, the maintainer-improver became responsible for:
Being the technical expert—mechanical, process control/ instrumentation/electronics, electrical/power
Defect elimination—analyzing and measuring equipment performance and potential defects; supporting root cause analysis
Supporting operator-owners regarding standards for inspections, lubrication, minor adjustments, or routine care/service
Assisting operator-owners in diagnosis of problems and variations in performance
Developing the preventive maintenance plan:
Proactive maintenance
Time-based and condition-based procedures and results
Timing and scope of corrective work needed
Overseeing large preplanned maintenance work
Analysis of the effectiveness of PM/corrective work
Supporting root cause analysis and design-out efforts
Participating in design reviews for new projects
Supporting commissioning of new/modified equipment
Providing systems support:
Reviewing and upgrading procedures, specs, and tasks to ensure reliability
Ensuring excellence in spares and bills of material
Assisting with technical documents and drawings
Maintaining equipment performance and maintenance history
Contributing to the reliability and manufacturing improvement plan
Assisting in asset management planning
Mentoring junior engineers
In doing this Beta achieved a much greater degree of teamwork and performance.
Shift Handover Process

The exchange of information between shifts, especially as it relates to plant and equipment performance, can be crucial to plant success. Beta reviewed its shift handover process and found it to be lacking when compared with best practice.7 In many circumstances, shift handover was not much more than a wave in the hallway or dressing rooms, or sometimes even in the parking lot. Certainly, there were times when supervisors and staff spent considerable time discussing what had happened on the previous shift and advising of problems or issues. But this was more by happenstance, and far too dependent on the personalities and moods of the individuals involved.

Therefore, Beta established a shift handover procedure, which defined current plant or process condition, any problems or upsets that had occurred, any unusual events, etc. In fact, Beta ensured that the shift supervisors and senior operators actually wrote the procedures for each plant, creating a sense of ownership of the process. The general rules of the shift handover process included:

1. It must be conducted face to face.

2. It must follow specific procedure(s), based on an analysis of needs. Procedures are especially important:
a. When there's a big difference in experience between shifts
b. After a major maintenance or capital project effort
c. Following and during the lengthy absence of key personnel
d. After a significant transient, particularly if plant/system lag or response time overlaps shifts

3. It must allow as much time as necessary for communication, especially when problems have occurred on the prior shift.
4. Both shifts are responsible for communication, teamwork, and joint ownership.

Examples of issues discussed include equipment and process conditions, any maintenance activity and lock-out conditions; any significant transients or upsets; any personnel issues or absences; any new products, practices, procedures, etc.
Production Planning

One of the more troubling issues at several of Beta's plants was that of production planning. As noted in Chapter 3, production planning was often subject to the whims of marketing, or more accurately sales, and changed from moment to moment at many plants. While integrating its marketing and manufacturing strategies and having a process for rationalizing product mix would help tremendously, Beta also felt that several actions were necessary at its plants to ensure plant facilitation of the integration process. Among these were:

1. Creation of a written sales and production planning procedure, including forecasting, supply/demand balancing, verification of production forecast against demonstrated plant performance, conformance to production plan, and causes for failure to do so, etc.

2. Refinement of a sales/production forecasting system. A simple tool called a "stagger chart"8 (see Table 8-1) would be used to track actual performance against forecast.

3. Balancing of supply, demand, and inventory levels.

4. Planning of raw material requirements as a result of these reviews.

5. Linking current performance to business financials and trending this performance.

6. Creation of a longer-term logistics requirements plan from this procedure.

7. Creation of a process for resolving conflicts in the supply chain.
Table 8-1
Stagger Chart. Forecasted Incoming Orders For:

Forecast
made in:   Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec
Dec        100   125   135   120
Jan        105*  120   140   135   130
Feb              95*   130   125   140   130
Mar                    115*  140   145   140   150
Apr                          110*  130   120   120   140
May                                115*  120   135   140   145
Jun                                      105*  115   125   130   130
Jul                                            100*  110   120   125   120
Aug                                                  110*  120   130   140   140

Numbers marked with an asterisk represent actual performance against forecast. Note that, at least for this sales group, which is typical at Beta, it is almost always more optimistic than its actual performance, but at least now Beta has a calibration capability that should be fairly accurate for the existing set of products. Beta is also exploring using software programs that incorporate "fuzzy" logic into the forecasting process, as well as other statistical tools, but at present this is only at the developmental stage.
8. Linking to the supplier information systems for planning purposes.

9. Integrating more fully the production planning and maintenance planning functions.

10. More comprehensive training of production planners in the planning systems and in integrated logistics and supply chain issues.
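As a rough sketch of the calibration a stagger chart such as Table 8-1 provides, the snippet below computes average forecast bias by lead time; the forecast and actual values are illustrative only, not taken from Beta's table:

```python
# Sketch: measure how optimistic the sales forecast runs, by lead time,
# using stagger-chart style data. All numbers are illustrative only.
forecasts = {
    # target month: {lead time in months: forecast}
    "Apr": {1: 140, 2: 125, 3: 135},
    "May": {1: 130, 2: 145, 3: 140},
    "Jun": {1: 120, 2: 120, 3: 145},
}
actuals = {"Apr": 110, "May": 115, "Jun": 105}

for lead in (1, 2, 3):
    errors = [forecasts[month][lead] - actuals[month] for month in actuals]
    bias = sum(errors) / len(errors)
    print(f"{lead}-month-ahead forecasts run {bias:+.1f} units versus actual, on average")
```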
Advanced Process Control Methods Beta’s first order of business is to establish the basics of process control, as previously described. However, at one of its advanced operations, the Pikeville petro-chemical plant, Beta has implemented a program for more advanced process control.9 Essentially, advanced process control was applied in a multivariable environment, one that needs integration of key quality and uptime. The first step in the effort was to build a dynamic model from data associated with the
key variables, e.g., a mathematical representation of the production process, including thermodynamic, equilibrium, and kinetic relationships, as well as key process constraints. The model was developed and run on-line to initially predict system behavior and track driving variables. Coefficients were used to fit the model to the variables that represented the dynamics of the process and calculate the "trajectories" of the process. Once satisfied with the dynamic model, an optimizer was embedded into the model, which included economic considerations related to the process plant, e.g., raw material cost, product being made, process rate, gross margins anticipated, etc.

Next, the model was tested in a noncontrol mode for 2 weeks, including several changes to the process, to ensure that the model was properly predicting system behavior. Once satisfied with this, an advanced controller was designed to support the plant's economic objectives, fine-tuned, and implemented through a commissioning process that verified that the controller could effectively use the dynamic model to optimize economic performance of the plant. Beta has achieved millions in savings using this more advanced methodology. That said, the process will not work unless the basics are in place first.

Finally, several of Beta's plants have tried with varying degrees of success to use a more advanced technique called design of experiment (DOE), which helps determine the sensitivities of product outputs to various input parameters, facilitating greater control of those input parameters that offer the greatest opportunity for optimization of outputs. The technique has proven itself in several instances to be very beneficial. However, it does require considerable discipline, as well as the ability to vary and accurately measure key variables that control the production process during a given time period. This discipline and capability are, for the most part, in need of improvement throughout Beta's manufacturing operation. Once the basics are well established, this technique represents an opportunity to take Beta's plants to the next level of performance.
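A very rough sketch of the two stages described above, fitting a simple dynamic model to plant data and then embedding an economic optimizer on top of it; the first-order model, step-response data, prices, and constraint are all hypothetical and far simpler than the multivariable model described in the text:

```python
# Sketch of the two stages described above: (1) fit a simple dynamic model to
# plant data, (2) embed an economic optimizer on top of the fitted model.
# The model form, data, prices, and constraint are hypothetical.
import numpy as np
from scipy.optimize import curve_fit, minimize_scalar

# --- 1. Dynamic model: first-order response of yield to a feed-rate step ---
def step_response(t, gain, tau):
    return gain * (1.0 - np.exp(-t / tau))

t_hours = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 10.0, 15.0])   # hours after the step
dyield = np.array([0.0, 0.9, 1.6, 2.5, 3.0, 3.4, 3.5])      # observed yield change, %
(gain, tau), _ = curve_fit(step_response, t_hours, dyield, p0=(3.0, 3.0))

# --- 2. Economic optimizer: choose the feed rate that maximizes gross margin ---
price, feed_cost = 40.0, 25.0          # $/unit of product, $/unit of feed
def negative_margin(feed_rate):
    # Steady-state yield predicted around a 10 t/h base case, with a penalty
    # once the hypothetical fouling constraint at 14 t/h is exceeded.
    yield_pct = 80.0 + gain * (feed_rate - 10.0)
    if feed_rate > 14.0:
        yield_pct -= 10.0 * (feed_rate - 14.0)
    product = feed_rate * yield_pct / 100.0
    return -(price * product - feed_cost * feed_rate)

best = minimize_scalar(negative_margin, bounds=(8.0, 18.0), method="bounded")
print(f"fitted gain = {gain:.2f} %/(t/h), tau = {tau:.1f} h, "
      f"optimal feed rate = {best.x:.1f} t/h")
```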
Summary In Beta’s effort to improve manufacturing and to achieve a level of excellence, it found several areas that needed improvement. The following conclusions are based on those areas where it has been
observed that its plants didn’t do a very good job. Beta is currently implementing processes to ensure that these areas are addressed: 1. Review of instrumentation and control systems, and their maintenance, to ensure process measurement and control capability. This will also include an assessment of QC and laboratory analysis methods for validation of process information. 2. Improved production planning methods, as previously described; integration with sales; and supply chain issues. 3. Advanced methods as appropriate to the current practices of a given plant. 4. Back to basics for all plants to ensure their operators are properly trained and employing best practices, as described as follows. Specific improvements for operators and their training and practices include: 1. Training in process basics and control limits, and using their expertise to ensure precision control of both the physical process (pressure, temperature, flow, etc.) and, in many cases at Beta, the chemical process. This is particularly true in plants that require precision process chemistry control. A similar statement could be made for Beta’s batch and discrete plants, but with different control parameters. 2. Use of control and trend charts to measure and trend all key process indicators, physical, and, as necessary, chemical. Trends should be used to anticipate and trend potential developing problems and mitigate them long before they become serious. 3. Training in statistical process control, inherent variability, inherent stability of a given process, and ability to use this knowledge to operate within the basic stable envelope of a given process. Many operators go “chasing their tails” trying to stabilize a process when it is already within its inherently
stable limits, often actually creating instabilities that would not otherwise exist.

4. Training in the basics of pump and valve operation. This knowledge must be used to operate a more stable and reliable plant. Far too often, pumps are run in a cavitation mode because operators do not understand the characteristics of a cavitating pump, what causes cavitation, proper "net positive suction head," proper discharge conditions, etc., or how to address problems causing cavitation. Likewise, valve operation is not fully understood or appreciated and must be improved.

5. Training in startup, shutdown, and other transition periods such as moving from one product or process stream to another. This is from a standpoint of ensuring that process "recipes" are followed, as well as good process control. Frequently, changeovers and transient periods will result in the most harm to equipment due to process stream carryover, cavitation of pumps, control valve chatter, "hammer" on piping and heat exchangers, running seals dry, etc. Transient control must be given particular attention to avoid equipment damage and ultimately loss of production in both process and batch plants.

6. Training in the basics of condition monitoring and trending. Control charts must be used as a form of condition monitoring, but other techniques and technologies are available that may be made part of the operator's capability. Methods such as vibration, oil, infrared, leak detection, and motor current monitoring will be used to supplement an operator's knowledge of process and equipment condition to ensure greater precision in process control.

7. Training in and providing basic care and PM of the equipment they operate. In the better plants, operators spend some 30% of their time doing so. This is similar to how most people take basic care of their cars, and only go to maintenance when they need to do so, all the while collaborating with maintenance to ensure a proper solution is put in place. Experience has shown that operators, themselves, should determine which PM and basic care they are most capable of
doing, considering training, safety, and other operating requirements.

8. Training in and selection of certain visual controls, which will facilitate their performance.

9. Routinely walking through the plant, not just for filling in log sheets—they all do that—but for seeking ways in which problems can be anticipated and resolved long before they become serious. Filling in log sheets is only the beginning, not the end. Yet in most plants, too many operators view the job as done when the log sheets are completed. Or, in the case of large process plants, they sit for endless hours looking at mimics and screens. Plant tours should be made an integral part of good plant operation.

10. Keeping instruments and equipment calibrated; electronic instruments in particular must be kept cool, clean, dry, and powered by "clean" electricity—no voltage or current surges, no spikes, no harmonics, etc. It has been said that process control effectiveness deteriorates some 50% after 6 months of operation without maintenance.

11. Ensuring measurement and control capability, as well as laboratory analysis capability.

12. Finally, and perhaps most importantly, operators must be conditioned to take ownership and pride in their operation. Beta has found that its operators were reluctant to do so with the company focused incessantly on cost-cutting. Beta is now taking the position that a greater degree of success will be achieved if it sets a few key goals related to uptime, unit cost of production, and safety, and then provides the freedom, training, and encouragement for the operations, engineering, and maintenance staff to achieve those key objectives. Beta has shifted focus from cost-cutting, per se, to manufacturing excellence, expecting that the costs will come down as a consequence of best practice.

Beta's plants used a self-audit technique to measure their operating performance, and they rated themselves at just over a 50% level. This performance is about average, and fairly typical of most manufacturers. Beta is currently working intensively to improve its performance in these areas.
In summary, Beta’s operational practices must be improved to ensure precision process control; routine use of control charts for key process parameters; methods for ensuring consistency of process; and, perhaps most importantly, to encourage a sense of ownership and basic care, every day, in all its operating plants. Beta must also put in place a measurement system for ensuring that all its people understand uptime, OEE, and asset utilization rates as key performance indicators. Beta must measure every hour of lost production and the reason why, and use those measures to drive the improvement process. Relatively few plants do the basics well. Beta is working hard to make sure it is one of those few who do.
References

1. Aiken, C., and Kinsey, R. "Preventive/Predictive Maintenance: The Secret to World-Class Productivity," Conference Proceedings, Association for Facilities Engineering, Las Vegas, NV, October 1997.
2. Cutler, C., and Finlayson, S. "Considerations in Applying Large Multivariable Controllers," Instrumentation Society of America, New York, Annual Conference, October 1991.
3. Cutler, C., and Perry, R. "Real Time Optimization with Multivariable Control," Computers and Chemical Engineering, Vol. 7, No. 5, pp. 663–667, 1983.
4. McNeese, W., and Klein, R. Statistical Methods for the Process Industries, Marcel Dekker, New York, January 1991.
5. Shunta, J. P. Achieving World-Class Manufacturing through Process Control, Prentice Hall, Englewood Cliffs, NJ, August 1994.
6. Gordon, I. Internal Management Presentation, Silcar, Ltd., Melbourne, Australia.
7. "Effective Shift Handover—A Literature Review," Health & Safety Executive, June 1996.
8. Grove, A. S. High Output Management, 2nd ed., Vintage Books, New York, 1995.
9. Williams, S. Discussion, Aspen Technology, circa June 1997.
Maintenance Practices
9
So long as they are focused on events, they are doomed to reactiveness. Generative learning cannot be sustained in an organization where event thinking predominates.
Peter Senge

According to an old story, a lord of ancient China once asked his physician, a member of a family of healers, which of them was the most skilled in the art. The physician, whose reputation was such that his name became synonymous with medical science in China, replied, "My eldest brother sees the spirit of sickness and removes it before it takes shape, so his name does not get out of the house. My elder brother cures sickness when it is still extremely minute, so his name does not get out of the neighborhood. As for me, I puncture veins, prescribe potions, and massage skin, so from time to time my name gets out and is heard among the lords."

A Ming dynasty critic writes of this little tale of the physician: "What is essential for leaders, generals, and ministers in running countries and governing armies is no more than this."1
Going from Reactive to Proactive
It could be added that what is essential for plant managers, vice-presidents of manufacturing, and CEOs is no more than this. Most
manufacturing plants are operated in a highly reactive mode—routine changes in the production schedule, routine downtime for changeovers and unplanned maintenance, etc., often resulting in a need to “. . . puncture veins, prescribe potions, and massage skin, so from time to time my name gets out and is heard among the lords,” rather than being very proactive and ensuring that one “. . . sees the spirit of sickness and removes it before it takes shape. . . .” As in ancient times, there’s more glory today in fixing things after they become a problem than there is in stopping them from happening in the first place. How much we change and yet stay the same. And yet we must change if we are going to compete in a global economy. This chapter outlines maintenance and reliability best practices. At Beta, like most companies, executives historically viewed maintenance as a repair function, a necessary evil, an unnecessary cost, a fire-fighting function, a group of grease monkeys and knuckle draggers (unspoken attitude), or any combination of these and other unflattering terms. This was an unenlightened view, because few had an appreciation for the value a world-class maintenance function could provide to their company. Because of this, a good deal of education was necessary for Beta executives to understand this issue. Hence, the need for a fairly large body of information. We can only hope that you, like many of the Beta executives, will take these issues to heart and intensively implement the practices discussed in the following text. From a 1992 study,2 and more recently in 1997 and 2000 (detailed in Appendix A), average US manufacturers were reported to have the levels of maintenance practices in their manufacturing plants that are shown in Table 9-1. Table 9-1
Comparative Maintenance Practice: 1992–2000.

Maintenance Practices       1992    1997    2000
Reactive Maintenance         53%     49%     55%
Preventive Maintenance       25%     27%     31%
Predictive Maintenance       15%     14%     12%
Proactive Maintenance         7%      9%      na
Other Maintenance             na      na      2%
In these studies, participants (70 plants in the 1992 study and nearly 300 plants in the 1997 study) were asked, “What percentage of your maintenance is driven by the following type of behavior?”: 1. Reactive maintenance—characterized by practices such as run to fail, breakdown, and emergency maintenance. Its common characteristics are that it is unplanned and urgent. A more stringent view of reactive maintenance is work you didn’t plan to do on a Monday morning, but had to do before the next Monday morning. 2. Preventive maintenance (time based)—characterized by practices that are periodic and prescribed. Examples are annual overhauls, quarterly calibrations, monthly lubrication, and weekly inspections. 3. Predictive maintenance (condition based)—characterized by practices that are based on equipment condition. Examples include changing a bearing long before it fails based on vibration analysis; changing lubricant based on oil analysis showing excess wear particles; replacing steam traps based on ultrasonic analysis; cleaning a heat exchanger based on pressure-drop readings; replacing a cutting tool based on deterioration in product quality, etc. 4. Proactive maintenance (root cause based)—characterized by practices that focus on eliminating the root cause of the maintenance requirement or that seek to extend equipment life, mitigating the need for maintenance. These practices use the maintenance knowledge base about what was going wrong with equipment to make changes in the design, operation, or maintenance practices, or some combination, and seek to eliminate the root cause of problems. Specific examples might include root cause failure analysis, improved design, precision alignment and balancing of machinery, equipment installation commissioning, improved vendor specifications, and better operational practices to eliminate the cause of equipment failures. What is striking about the three sets of data (1992, 1997, and 2000) is their similarity. After years of effort defining benchmarks and best practices, and creating a reliability model for manufactur-
ing plants to follow, the data are essentially the same. As the saying goes, “The more things change, the more they stay the same.” These typical plant maintenance practices were then compared with so-called benchmark plants. These benchmark plants were characterized as those that had achieved extraordinary levels of improvement and/or performance in their operation. For example, a power plant increased plant availability from 50% to 92%; a fibers plant increased uptime by 40%, while cutting maintenance costs in half; a primary metal plant increased capacity by some 30%, while reducing labor levels by 20%; a motor manufacturer increased OEE from 60% to over 80%, reducing unit costs of production commensurately; a paint plant doubled production output; and so on. The striking differences between benchmark plants, as compared with typical plants, are shown in Figures 9-1 and 9-2.2 1. Reactive maintenance levels differed dramatically. The typical plant incurred some 50%+ reactive maintenance, while the benchmark plants have 10% or less. 2. The benchmark plants had a heavy component of predictive, or condition-based, maintenance practices, and have exceptionally good discipline in planning and scheduling most all their maintenance work, including PM compliance, and then using their knowledge of equipment condition as the driver for the work that actually gets done. The dotted line in Figure 9-2 indicates that, depending on the definition of preventive, predictive, and proactive maintenance, each individual level can vary significantly. Similar conclusions were reached by Ricketts,3 who states that the best oil refineries were characterized by, among other things, the “religious pursuit of equipment condition assessment,” and that the worst were characterized by, among other things, “staffing . . . designed to accommodate rapid repair,” and failures being “expected because they are the norm.” In the same study, he also provides substantial data to support the fact that as reliability increases, maintenance costs decrease, and vice versa. A more recent study4 of some 250 small and medium manufacturers in Australia is quite consistent. It reported that companies that adopted a policy of strategic asset management showed increased
Figure 9-1. Typical vs. benchmark maintenance practices.
profits of 25% to 60%, increased productivity of 25% to 50%, reduced equipment downtime of up to 98%, and reduced maintenance costs of 30% or more. Table 9-2 provides a summary of additional key results of this review.
Figure 9-2. Benchmark maintenance practices.
In the study, the average of the plant data on maintenance practices is represented as the base case profile. It shows a typical plant having 58% reactive maintenance, 27% preventive, and the balance split approximately equally between predictive and proactive. Maintenance costs were viewed as a measure of success, and were determined as a percent of current plant replacement value (PRV) for each plant, with the average value being normalized to 100%. When comparing the typical plant maintenance practices with other plant maintenance practices, a clear trend is evident. Those plants that experienced lower reactive levels, or reciprocally higher preventive, predictive, and proactive levels, also experienced much lower maintenance costs. For example, as shown in Table 9-2, when reactive maintenance was less than 15%, maintenance costs were 44% of base case; when predictive maintenance was greater than 25%, maintenance costs were 42% of base case; and when proactive maintenance was greater than 20%, maintenance costs were 47% of base case. Note that while preventive maintenance does improve maintenance costs, it does not do so to the same degree as the other practices. As Beta found at one of its plants, discussed in Chapter 12, increasing preventive maintenance was accompanied by an increase in reactive maintenance when maintenance practices did not include predictive and proactive methods. As one might expect, it was also found that plants with higher rates of proactive or predictive maintenance were also balancing the other practices for improved effect. For example, those plants with more than 25% predictive also had less than 25% reactive, and were applying the balance of effort to preventive and proactive maintenance, reflecting a balanced, reliability-driven approach to maintenance in their plants.

Table 9-2
Comparative Results of Maintenance Practices and Maintenance Costs.

Base Case Profile: Reactive 58%, Preventive 27%, Predictive 7%, Proactive 8%; maintenance costs (measured as a % of PRV) normalized to 100%.

Effects of Various Practices (maintenance costs relative to the 100% base case):
Reactive      <30%: 78%     <15%: 44%
Preventive    >50%: 72%     >60%: 59%
Predictive    >10%: 75%     >25%: 42%
Proactive     >10%: 58%     >20%: 47%

The study also reported that those plants with specific policies for maintenance, including plans and goals for equipment reliability and maintenance performance, also experienced lower maintenance costs than the group as a whole. None of this should be surprising. As Dr. David Ormandy once said, "Do all the little things right, and the big things don't happen." Beta had similarly gone through a review of its manufacturing practices, with particular emphasis on its maintenance practices, and had concluded that in spite of pockets of excellence in some areas, all in all it was among the mediocre in its performance in essentially all areas, including maintenance, or, as one of its executives proclaimed, "Beta is thoroughly average."
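One hedged way to use Table 9-2 for planning is sketched below: it simply applies the study's cost multipliers to a plant's current maintenance spend to get a rough sense of the potential. The plant replacement value and current spend are assumed figures, and the multipliers describe the study's sample, not a guarantee for any particular plant.

    # Rough planning estimate using the multipliers quoted in the text above:
    # a plant at the base case profile is normalized to 100% of its current
    # spend; the practice levels below carried lower costs in the study.
    # PRV and current spend are hypothetical numbers for illustration.
    prv = 400_000_000            # plant replacement value, USD (assumed)
    current_spend = 14_000_000   # annual maintenance spend today (assumed)

    cost_multipliers = {
        "reactive < 15%":   0.44,   # from the study, as quoted in the text
        "predictive > 25%": 0.42,
        "proactive > 20%":  0.47,
    }

    print(f"current spend: ${current_spend:,.0f} ({current_spend / prv:.1%} of PRV)")
    for practice, multiplier in cost_multipliers.items():
        projected = current_spend * multiplier
        print(f"{practice:18s} -> ${projected:,.0f} ({projected / prv:.1%} of PRV)")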
Worst Practices
Reactive maintenance at levels beyond about 20% to 30% should normally be considered a practice to avoid in most manufacturing plants, or a "worst practice." Reactive maintenance tends to cost more (routinely twice as much as planned maintenance5) and leads to longer periods of downtime. In general, this is due to the ancillary damage that often results when machinery runs to failure; the frequent need for overtime; the application of extraordinary resources to "get it back on line," NOW; the frequent need to search for spares; the need for expedited (air freight) delivery of spares; etc. Further, in a reactive mode, the downtime period is often extended for these very same reasons. Moreover, in the rush to return the plant to production, many times no substantive effort is made to verify equipment condition at startup. The principal criterion is often the fact that it is capable of making product. Hudachek and Dodd6 report that reactive maintenance practices for general rotating machinery cost some 30% more than preventive maintenance practices, and 100% more than predictive maintenance practices. These results are consistent with those reported for motors.7 Most Beta plants experience reactive maintenance levels of nearly 50%, and sometimes more. There should be little wonder why maintenance costs for Beta are well above world class, and in some cases even worse than average.
Best Practices
Plants that employ best practices, that operate at benchmark levels of performance, have a strong reliability culture; those that are mediocre and worse have a repair culture. At Beta, this was true—the plants that had the best performance held the view that maintenance is a reliability function, not a repair function. Of course, they understood that repairs were often necessary, but these repairs were done with great care, precision, and craftsmanship, and with a view that the equipment must be "fixed forever" as opposed to "forever fixing," building reliability into the equipment. These plants also understood that the reciprocal (that reliability is simply a maintenance function) was not true. Reliability required doing everything right, in design, purchasing, stores, installation, operations, and, of course, maintenance. Unfortunately, most of Beta's plants viewed maintenance as a rapid repair function.
Beta’s Repair Culture In Beta’s repair culture, the maintenance department people were viewed as those to call when things broke. They became very good at crisis management and emergency repairs; they often had the better craft labor—after all, the crafts were called upon to perform miracles, but they could never rise to the level of performance achieved in a reliability culture. They often even viewed themselves as secondclass employees, because they were viewed and treated as “grease monkeys,” repair mechanics, and so on. They often complained of not being able to maintain the equipment properly, only to be admonished when it did break, and then placed under extraordinary pressure to “get it back on line, now.” They were rarely allowed the time, or encouraged, to investigate the root cause of a particular problem and eliminate it. Seeking new technologies and methods for improving reliability was frequently not supported—“It’s not in the budget” was a familiar refrain. The times when they were allowed to investigate the root cause were a kind of “reactive proactive”—the problem had become so severe that it had reached a crisis stage. They were eager, sometimes desperate, to contribute more than a repair job, but were placed in an environment where it often just was not possible. They were doomed to repeat the bad practices and history of the past, until the company could no longer afford them
or stay in business. The good news is that this is now changing at Beta, which has a newfound resolve toward creating and sustaining a reliability culture. In a reliability culture, reliability is the watchword. No failures is the mantra. Equipment failures are viewed as failures in Beta’s processes and systems that allowed the failure to occur in the first place, not failures in the equipment or in the employees. In a reliability culture, preventive, predictive, and proactive maintenance practices are blended into an integral strategy. Condition monitoring is pursued, as Ricketts put it, “religiously,” and to the maximum extent possible maintenance is performed based on condition. Subsequently, condition diagnostics are used to analyze the root cause of failures, and methods are sought to avoid the failure in the future.
Maintenance Practices
Before we explore specific maintenance practices that ensure plant reliability, however, we need to consider the equipment failure patterns shown in Figures 9-3 and 9-4,2,8,9 which profile the conditional probability of failure as a function of age for plant equipment. Referring to Figure 9-3, preventive maintenance, or PM (interval based), often takes the form of periodically overhauling, repairing,
Figure 9-3. Classical failure profile used in preventive maintenance.
or otherwise taking equipment apart, replacing certain parts, and reassembling. It assumes that a given set of equipment will experience a few random, constant failures, but after a time will enter a period where the conditional probability of failure rises sharply (the wear-out zone). Therefore, the invasive PM (overhaul, repair, etc.) should be done just prior to entering the wear-out zone. There are several problems with using this failure profile as the basis for routine invasive maintenance. First, and perhaps foremost, most equipment does not fit this profile. Referring to Figure 9-4, typically only about 1% to 2% of equipment fits this profile. While the US Navy reported that some 17% fit this profile, most of that was related to sea water corrosion, which had a definitive age failure pattern. Perhaps more importantly, these data are from the nuclear submarine fleet, which dictates an exceptionally high degree of consistency in operations and maintenance practices. For those who have been in the nuclear navy, you understand that you will follow procedures, or bad things will happen. The combination of an age or wear-related failure mode with a high degree of consistency in operating and maintenance practices results in a higher percentage of failures that fit the classical failure profile used for invasive preventive maintenance. But, it's still only 17%.
Figure 9-4. Age-related failure curves.
Further, most plants do not have the data or equipment histories that allow determination of the conditional probability of failure; or if they do, the data look something like Figure 9-5,10 which happens to be for 30 bearings that were run to failure. These bearings were from the same manufacturer's lot, installed with the same procedures, operated under the same loads, and run to failure, and yet experienced very high variability in failure period. Recognize that these data could represent any type of equipment for which histories have been collected, e.g., heat exchangers, electronic boards, etc. In any event, it should be obvious that using data like these is relatively meaningless when trying to determine PM intervals for changing bearings. For example, using mean time between failure (MTBF) ensures both overmaintaining, for those that have considerable life remaining beyond their MTBF, and undermaintaining, for those that fail before the PM interval is ever reached. Only 5% to 10% of the bearings in Figure 9-5 could be considered as failing at an "average" interval. Hence, the data are not very useful for determining PM intervals for bearing changes, for example, in pumps, fans, compressors, etc. However, such data can be very useful for other purposes. For example:
Figure 9-5. Bearing failure component histogram.
1. These data could be used to facilitate root cause failure analysis and/or to recognize patterns of failure in clusters for a given set of equipment. 2. The data could be used to determine intervals for predictive maintenance such as vibration monitoring. For example, if we replotted the data from lowest to highest, what we would find would be a constant rate of failure at about every 10 units of time. If the decision were made to try to avoid all catastrophic failures, we could monitor the bearings at something like one-half or one-fourth of the failure rate, and thus ensure that we identify pending failures well before they become catastrophic failures, allowing us time to plan the maintenance requirement and minimize production losses and maintenance costs. Note that this assumes perfection in the vibration analysis effort, something that is problematic. The best vibrations analysis programs are reported to have a 90% to 95% success rate for fault detection. 3. These data could be used to begin a discussion with the supplier, which might ask questions about your design, specification, storage, and installation practices. What did you specify for this application? Do you use proper storage and handling procedures? Do you use an induction heater and/or oil bath for installation? Do you use a clean room for pump
overhaul? Do you use proper lubricants and lubricating procedures? Do you do precision alignment and balancing of the equipment? This could be very helpful in eliminating the root cause of the failure. 4. These data could be used to begin a discussion with operations or design to determine if operational or design issues such as cavitation, excess vibration, process conditions, poor shaft design, etc., are creating the poor performance. And so on. These data serve as a basis for determining whether PM does apply, and, if not, as a basis for further investigation and problem resolution to get to the root cause of the failures, and eliminate them. Further, if we replot the histogram data from Figure 9-5 in order of increasing life, from lowest to highest, what we’ll see is a relatively constant failure rate, though at two different rates, as shown in Figure 9-6. Given a relatively constant rate of failure, this suggests using a condition monitoring technology or method that is capable of detecting the failure mode well in advance of system functional failure. Once we detect onset of failure, we use that knowledge to plan and manage the need for maintenance, including integrating our
Figure 9-6. Bearing failure component histogram, redrawn.
Figure 9-7. Understand the degradation process; avoid/minimize the consequence of failure.
plans with production planning. Note also that the frequency of application of the condition monitoring technology or method is typically higher than the failure rate. Along the same line, most people have come to accept intuitively the "bathtub" failure pattern, that is, equipment tends to have some early life failures, some constant rate of failure during its life, and eventually the equipment just wears out. This is shown in Figure 9-4. However, as the data indicate, this only applies to some 3% to 4% of equipment. Figure 9-4 shows that depending on the study, "infant mortality" or early-life failures represent some 29% to 68% of failures. This typically means within 60 to 90 days of startup. This has profound implications relative to a given plant's maintenance strategy. PMs make no allowance for these infant mortality failures. Indeed, these failures are more properly eliminated through better design, procurement, installation, commissioning, startup, and operational practices, not PM. Reinforcing this point is the following: 1. The Nordic paper industry, which routinely replaced bearings on a PM basis, found that on actually inspecting the bearings, only 5% could have led to catastrophic failure; 30% had
nothing wrong; and 65% had only minor defects due to poor lubrication, alignment, balancing, or installation.11 2. The US Navy found that some 10% to 20% of all maintenance work at one of its facilities required some type of rework, suggesting a high probability of introducing defects when doing time-based PM on equipment.12 3. Beta's Maytown plant found that when it increased PM, without applying predictive and proactive methods, reactive maintenance also increased. See Chapter 12. Figure 9-4 also shows that the next biggest age-related failure pattern is constant and, depending on the study, makes up 14% to 42% of failures. This also has profound implications for your maintenance strategy. Note that this constant failure rate is similar to the data shown in Figure 9-5, that is, a random failure pattern in which failures are occurring at a relatively constant rate. In situations where you have a random failure pattern, the best strategy is to ensure that you have good condition monitoring in place to detect onset of failure long before it becomes serious, allowing for planning and scheduling of the maintenance requirement. The type of condition monitoring recommended is not just the standard predictive maintenance technologies, such as vibration, oil, and infrared. It also includes process condition monitoring and the integration of the two to ensure minimal opportunity for failures. This same high-quality condition monitoring can also be used effectively during installation and startup to ensure the quality of the maintenance and startup effort and to minimize infant mortality failures. Perhaps this concept is best shown in Figure 9-7. As the figure illustrates, a particular set of equipment is being monitored using technology consistent with its ability to detect known failure modes. At some time, we detect a potential failure, a developing problem, but the system is still capable of meeting all its functional requirements. As the system continues to deteriorate, we will ultimately reach a point where the system is no longer capable of meeting its functional requirements, at which point we have a functional failure. Perhaps we're not able to run at normal rate, we're experiencing quality problems, or some other problem. We're still running, but not as well as we should, and, more importantly, we're experiencing performance losses. If we continue to run, we'll eventually break down, an event we should try to avoid, particularly for critical systems. The time
period between the potential failure and the functional failure is our P-F interval, or the period of time during which we should be doing our maintenance in order to avoid or minimize the consequence of failure to our production line or operating system. When combined, infant mortality and constant rate failure patterns make up some 71% to 82% of all failures. Developing and applying a maintenance strategy that specifically addresses these failure patterns will have a profoundly positive effect on your maintenance and operational performance. Ensuring that these failures are eliminated or mitigated is essential to ensure world-class maintenance. Finally, looking at Figure 9-4, note that for some 80% to 90% of the failure patterns, there is no “end-of-life” or “wear-out zone.” Equipment following these patterns could theoretically last forever, presuming you could get spare parts to accommodate their constant failure rate. Beta’s engineers often underestimated or misunderstood the effects just described. Design, procurement, installation, startup, and operating practices can have an enormous effect on equipment reliability and consequent uptime and costs. Many of these effects can be prevented or mitigated by specifying more reliable equipment and requiring the use of nonintrusive testing, i.e., commissioning to specific standards discussed previously. These specifications must be verified by performing postinstallation checks at a minimum. For major equipment, the vendor should test the equipment prior to shipment and provide appropriate documentation. The equipment should be retested following installation to ensure the equipment was properly installed and not damaged during shipment. And, as always, better operational practices will minimize equipment failures.
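The sketch below illustrates the interval logic just described, assuming a P-F interval has been estimated for each failure mode and that inspecting at some fraction of that interval gives at least two chances to catch a developing failure before it becomes a functional failure. The equipment list, the technologies assigned, and the P-F estimates are hypothetical, not Beta's data.

    # Sketch: choose a condition monitoring interval from an estimated P-F
    # interval (time from detectable potential failure to functional failure).
    # Inspecting at P-F / 2 gives roughly two looks inside that window.
    # Assets, technologies, and P-F estimates below are invented for illustration.
    pf_intervals_weeks = {
        ("feed pump bearing", "vibration analysis"): 8,
        ("gearbox", "oil analysis"): 12,
        ("motor control center", "infrared thermography"): 26,
    }

    looks_inside_window = 2   # a common rule of thumb; increase for critical systems

    for (asset, technology), pf_weeks in pf_intervals_weeks.items():
        interval = pf_weeks / looks_inside_window
        print(f"{asset:22s} via {technology:22s}: "
              f"P-F ~{pf_weeks} weeks -> inspect about every {interval:.0f} weeks")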
Preventive Maintenance
As data from numerous studies and substantial anecdotal evidence suggest, a program dominated by a PM, or time-based, maintenance strategy is not likely to provide for world-class maintenance and hence manufacturing performance. Given this is the case, when should preventive maintenance be used? Best practice for preventive maintenance should allow for time-based maintenance, but only when the time periods are justifiable. For example, appropriate PM might be used:
1. When statistical data demonstrate that equipment fails in a generally repeatable wear-related mode. Note that this “rule” also applies to vendors and their PM and spare parts recommendations—it must be backed by statistical data, not just arbitrary proposals. 2. For routine inspections, including regular condition monitoring. 3. For basic care and minor PM efforts, such as filter changes, lubrication, minor adjustments, cleaning, etc. 4. For instrument calibrations. 5. For meeting regulatory-driven requirements. 6. For other purposes based on sound judgment or direction. Beyond the routine PM outlined, the preventive maintenance function should be used as an analysis, planning, and scheduling function. This will almost always include a comprehensive computerized maintenance management system (CMMS). Using this as a tool, the maintenance function can establish and analyze equipment histories, perform historical cost analyses, and use these cost and equipment histories to perform Pareto analyses to determine where resources should be allocated for maximum reliability. The specific functions for a maintenance planner are:13
- Establish work order process and standards
  - Unique equipment IDs—ease of capturing repair frequency, nature, cause, correction, result of repairs
  - General requirements, including permits
- Issue work orders, plus any instructions, work plans, and sequence of activities
- Establish and analyze equipment histories
- Pareto and cost analysis
- PM selection, scheduling, and optimization
- Maintenance planning and coordination:
  - With maintenance supervisor and trades
  - With stores: parts, tools, special equipment
  - With production, including job status
  - With projects, including contractors required
- Keep current documentation, drawings, procedures, permit process, etc.
- Keep current equipment bill of material
- Publish schedules, periodic reports
- Intensive coordination during shutdowns
- Coordinate closely with reliability engineering
Preventive maintenance requires good maintenance planning, ensuring coordination with production, and that spare parts and tools are available, that permits have been issued if necessary, and that stores inventories and use histories are routinely reviewed to minimize stores, maximizing the probability of equipment uptime. Beta also found that a lock out/tag out pocket guide14 for its skilled trades, as well as solid training in procedures, helped reduce the risk of injury during maintenance. In summary, best practice in preventive maintenance includes: 1. Strong statistical base for those PMs done on a time interval 2. Exceptional planning and scheduling capability 3. Maintenance planning and production planning being viewed as the same plan 4. Solid equipment history analysis capability 5. Strong and flexible cost analysis capability 6. Comprehensive link to stores and parts use histories 7. Comprehensive training in the methods and technologies required for success 8. Comprehensive link between maintenance planning and scheduling, condition monitoring, and proactive methods Best practice for use of the CMMS supports those methods and includes: 1. Good work order management 2. Routine planning and scheduling of work (>90%)
3. Equipment management, including cost/repair histories, bills of material, spares 4. Pareto analysis of cost and repair histories 5. Quality purchasing and stores management interface with purchasing 6. Good document control 7. Routine use of personnel and resource allocation Additional detail on the comprehensive implementation of a computerized maintenance management system is provided in Chapter 11. When one of Beta’s vice-presidents for manufacturing was presented with a plan for purchasing a computerized maintenance management system, the vice-president asked, “What’s the return on investment for this system?” To his question the reply was, “It doesn’t matter,” surprising this vice-president. He was then quickly asked, “Does the company have a finance and accounting system to manage its cash and other financial assets?” To this he replied, “Of course, it does. The company couldn’t be run effectively without it.” And then another question to him, “What’s the return on investment for that system?” Then there was a pause, but no response. And then the statement was made to him, “Your division has several hundred million dollars in fixed assets. Do those assets not require a system to ensure their proper management?” The system was approved, without any return on investment analysis.
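As a small illustration of the Pareto analysis of cost and repair histories mentioned above, the sketch below ranks equipment by maintenance cost from a handful of invented work-order records. A real CMMS export would carry many more fields, but the arithmetic that identifies the "vital few" bad actors is the same.

    # Minimal Pareto sketch over a CMMS-style work-order export: rank equipment
    # by maintenance cost and show each item's share and the cumulative share.
    # The equipment tags and costs are invented for illustration only.
    from collections import defaultdict

    work_orders = [
        {"equipment": "P-101 feed pump",  "cost": 42_000},
        {"equipment": "C-201 compressor", "cost": 18_500},
        {"equipment": "P-101 feed pump",  "cost": 11_000},
        {"equipment": "E-305 exchanger",  "cost": 7_200},
        {"equipment": "C-201 compressor", "cost": 28_000},
        {"equipment": "V-110 valve bank", "cost": 3_400},
    ]

    cost_by_equipment = defaultdict(float)
    for wo in work_orders:
        cost_by_equipment[wo["equipment"]] += wo["cost"]

    total = sum(cost_by_equipment.values())
    running = 0.0
    for equipment, cost in sorted(cost_by_equipment.items(), key=lambda kv: kv[1], reverse=True):
        running += cost
        print(f"{equipment:18s} ${cost:>9,.0f}  {cost / total:5.1%}  cumulative {running / total:5.1%}")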
Predictive Maintenance
This is an essential part of a good manufacturing reliability program. Knowing equipment and machinery condition, through the application of vibration, oil, infrared, ultrasonic, motor current, and perhaps most importantly, process trending technologies, drives world-class reliability practices, and ensures maximum reliability and uptime. It is generally best if all the traditional predictive technologies are in a single department, allowing for synergism and routine, informal communication for teamwork focused on maximizing reliability. For example, knowing that a bearing is going into a failure mode weeks before probable failure allows maintenance to be planned and orderly—tools, parts, and personnel made available to
perform the work in a planned shutdown. Moreover, it also allows a more accurate diagnosis of the cause of the failure so that action can be taken to prevent the failure from occurring in the future. Condition monitoring allows overhauls to be planned more effectively, because it is known what repairs will be needed. Overhauls can be done for what is necessary, but only what is necessary. Therefore, they take less time—much of the work is done before shutdown; and only what is necessary is done. Condition monitoring allows for checking repairs at startup to verify the machinery is in like-new condition. Condition monitoring allows better planning for spare parts needs, thus minimizing the need for excess inventory in stores. Best predictive maintenance practices include: 1. “Religious” pursuit of equipment condition assessment, including linking to process condition and control charts. 2. Application of all appropriate and cost-effective technologies. 3. Avoiding catastrophic failures and unplanned downtime by knowing of problems in equipment long before they become an emergency. 4. Diagnosing the root cause of problems and seeking to eliminate the cause. 5. Defining what maintenance jobs need to be done, and when they need to be done—no more, no less. 6. Planning overhaul work more effectively, and doing as much of the work as possible before shutdown. 7. Setting commissioning standards and practices to verify equipment is in like-new condition during startup. 8. Minimizing parts inventory through knowledge of equipment and machine condition and therefore the effective planning of spare parts needs. 9. Comprehensive communications link to maintenance planning, equipment histories, stores, etc., for more effective teamwork. 10. Comprehensive training in the methods and technologies required for success.
11. An attitude that failures are failures of the processes in place, not the equipment or the people. 12. Continuously seeking ways to improve reliability and improve equipment performance. One of Beta's plants illustrates the value and potential conflict in applying predictive maintenance. Beta, like most manufacturers, has several large air compressors at each plant. These compressors are typically on an annual or biannual PM, which often includes an overhaul schedule by the vendor. Accordingly, the PM/overhaul plan and schedule routinely include this PM effort. As Beta was approaching the time for this work, several routine tests were performed that were a part of its new predictive maintenance program for doing condition monitoring. Using routine condition monitoring, the maintenance manager found that just 2 weeks before the scheduled overhaul: 1. All vibration levels in all frequency bands and at all fault frequencies were below the lowest alarm level prescribed by GM V1.0-1993, as modified by plant engineering. 2. All oil analysis parameters, e.g., viscosity, viscosity index, additives, water, and wear particles were in the normal range per the vendor's original lubrication specification sheet. 3. Motor current readings indicated that in-rush current and normal operating current were within normal limits on all phases. 4. Cross-phase impedance measurements on the motor indicated inductive and resistive impedance were within 5% and 3%, respectively, for all phases, Beta's standard for this motor. 5. A recent infrared thermographer's survey indicated that the compressor had no unusual hot spots. 6. Ultrasonic leak detection has, however, found a number of leaks in the compressed air piping system. These leaks have historically been given inadequate attention because of other more pressing work, like overhauling the compressor.
Further, on checking operations logs and confirming them, Beta found that: 1. Discharge pressures, temperatures, and flow rates were all normal. 2. High moisture content is indicated in the instrument air. Apparently this too has been ignored to handle more pressing problems. 3. The next time during which the overhaul can be performed is not for another year. With all this in mind, does Beta's maintenance manager overhaul the compressor? Would you? If history is any indicator, the overhaul will be performed. Most of Beta's maintenance managers will not "take the risk." But then, where is the greater risk? Is it in overhauling the compressor, for sure spending the money and risking the introduction of defects, or is it in continuing to run the compressor, risking that the technology and monitoring may have missed a developing fault that will result in failure? Those who have confidence in their condition monitoring program will not overhaul the compressor, but they will continue to appropriately monitor to detect any onset of failure. Those who have poor monitoring capability, or lack confidence in what they have, will overhaul. There's estimated to be a 10% to 20% likelihood of introducing defects when this is done. There are risks either way. Who's right? Only time will tell, but what can be said is that the best plants use condition monitoring and inspections religiously to determine the work to be done and to verify its quality at completion. Let's consider another way of looking at this issue. Suppose your car dealer recommended that you overhaul your car engine at 100,000 miles. Let's further suppose that you've checked and there's no indication of any problem. For example, the compression is good, gas mileage is good, acceleration is good, and you haven't detected any unusual noises, smells, or vibration. Would you overhaul the engine? Not likely, is it? So why persist in doing things in our manufacturing plants that we wouldn't do with our personal machinery? Some food for thought.
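One way to frame the "where is the greater risk?" question is as a rough expected-cost comparison, sketched below. Only the 10% to 20% chance of introducing defects during an overhaul and the 90% to 95% fault-detection rate come from the text; the dollar figures and the chance that a fault is actually present despite clean readings are assumptions to be replaced with plant-specific numbers.

    # Hedged expected-cost comparison of "overhaul anyway" vs. "run on condition."
    # Every dollar figure and the fault-present probability are assumptions made
    # up for illustration; a fuller model would also include the cost of the
    # planned repair when monitoring does catch a developing fault.
    overhaul_cost = 60_000          # labor, parts, and lost production (assumed)
    failure_cost = 250_000          # consequence of an in-service failure (assumed)
    p_defect_from_overhaul = 0.15   # text: roughly 10% to 20%
    p_missed_fault = 0.075          # text: good programs detect 90% to 95% of faults
    p_fault_present = 0.10          # chance a fault exists despite clean readings (assumed)

    expected_overhaul = overhaul_cost + p_defect_from_overhaul * failure_cost
    expected_run_on_condition = p_fault_present * p_missed_fault * failure_cost

    print(f"expected cost if we overhaul anyway:   ${expected_overhaul:,.0f}")
    print(f"expected cost if we run on condition:  ${expected_run_on_condition:,.0f}")

With these illustrative numbers the calendar-based overhaul is the costlier choice, which is consistent with the practice of the best plants described above; different assumptions can of course change the answer.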
Beta’s Current Predictive Maintenance Practices An initial assessment of several of Beta’s plants revealed several issues concerning Beta’s practices. Because of regulatory requirements, Beta had very good capability, resources, and practice for nondestructive examination (NDE), or condition monitoring of stationary equipment, principally pressure vessel and piping inspection using various NDE technologies. However, consistent with its generally highly reactive maintenance organization, Beta had only limited capability and/or resources in the area of predictive maintenance (PDM) for equipment condition monitoring. Notwithstanding a few people’s clear understanding of rotating machinery and vibration analysis, the resources and expertise needed to fulfill its prospective needs throughout the company, or for that matter at a single large integrated site, was very limited—insufficient resources, technology, training, management processes, etc. Further, the application of other technologies, such as infrared thermography, motor current analysis, oil analysis, etc., was also very limited, and in some plants essentially nonexistent. Beta was well behind its competitors and well below world class in machinery condition monitoring practices. This was surprising, particularly in light of the demonstrated value of vibration analysis at one site, which had an acknowledged and published 10:1 benefit-to-cost ratio for the current vibration monitoring program. This same program was also vastly underutilized. Beta would be hard pressed to achieve and sustain manufacturing excellence without a comprehensive equipment condition monitoring program that was fully integrated with precision process control and operations input. In reviewing any number of plants, it was found that maintenance planning and scheduling alone were not sufficient to support worldclass maintenance and manufacturing—nor was condition monitoring alone sufficient. Rather, the best plants (highest uptime, lowest unit cost of production) were very proactive about using their CMMS to do equipment histories and Pareto analysis, using maintenance planning and scheduling tempered by process and equipment condition monitoring in a comprehensive way, to focus on defect elimination to achieve superior performance. All the technologies were necessary; none alone was sufficient.
Beta’s Predictive Maintenance Deployment Plan Survey of Existing Capability. Initially, Beta surveyed its existing capability in condition monitoring, both capability and application. At the same time, the survey also included actual application practices, e.g., commissioning of equipment to strict standards, routine condition monitoring and trending, root cause diagnostics of failures, etc. The results of the survey were then used to determine the prospective needs within the operating plants, to correlate those needs to operational performance and production losses, and to work with them to ensure their buy-in and support of any needs identified to their specific plants, as well as to develop a broader corporate strategy for the technologies and their implementation. Creation of Core Capability. Further, the survey was used to develop a corporate level specialist support function, which would provide support in the key predictive and proactive maintenance methods. Other companies had used this practice very effectively to help drive the reliability improvement process. For example, one company had a corporate support function with a senior specialist in each of the technologies, i.e., vibration, oil, infrared thermography, ultrasonic emission, motor current, alignment, balancing, etc. These specialists were recognized as leaders in their specialty and facilitated the implementation of the technologies throughout the company. Their actual functions included, for example: surveying a given plant to ensure that the technology applies, assessing the prospective benefit, working with the plant to set up a program—equipment, database, commissioning, trending, integration with other technologies and with process monitoring, continuing quality control and upgrade, problem solving and diagnostics, and last, but not least, training in the technology to ensure excellence in its use. The specialist did not necessarily do the program, except perhaps in a startup mode, but rather facilitated getting it done, and may from time to time help identify and use contractors for certain support functions. In other words, these specialists facilitated the review and implementation of given technologies that are considered to provide maximum benefit to a given manufacturing plant. Beta is using this model for the predictive maintenance implementation process.
Proactive Maintenance
At the best plants proactive maintenance is the ultimate step in reliability. Plants that have a strong proactive program have gone beyond routine preventive maintenance and beyond predicting when failures will occur. They aggressively seek the root cause of problems, actively communicate with other departments to understand and eliminate failures, and employ various methods for extending equipment life. Predictive maintenance is an integral part of their function, because the predictive technologies provide the diagnostic capability to understand machinery behavior and condition. Proactive maintenance is more a state of mind than a specific methodology. The following are specific concepts and techniques being used at Beta's plants. In a proactive culture the staff have realized, for example, that alignment and balancing of rotating machinery can dramatically extend machinery life and reduce failure rates. They have also learned that doing the job exceptionally well in the plant is not sufficient, and that improved reliability must also come from their suppliers. Therefore, they have supplier standards that require reliability tests and validation, and they keep equipment histories of good suppliers' equipment performance. They constantly seek to improve the way they design, buy, store, install, and operate their plants so they avoid the need for maintenance. For example, their motor specifications would probably require that most motors: (1) be balanced to less than 0.10 in./sec. vibration at one times turning speed; (2) have no more than 5% difference in cross-phase impedance at load; (3) have coplanar feet not to exceed 0.003 in. Proactive "maintenance" practices are less about maintenance than they are about eliminating the defects that result in the failures that require maintenance. As we've seen, and probably experienced, most of the defects are introduced during design, installation and startup, and operation. Examples of proactive "maintenance" that have yielded exceptional results at some of Beta's plants include: 1. Excellence in design practices, which includes consideration of life-cycle costs and in-depth shop floor input. 2. Excellence in procurement practices, which includes analysis of operating results using key suppliers, and the concept of total cost of ownership, not just initial price.
3. Excellence in installation, startup and commissioning, and shutdown practices. Small differences in craftsmanship in both operation and maintenance make big differences in the life of the equipment. Recall that up to 68% of failures occur in an infant mortality mode. Many of these could be eliminated with better installation and startup. 4. Application of root cause analysis and reliability-centered maintenance methods to eliminate defects and optimize practices. 5. Training of staff in precision and craftsmanship methods in both maintenance and operating practices. 6. Excellence in instrumentation care, e.g., keeping instruments cool, clean, dry, calibrated, and powered by high-quality power. Actual maintenance practices that are proactive include application of: 1. Precision alignment and balancing 2. Excellence in lubrication practice—clean, quality lubricants certified to ISO standards 3. Ensuring high-quality power And, of course, all this must include continuous communication with engineering, production, and purchasing to eliminate defects and ensure proactive behavior. Additional examples and details of proactive maintenance being used at Beta's plants are discussed in the following text. Precision Alignment. In reviewing research on precision alignment of rotating machinery, Beta found that Boggs reported a 12% increase in plant availability, an eightfold increase in bearing life, and a 7% reduction in maintenance costs after implementing a precision alignment program.15 Further, McCoy reported precision alignment reduced electrical energy consumption by up to 11%.16 Intuitively, this energy savings makes sense—misaligned machinery tends to vibrate at higher rates than aligned machinery. Imagine the energy it takes to "shake" a 200-horsepower motor and pump. As a practical matter, this would mean that in
most Beta plants the energy savings alone would more than pay for implementation of a precision alignment program, and, as a bonus, they would get much longer machinery life and uptime. Figure 9-8,17 which is the middle graph of a series of graphs for this type bearing, shows that for every minute of misalignment (1/60 of 1 degree) the life of that type bearing is reduced by about 5% to 10%. As a practical matter, this means that precision alignment, laser, or even reverse dial indicator, done with care, will substantially improve equipment life. Beta’s maintenance department cannot achieve that kind of precision with “a straight edge and flashlight.” It also implies that as a practical matter over a 12-in. distance, you want to hold 0.001-in. tolerance to ensure long life. It further requires a good stiff base plate, a solid foundation, minimal shims (made of stainless steel), no rust or other debris to create soft foot, and no “pipe spring” to negate the precision work being done. Some of Beta’s plants require that piping flanges be fit up “centerline to centerline” within 0.040 in. Precision and craftsmanship are essential for world-class maintenance and reliability.
Figure 9-8. Life vs. misalignment for a 309 cylindrical roller bearing: radial load = 3,530 lb (C/4)(15,800 N).
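As a feel for the trend in Figure 9-8, the sketch below treats the quoted 5% to 10% loss of bearing life per minute of misalignment as a compounding reduction. That is a simplification of the published curve, offered only to show how quickly small misalignments erode life; the 7.5% midpoint is an assumption.

    # Back-of-the-envelope reading of the misalignment discussion above:
    # treat "about 5% to 10% of life lost per minute of misalignment" as a
    # compounding reduction. A simplification for illustration, not the curve itself.
    def relative_life(misalignment_minutes, loss_per_minute=0.075):
        """Fraction of expected bearing life remaining at a given misalignment."""
        return (1.0 - loss_per_minute) ** misalignment_minutes

    for minutes in (0, 2, 5, 10, 20):
        print(f"{minutes:>3} minutes of misalignment -> "
              f"about {relative_life(minutes):.0%} of expected life")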
Further, flex couplings do not eliminate the need for precision alignment. Consider the following scenario. Let's take an 8-ft.-long perfectly round, perfectly straight shaft. Let's place a flexible coupling in the middle of this perfect shaft. Next, let's bend the shaft at the coupling (misalignment) to a 30 thousandths total indicated runout (TIR), typical as a result of poor alignment practices. Next, let's place this bent shaft in a machine train (pump and motor). Then, spin the shaft at 1,800 rpm (run the pump). Does this sound like good practice to you? Would you knowingly place a bent shaft in your automobile? In your machinery now? Why would you put one into your critical equipment? Precision alignment is critical to long equipment life. Illustrating the benefit of precision alignment is the experience of Beta's Wayland plant, where pump repairs were cut in half after it implemented a precision alignment program. See Figure 9-9. Precision Balancing. Figure 9-10,18 represents prospective increases (or decreases) in equipment life (in this case a pump) for use of a given International Organization for Standardization (ISO) grade for balancing. In this particular case differences of about 1.5 ounce-inches of imbalance on a 175-pound shaft rotating at 3,600 rpm result in years' increase, or decrease, in the life of the equipment. This is a dramatic change for such a small difference. In this same effort, Spann18 reported that his company had been ordering pumps balanced to ISO Grade 6.3, but had never checked to see what was actually being received. On checking, he found that the balance standard being received was substantially poorer than what was specified. On further investigation, he also found that ISO Grade 2.5 would best serve many purposes for his plant, all things considered. He also found that multiplane balancing of pump impellers was essential when the impeller length was greater than six times the pump diameter. Over 3 years, the plant improved output by some 50,000 tons, in large measure due to improved balancing standards and reduced vibration tolerances for rotating machinery. As a bonus, its maintenance costs also were reduced by some $2M per year. Other examples of precision making a large difference in equipment life can also be cited. For example, for every 30ºC increase in the operating temperature of mineral oil, we reduce the life of the oil by a factor of 10. Keep your lube oil coolers in good order. Or, when we accept ISO Grade 13 cleanliness in our lubricants as
Figure 9-9. Benefits of precision alignment.
opposed to ISO Grade 12, we are accepting a doubling of the wear particle concentration, and commensurate wear and reduced life of the equipment. Oil Analysis as a Proactive Tool. Check new oil, as well as unused oil, for contamination prior to use. Fitch19 and Mayo20 reported that new oil is not necessarily clean oil, having been contaminated from internal and external sources, such that its ISO grade quality for purity was well below what was acceptable for the application. Further, oil analysis can detect developing problems related to moisture contamination, wear particle, etc., long before the problem shows up as a vibration problem. In addition, minimizing excess temperature will maximize lubricant life. Studies have also shown that for mineral oil with oxidants, for every 25˚C increase in temperature above 80˚C, lubricant life is reduced by a factor of 10. Similar data could be developed for other lubricants, but as a practical matter, this means that lube oil coolers should be maintained to ensure proper cooling; bearings should not be overlubricated such that they generate excess heat in the grease; and equipment that has been rerated and is now being run hotter, longer, harder, etc., should have its lubrication requirements reviewed and upgraded as necessary to meet the new operating conditions. Indeed, lubricants should be checked and modified if necessary as a part of rerating equipment.
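The two rules of thumb above translate into very simple arithmetic, sketched below. The tenfold-per-25°C figure is applied only above the 80°C reference quoted in the text, and the doubling per ISO cleanliness code step is approximate; both are illustrations of the trend, not lubricant-specific engineering data.

    # Quick feel for the two lubrication rules of thumb quoted above.
    def relative_oil_life(operating_temp_c, reference_temp_c=80.0):
        """Life relative to operation at the reference temperature, assuming a
        tenfold reduction for every 25 degrees C above it (mineral oils)."""
        return 10 ** (-(operating_temp_c - reference_temp_c) / 25.0)

    for temp in (80, 90, 105, 130):
        print(f"{temp} C -> about {relative_oil_life(temp):.2f} x the life at 80 C")

    # Each step up in ISO cleanliness code roughly doubles the particle count,
    # so accepting ISO code 13 instead of 12 means about twice the wear debris.
    print(f"ISO 13 vs ISO 12 -> roughly {2 ** (13 - 12)} x the particle concentration")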
Figure 9-10. Benefits of precision balance.
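For readers who want to relate the ISO balance grades discussed above to an actual unbalance tolerance, the sketch below uses the standard relationship from the ISO balance-quality standards (ISO 1940-1, now ISO 21940-11); the formula is general industry practice, not taken from this book. The 175-pound, 3,600-rpm rotor matches the example in the text, and the grades shown are those Spann discussed plus one tighter grade for comparison.

    # Permissible residual unbalance from an ISO balance-quality grade, using the
    # standard relationship e_per = 9549 * G / n, with e_per in micrometres,
    # G (grade) in mm/s, and n in rpm. Formula per ISO 1940-1 / ISO 21940-11.
    OZ_IN_PER_G_MM = 1.0 / (28.3495 * 25.4)   # 1 oz-in is about 720 g*mm

    def permissible_unbalance_oz_in(grade_mm_s, rotor_mass_kg, speed_rpm):
        eccentricity_um = 9549.0 * grade_mm_s / speed_rpm   # micrometres
        unbalance_g_mm = rotor_mass_kg * eccentricity_um    # kg*um equals g*mm
        return unbalance_g_mm * OZ_IN_PER_G_MM

    rotor_mass_kg = 175 / 2.2046   # the 175 lb rotor from the text, in kilograms
    for grade in (6.3, 2.5, 1.0):
        u = permissible_unbalance_oz_in(grade, rotor_mass_kg, 3_600)
        print(f"ISO G{grade:<4}: about {u:.2f} oz-in total permissible unbalance")

The difference between G6.3 and G2.5 for this rotor works out to roughly an ounce-inch, which is the order of magnitude the text says can mean years of difference in equipment life.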
Power Quality. This is becoming an increasingly important issue. Larkins21 reported that the use of new power-saving devices and methods, such as power factor correction, improved lighting ballast, energy-efficient motors, variable-speed AC drives, etc., are all introducing the opportunity for high-frequency harmonics. He further reported that the quality of the power being received from the electric utility can also be of poor quality, with surges, sags, etc. It was recommended that a precision recorder be used from time to time to monitor a given plant’s power quality, looking for harmonics, voltage sags, interruptions, and other disruptions. Using this information, the plant can then address the problem with the power company, within its own design processes using better performance specifications for suppliers, and with suppliers of electrical equipment. Solutions include better specifications, putting the burden on suppliers to guarantee no high-frequency harmonics, isolating variable-speed drives, and derating isolation transformers. Valve Diagnostics. Recent advances in control valve and transmitter diagnostics have shown that considerable savings are available with these advances.22 For example, some 230 control valves were scheduled for overhaul, but had added the diagnostics. Of these, 37.1% only needed adjustment, 27.2% were repaired on line, 4.5% required
nothing, and only 31.2% had to be pulled and overhauled. For transmitters, 63% only required routine checks or had no problems. Root Cause Failure Analysis. This is arguably the most powerful tool for proactive maintenance and, for that matter, proactive operation and design. Numerous courses are taught by universities, companies, and consultants, so we will only do a brief overview of the methodology here. One consultant has characterized it as the “five whys,” or asking yourself why at least five times, then normally you can get to the root cause of a given problem. Note, however, that for each of the levels after the first why, there is the potential for many responses, so the process may grow geometrically. As a practical matter, however, the solution often converges fairly quickly. According to Pride,23 Nelms,24 and Gano,25 the basic steps can be characterized as follows: 1. What happened? Describe the effect. 2. Why did it happen? Describe at least two potential causes. 3. Repeat the process until the root cause is found (typically at least five times). Some useful hints for the process are: 1. Causes may not present themselves in order of occurrence, so don’t expect it. 2. The sequence of causes will likely branch, so chart the cause/ effect. Wishbone charting is also useful. 3. Each event may have many causes, but eliminating the root cause will prevent the event. 4. It may be useful to work backward from the event. 5. Document acceptances and rejections of root causes. 6. Correlate primary effect to causes. Most root causes can be boiled down to three areas—people, equipment, and procedures. Improve the people through training and leadership; the equipment through better design, operation, and maintenance practices; and the procedures; and good things will happen. If you don’t, bad things will happen. Failures are a conse-
Figure 9-11. Inspections, PM/PdM, and operator care flow into planning and scheduling.
quence of poor processes, not poor people or equipment. Proactive methods will ensure excellent processes and defect elimination. Finally, the evolution of the improvement process is shown in Figure 9-11. Many companies begin the improvement process with some 50%+ reactive maintenance, and with some preventive and predictive maintenance, which, in turn, feeds into their planned, corrective maintenance. If we are to improve, we must expand our PM and basic care (avoid the defects in the first place) and our equipment and process condition monitoring (detect those that we can't avoid so we can minimize their consequence); then feed that information into our planning and scheduling process to make us more efficient. Over time, as we expand our basic care and condition monitoring from, say, 15% to some 30% to 40%, we'll see: (1) a drop in our reactive maintenance to 10% or less, (2) an expansion of our planned/scheduled corrective work, and (3) an increase in our maintenance planning horizon from 1 week or less to 2 to 4 weeks. More importantly, our maintenance costs will come down dramatically, some 40% to 50%.
Focused Factories and Maintenance Practices: Centralized vs. Decentralized Maintenance
Some, in their zeal for focused factories, have suggested that centralized maintenance has crippled our factories' ability to perform as well as they once did; that computerized maintenance management systems (CMMS) are a burden and there's simply no need for work orders; that machines should be completely rebuilt during maintenance, replacing all parts subject to deterioration, regardless of condition; that having redundant equipment provides the best automatic backup during maintenance; and that the vast majority of factory successes center on equipment modifications and physical improvements in factory floor and office layout.26 However, Beta's experience does not support these strategies. Many methods related to focused factories are exceptionally good and have helped Beta make substantial improvements, especially at its batch and discrete plants, where focused factories seem to work best. Beta has improved asset and equipment utilization on its almost universally underutilized assets, largely by paying greater attention to every detail of operations and maintenance. However, as Harmon26 states, producing high value requires quantum improvements in equipment reliability at minimal preventive maintenance costs. In Beta's experience this quantum improvement in equipment reliability at minimum cost is only possible using the methods previously described, and is highly unlikely, if not impossible, employing the strategies Harmon suggested. Let's consider the experience at one of Beta's plants that was using the focused factory concept: Beta's Allen Central plant, a discrete parts manufacturing operation, had recently converted to small focused factories for each major product line; had decentralized maintenance; had begun employing TPM practices and Kaizen, or continuous improvement, methods; and had shown considerable overall improvement. However, at least as practiced at Allen Central, and because historically maintenance had been a rapid repair function anyway, this only "put the firefighters closer to the fire." Granted, this should allow them to reduce the duration of the fires, but it did little to eliminate the cause of the fires. Production had been rapidly ramped up to meet dramatically growing demand, and rather than improve overall equipment effectiveness, or OEE (too slow a process), Allen Central did indeed elect
to buy more equipment for automatic backup, increasing capacity, and, coincidentally, capital, operating, and maintenance costs. Neither work orders nor the computerized maintenance management system were used to any great extent. We could go on, but, in effect, the plant was using the maintenance strategy suggested by the focused factory concept. The results: Reactive maintenance was over 70% of the total, and maintenance costs were running at nearly 6% of plant replacement value, or nearly three times a nominal world-class level. OEE, when we measured it, was typically about 50% at the bottleneck process, some 10% below average, and 35% below world class, much of which was due to equipment breakdowns. After a benchmarking review and considerable effort, Beta has since begun employing OEE as a key performance indicator; has begun to employ its maintenance management system to good effect, particularly for inspections and routine PM; has begun to engage operators in minor PM and basic care; has established a condition monitoring program for critical production equipment; and has recentralized some of its maintenance function. Additional detail for the process used is provided in Chapter 13. After much discussion and review, Beta concluded that simply putting the firefighters closer to the fire would not achieve the desired results. Initially, a maintenance hierarchy was established, with the following priorities for maintenance:
1. Operator minor PM and basic care, e.g., tighten, lubricate, adjust, clean
2. Area, or focused factory, maintenance teams
3. Central maintenance support
4. Contractor maintenance support
Each affected group was trained in the relevant maintenance technologies and methods, and in equipment repair for the equipment in its factory. Central maintenance was assigned the following responsibilities to develop standards and practices for maintenance and to facilitate and support their deployment:
1. Installation and commissioning procedures.
2. Precision alignment procedures, training, and fixtures for machine tools.
3. CMMS application support—setup, support, training, equipment histories, Pareto analysis, etc. (the shop floor still did the work orders and scheduling).
4. Predictive maintenance, including working with the shop floor and operations to ensure actions were taken on problem machines.
5. Machine repair shop operation.
6. Major stores requirements; minor stores needs were still situated at the factory floor.
7. A manufacturing equipment reliability engineer.
In general, central maintenance is now responsible for facilitating the success of the focused factories and providing services that would not be cost effective if done within each of the factories. The manager of central maintenance became much more proactive about understanding the needs and requirements of each factory, meeting regularly with each of the factory maintenance managers and production managers to develop a better understanding of needs. Central maintenance became a value-adding contributor, not a cost to be avoided. Within central maintenance were senior skilled trades, who had picked up the informal title of leadmen (all were in fact men, but it wasn't clear what would happen to informal titles when a woman came into the group). These individuals had the responsibility for understanding best practice in their trades, e.g., mechanical, electrical, instrumentation, etc., and for making sure these best practices were being applied at the factory floor level by the maintenance team, including training as needed. These leadmen filled in during times of peak demand or labor shortages within a given factory, and had two or three people reporting to them who also filled in. They also helped manage the maintenance requirements for the second and third shifts, using their staff to smooth out workload requirements. Finally, one of the senior leadmen managed the machinery repair shop. Much of this work had been contracted out, but at this large Beta factory a small repair shop was still in routine use to support factory floor needs.
Beta's Allen Central plant concluded that a hybrid of centralized and decentralized maintenance was best for this plant and, more importantly, contrary to the conventional wisdom of focused factory enthusiasts, that applying a reliability strategy combining preventive, predictive, and proactive methods, in cooperation with production, would provide the quantum improvement in equipment reliability at minimum maintenance cost.
The Need for Integrating Preventive, Predictive, and Proactive Practices
At the best plants, preventive and predictive technologies are used as tools, not solutions, to help create a proactive culture, and are combined with operational, design, procurement, installation, and stores practices to ensure the success of the plant. The need to view preventive and predictive technologies as tools for becoming proactive is best illustrated by the work of Ledet27 at a large chemical manufacturing company. In the mid-1980s, this chemical manufacturer began an effort to improve its maintenance practices, and employed several methods. After several years, it decided to analyze whether or not these efforts had been successful. The analysis is summarized as follows:
1. One group of plants had evolved a strong maintenance planning and work order management culture. Few jobs were done without a work order; everything had to be planned and/or scheduled. At these plants, where PM and maintenance planning and scheduling were highly valued, a slim +0.5% to 0.8% improvement in uptime was realized on the whole. It turned out that people were so busy doing work orders and planning that they didn't have time to do the good reactive maintenance that they had previously done to put the plant back on line quickly, and hence achieved little improvement in uptime.
2. A second group of plants employed all the latest predictive technologies, almost to the exclusion of other methods. In their zeal to use these predictive technologies, they also neglected to put in place a proper maintenance planning and scheduling function, or to get buy-in from the operations
staff. At these plants a –2.4% improvement in uptime was realized—they lost uptime. Apparently, the data on equipment condition were being collected but not being used by operations or maintenance to change any practices or processes. Resources had been reallocated, but the reallocation had a negative effect.
3. A third group linked preventive maintenance, and in particular maintenance planning and scheduling, to predictive maintenance or condition monitoring. At these plants the condition of the equipment was used to adjust the PM or maintenance activity such that better use of resources was achieved. At these plants, an average of +5% higher uptime was achieved.
4. Finally, there was a small group of plants that used all the technologies, but with a very different perspective. They viewed the CMMS, maintenance planning and scheduling, and predictive technologies as tools that allowed them to be proactive, not solutions in themselves. At these plants the CMMS was used to collect equipment histories, and this information was used to do Pareto analyses and prioritize eliminating the major defects first. Maintenance planning and scheduling were balanced against equipment condition. Predictive maintenance was used for more than just trending the condition of equipment and avoiding catastrophic failures; it also included equipment commissioning and root cause diagnostics. A proactive culture was created that was facilitated by various tools to eliminate defects and get to the root cause of problems. This group achieved on average a +15% improvement in uptime.
With such data, it's clear that the technologies and methods must be integrated to ensure optimal performance. Further, Ledet also confirmed Beta's general experience and found that plants tend to move from one domain of behavior to another, as depicted in Figure 9-12.27 As depicted, the lowest domain is the "regressive domain," essentially allowing a plant or business to deteriorate to the point of being nonfunctional. Once the decision has been made (or perhaps remade) to stay in business, people become very motivated to keep the company or plant in business, responding to urgent needs and moving into the "reactive domain," keeping the operation running,
Figure 9-12. The five maintenance domains.27
often at significantly greater maintenance costs. After some time, however, it will be natural to want to improve and get out of the reactive or firefighting mode, at which point the better plants will begin to move into a mode wherein maintenance requirements can be anticipated, planned, and scheduled—the “planned domain.” Those with the best leadership and greatest sense of teamwork will eventually move from the “planned domain” to the “proactive domain,” wherein defect elimination is viewed as the key to success. Eliminating the root cause of defects yields superior performance. Finally, a very few plants will move into the “strategic domain,” wherein defect elimination is second nature and ingrained in the culture of the organization, and where learning, positive differentiation, supply chain integration, and business alignment are integral to the daily operation. As a given plant moves from one domain to the next, it does not “forget” the learning of the previous domain, but incorporates that learning and those practices into its performance in the next domain. Perhaps, more importantly, it should be pointed out that according to Ledet,27 you can get from a reactive domain to a planned domain by being maintenance led. However, if you want to get into the proactive or strategic domain, the improvement process must be
manufacturing led, using cross-functional teams with representatives from production, maintenance, and other functions.
Life Extension Program
Within its maintenance and engineering effort, Beta has also been developing an asset evaluation program, which looks at current asset condition; anticipated life; current and planned operating requirements, such as asset utilization rate; conservatism in design; anticipated capacity increases above design; etc. Also being considered in this evaluation are current and anticipated maintenance requirements—routine, overhaul, and replacement. These considerations have been used to develop the matrix shown in Table 9-3 for estimating maintenance and capital needs for major categories of equipment. This is combined with an estimate of the timing of the requirements and has a relatively large margin of error, e.g., ±40%. Refinements are made as the timing and criticality of the equipment replacement or repair comes closer. These estimates also exclude capital expansion and debottlenecking projects. Note that condition assessment is performed using predictive and NDE technologies, as well as production performance, and is critical to the ability of the team to understand current condition and implement the life extension program.

Table 9-3 Estimate of Maintenance Costs and Life Extension Requirements

Equipment Category      Normal Maintenance*   Overhaul/Refurbishment*   Replacement*
Civil
Control
Electrical
Rotating Machinery
Piping
Vessels
Specials

* Estimates to be completed on a plant-by-plant basis.
Maintenance Practices and Safety Performance
For several years now, Beta has engaged in an intensive program to improve safety performance and has had considerable success. Improved procedures, training, and, perhaps more importantly, awareness and attitude have all contributed to dramatic improvement in safety performance. However, in recent years, this improvement trend appears to have stalled. Figure 9-13 shows that one division's lost-time accident rate improved fivefold between 1988 and 1994. However, observing the period 1993 to 1997, we see that the injury rate may have in fact stabilized, with the lost-time accident rate now oscillating between 0.4 and 0.5. This is of considerable concern to Beta's management. The current view is that the improvements to date resulted from improved awareness, training, procedures, etc., but that if further improvements are to be achieved, then the current "system" must somehow change. Beta's reactive maintenance level is typically at about 50%, with substantial need to improve maintenance practices. The view is that this level of reactive maintenance is likely to expose people to greater risk of injury, and more often. This view has been substantiated at other Beta operations, where it has been observed that as reliability improves, as determined by overall equipment effectiveness (OEE) and/or uptime, the accident rate declines proportionately. This phenomenon is shown in Figure 9-14.
Figure 9-13. Safety rate/200K hours.
Figure 9-14. Correlation between overall equipment effectiveness (OEE) and injury rate.
Note in Figure 9-14 that as OEE improves, injury rate declines (improves), and, conversely, as OEE declines, injury rate increases (deteriorates). The correlation coefficient for these data is 0.8, remarkably high. Similar data have been observed at other plants, with correlation coefficients ranging from 0.6 to 0.9. What does this strongly imply? It indicates that the same behaviors and practices that improve plant operation and reliability also reduce injuries. At Beta International safety is not an option; it is a condition of employment. That being the case, and since safety and manufacturing excellence are so closely correlated, manufacturing excellence is not an option either! Colin McLean, general manager of a large refinery in the United Kingdom, said it best in 2003, after having achieved 10,000,000 hours of operation without a lost-time accident, simultaneously with a remarkable increase in production capacity and a major reduction in maintenance costs: ". . . the three are indivisible, a safe site is a reliable site is a cost-efficient site."
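For readers who wish to check this relationship at their own site, the short sketch below shows how a correlation coefficient between OEE and injury rate can be computed from paired plant observations. The figures used are invented for illustration; they are not the Beta data behind Figure 9-14.

# Illustrative sketch only: Pearson correlation between OEE and injury rate.
# The paired observations below are invented; they are not Beta's data.
from math import sqrt

oee = [62, 68, 71, 75, 80, 84]                 # overall equipment effectiveness, %
injury_rate = [2.1, 1.8, 1.7, 1.3, 1.0, 0.8]   # lost-time injuries per 200,000 hours

def pearson(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    covariance = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    spread_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    spread_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return covariance / (spread_x * spread_y)

r = pearson(oee, injury_rate)
print(f"correlation r = {r:.2f}")  # strongly negative here: higher OEE, lower injury rate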
Summary
Most maintenance requirements and costs are "preventable," but, as has been demonstrated, not by using the traditional preventive maintenance methods. For example, a study of some 15,000 work orders28 found that 69% of the maintenance costs were preventable by using better design and engineering, better construction, better operations practices, better maintenance practices, and, of course, better management practices. Indeed, better design and operations practices alone were found to make 50% of the maintenance costs preventable. This is consistent with the observations of MCP Consulting Company in its work with the Department of Trade and Industry in the United Kingdom, where it found that over 50% of maintenance costs were a result of poor design and operating practices. What Beta has found is consistent with Schuyler's observations:29 When maintenance costs are reduced with no change in maintenance strategy, mechanical availability is reduced (e.g., moving from A to B using a "fixed-interval" maintenance strategy). Under the same conditions, when mechanical availability is increased, maintenance costs increase. As shown [in Figure 9-15], the only way to simultaneously reduce maintenance costs and increase mechanical availability is to move toward more proactive maintenance strategies. Schuyler also points out that as maintenance costs increase for a given strategy, there may be a point of diminishing, or even negative, returns. That is, there may be a point at which incremental investment under a given strategy will result in lower availability. He also states that data from analyzing many of his company's plants substantiate Figure 9-15.30 Beta understands that the comprehensive application of preventive, predictive, and proactive maintenance practices in an integrated philosophy, combined with operations, engineering, purchasing, and stores practices, is essential to its success. Application of best practices is the key to the success of each of its manufacturing plants, particularly as it relates to effective application of supply chain and lean manufacturing principles. Beta has found that by learning what the best plants were achieving and how all Beta's
Figure 9-15. Equipment availability as a function of maintenance costs under various maintenance strategies.
plants compared (benchmarking), by emulating how those best plants were achieving superior results (best practices), and by adding a measure of common sense and management skill, Beta’s plants could substantially improve on their current level of performance, and, perhaps in the long term, even exceed the current benchmarks. Beta is using the knowledge of the maintenance function to ensure that it designs, buys, stores, operates, and maintains its plants for maximum reliability and uptime. Uptime and overall equipment effectiveness are improving dramatically, resulting in increased production, better asset utilization, higher profit, and, eventually, world-class performance at its plants.
References
1. Sun Tzu, The Art of War, Random House, New York, 1989.
2. Moore, R., Pardue, E., Pride, A., and Wilson, J. "Reliability-Based Maintenance—A Vision for Improving Industrial Productivity," Computational Systems, Inc., Industry Report, Knoxville, TN, 1992.
3. Ricketts, R. "Organizational Strategies for Managing Reliability," National Petroleum Refiners Association, Maintenance Conference, New Orleans, LA, May 1994.
4. De Jong, E. "Maintenance Practices in Small and Medium Sized Manufacturing Enterprises," The National Centre for Advanced Materials Technology, Monash University, Melbourne, Victoria, Australia, 1997.
5. Idhammar, C. "Preventive Maintenance, Planning, Scheduling and Control," University of Dayton, Center for Competitive Change, Dayton, OH, 1997.
6. Hudachek, R., and Dodd, V. "Progress and Payout of a Machinery Surveillance and Diagnostic Program," Mechanical Engineering, Vol. 99, No. 1, 1977.
7. "Maintenance Management, Comparative Cost of Maintenance Strategies," Modern Power Systems, July 1994.
8. Nowlan, F. "Reliability-Centered Maintenance," United Airlines, NTIS Document No. AD/A066-579, Washington, DC, December 1978.
9. Pride, A., and Wilson, J. "Reliability as a Corporate Strategy," American Institute of Plant Engineers (now Association of Facilities Engineers), Cincinnati, OH, Annual Conference, May 1997.
10. Eschmann, P. et al., Ball and Roller Bearings: Theory, Design & Application, John Wiley & Sons, New York, 1985.
11. "Early Detection of Bearing Failures Affects Reliability and Availability at Nordic Paper Plants," Reliability, May/June 1994.
12. Discussion with A. Pride, Pride Consulting, Inc., Knoxville, TN, 1994.
13. Day, J. "Maintenance Vision for the 21st Century," Reliability, Directory Issue, Knoxville, TN, 1996.
14. Lock Out/Tag Out Pocket Guide, Genium Publishing, Schenectady, NY, 1993.
15. Boggs, R. “Rotating Misalignment—The Disease, But Not the Symptom,” Technical Association of Pulp and Paper Industry, Annual Conference, Atlanta, GA, 1990.
16. McCoy, R. “Energy Savings from Precision Alignment,” internal memorandum, Computational Systems, Inc., Knoxville, TN, 1992. 17. “Life vs. Misalignment for a 309 Cylindrical Bearing,” SKF Engineering Data Handbook, Stockholm, Sweden, 1986. 18. Spann, M. “Balancing Parts—Waking to Reality,” Society for Maintenance and Reliability Professionals, Winter Newsletter, 1994. 19. Fitch, R. “Designing an In-house Oil Analysis Program,” ENTEK Corporation User Conference, Cincinnati, OH, September 1994. 20. Mayo, J., and Troyer, D. “Extending Hydraulic Component Life at Alumax of South Carolina,” Reliability, January 1995. 21. Larkins, G. “Energy Efficiency, Power Quality, and Power Systems Commissioning,” American Institute of Plant Engineers (now Association of Facilities Engineers), Fall Conference, Cincinnati, OH, 1995. 22. MARCON Conference Proceedings, Maintenance and Reliability Center, University of Tennessee, Knoxville, TN, May 1997. 23. Pride, A. “Root Cause Failures Analysis Presentation to PT Badak, Bontang, Indonesia,” Pride Consulting, Inc., Knoxville, TN, March 1996. 24. Nelms, R. “Root-Cause Failure Analysis,” Seminar, University of Dayton, Center for Competitive Change, Dayton, OH, 1997. 25. Gano, D. “Getting to the Root of the Problem,” Apollo Associates, Richland, WA, June 1997. 26. Harmon, R. Reinventing the Factory II, The Free Press, New York, 1992. 27. Ledet, W. “The Manufacturing Game,” Annual TQM Conference, Kingwood, TX, November 1994. 28. Oliverson, R. “Preventable Maintenance Costs More Than Suspected,” Maintenance Technology, September 1997.
29. Schuyler, R. “A Systems Approach to Improving Process Plant Performance,” MARCON Conference Proceedings, Maintenance and Reliability Center, University of Tennessee, Knoxville, TN, May 1997. 30. Discussion with R. Schuyler, E. I. DuPont, Wilmington, DE, January 14, 1998.
Optimizing the Preventive Maintenance Process
10
There are no solutions, only consequences.
Gilbert G. Jones

Performing preventive maintenance (PM) on an interval basis has long been recognized as a means for improving maintenance effectiveness and equipment reliability. Hudachek and Dodd1 reported that maintenance costs for rotating machinery using preventive maintenance were over 30% less than those incurred under a reactive maintenance approach. In recent years predictive maintenance using condition monitoring equipment has become more and more important to a good PM program, allowing better maintenance planning by considering equipment condition prior to actually performing maintenance work. Not surprisingly, Hudachek and Dodd also state that maintenance costs for rotating machinery using a predictive maintenance approach are nearly half those of reactive maintenance. Numerous other studies described in previous chapters, as well as many anecdotes, have provided overwhelming evidence of the benefits of viewing maintenance as a reliability function, as opposed to a repair function,2 and yet most manufacturing plants continue to operate and maintain manufacturing equipment and processes in a predominantly reactive mode. While this is discouraging, there does appear to be a general shift in the acceptance of reliability principles, with considerable effort to improve maintenance practices and therefore equipment reliability in
manufacturing plants. This is certainly true at Beta. This need for improved reliability has also received increased emphasis at many plants with the advent of OSHA regulation 29 CFR 1910.119(j), which requires a mechanical integrity program at plants that deal with hazardous materials to prevent the release of those materials. Unfortunately, since 1992, when one of the first efforts was made to characterize maintenance practices and behavior for some 70 manufacturers,3 not much has changed. That is to say, most manufacturers, including many at Beta International, continue to report operating with a typical level of reactive maintenance of nearly 50%.4,5 As someone once said, "The more we change, the more we stay the same." In any event, more and more companies, including Beta's manufacturing divisions, are purchasing and implementing computerized maintenance management systems (CMMS) for better management of their manufacturing assets and to ensure better maintenance practices and equipment reliability, including integration of predictive and proactive methods. This chapter offers a model developed for Beta for optimizing the PM process as managed by a CMMS and for incorporating predictive and proactive methods. It presumes that you have a functional CMMS. For plants that haven't reached that level of implementation, a few suggestions are offered on "getting started," including putting in place appropriate equipment histories, even if no equipment history database is available.
First Things First
Optimizing the PM process presumes that you have PMs to optimize. Therefore, you should have completed your CMMS database, including:
1. All critical equipment (anything that stops or slows production, or creates a safety hazard, if not functioning properly)
2. All appropriate PMs and related procedures, including overhaul or turnaround procedures
3. A complete equipment database, including a bill of material for spares for your critical equipment
4. A process for managing work orders, including planning and scheduling, Pareto/cost analysis, equipment-specific histories, etc. If not, or if not fully developed, you can still get started using the following process, which is similar to that outlined in Chapter 12.
Creating Equipment Histories from "Scratch"
If you don't have equipment histories, a good "jump start" can be created by sitting some of your best mechanics, electricians, operators, etc. (a cross-functional team) in a room, tracing a production block diagram on a white board, and then walking them through the production process. At each step in the process ask—What's been happening here with our equipment that has resulted in failures? How often? How long? How much downtime? How much cost? Keep good notes and keep asking enough questions to allow you to determine what equipment fails, why it fails, how long it is down, and how much it costs to get it back on line (approximations are OK for this brainstorming session). Also note that many times the problems will be related to issues other than maintenance—raw material quality, finished product quality, process problems, etc. But this is good information too, and will facilitate the team-building process. Using this information, define the PM effort for each set of equipment that will alleviate or eliminate the failures being experienced, or at least detect the onset of failure early in the failure process (condition monitoring). Alternatively, define the operating, engineering, procurement, etc., practice that will eliminate or mitigate the failure. Somewhere in this process, you should also be performing a couple of mental analyses:
1. Pareto analysis—which major issues are causing the most downtime? The most increased costs? The greatest safety hazards? (A brief sketch of such a ranking follows at the end of this section.)
2. Where are my production bottlenecks, and how have they shifted from where they were perceived to be as a result of equipment and/or process problems?
This process will allow you to better prioritize your resources to improve production capacity and reduce operating and maintenance costs, not to mention getting your production and maintenance people to begin working as part of a team with a common objective— maximum equipment reliability at a minimum cost—a kind of superordinate goal that helps transcend minor, and sometimes petty, differences. Once you get all that done and have all your PMs defined, as well as planned and scheduled maintenance built into your system, then you can begin the process for optimizing PMs. If you’re just beginning, the process will still be very helpful.
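As a purely illustrative sketch of the Pareto analysis mentioned above, the fragment below ranks hypothetical failure records, of the kind captured in such a brainstorming session, by total downtime and cost; the equipment names and numbers are invented.

# Illustrative sketch only: Pareto-ranking the "developed" history from the
# brainstorming session. Equipment names, hours, and costs are invented.
from collections import defaultdict

records = [  # (equipment, failure cause, downtime hours, repair cost in dollars)
    ("Extruder 3", "bearing failure", 12, 8000),
    ("Extruder 3", "bearing failure", 10, 7500),
    ("Packager 1", "jam / misfeed", 2, 300),
    ("Pump P-101", "seal leak", 6, 2500),
    ("Packager 1", "jam / misfeed", 3, 450),
]

downtime = defaultdict(float)
cost = defaultdict(float)
for equipment, cause, hours, dollars in records:
    issue = f"{equipment}: {cause}"
    downtime[issue] += hours
    cost[issue] += dollars

# Rank the "vital few" by downtime; cost or safety impact could be used instead.
for issue, hours in sorted(downtime.items(), key=lambda item: item[1], reverse=True):
    print(f"{issue:30s} {hours:5.1f} h   ${cost[issue]:,.0f}")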
The Model
Step 1—Set up Your Database
Categorize your PMs. The following categories are suggested, but you may have others that are more applicable in your plant. The key is to be systematic and disciplined about the optimization process.4 (See Figure 10-1.) For starting the process, the categories might be:
1. Regulatory driven or required
2. Calibration and instrument PM requirements
3. Inspection, test, and surveillance (including condition monitoring)
4. Manufacturer defined
5. Staff defined
6. Shutdown, turnaround, or outage-related PMs
7. Other PMs
Figure 10-1. The PM optimization process. (Key decision points in the figure: How often? Why? History based? Pareto analysis needed? Weibull analysis? Condition monitoring needed? Proactive methodologies needed?)
Step 2—Determine PM Necessity
Critically analyze your PMs in each category, or for a particular machine or piece of equipment. The fundamental question that must be asked is—Is the PM task necessary? (A brief sketch of this screening logic follows Step 3.)
1. Does it help me detect the onset of failure so that I can manage a developing problem (e.g., inspections, predictive maintenance, process condition monitoring, etc.)?
2. Does it help me avoid premature failure or extend equipment life (e.g., oil and filter changes, lubrication, cleaning, tightening, simple care, etc.)?
3. Is it related to a known, consistent, wear- or age-related failure mode (e.g., brushes on a DC motor failing every 10 months, periodic corrosion, etc.)?
4. Is it a statutory requirement that I must do (e.g., boiler inspections, pressure vessel inspections, etc.)?
5. Is it driven by some other mandatory requirement?
If it doesn’t fit these criteria, then it’s not likely to support lower cost, higher reliability, or improved risk and safety. It’s also likely that we could eliminate the PM, but before we reach that conclusion, let’s analyze our PM requirements further.
Step 3—Analyze Your PMs
Assuming that the PM is necessary, ask yourself:
1. How often do I need to do this PM and why? Is it based on an equipment failure/history analysis? A known failure rate? My best judgment? Regulatory requirements?
2. For inspections and condition monitoring, what is your current "hit rate"—the number of inspections resulting in detecting a problem requiring action? Is that sufficient?
3. Should the PM frequency be extended or reduced, based on histories, "hit rate," trends, Pareto/cost analysis, and experience?
4. What is the consequence of failure? For example, more serious or costly consequences dictate more frequent inspections, greater design and operational attention to detail, or other actions.
5. Could the need for certain PMs be mitigated or validated using condition monitoring or inspections to verify their need?
6. Could certain PMs be consolidated? That is, could we do several PMs on one machine/process at the same time, vs. revisiting the machine/process frequently for "single" PMs?
7. Do we need to take any proactive steps to avoid the PM, or failures, altogether, e.g., better design, procurement, installation, commissioning, startup, joint making, alignment and balancing, and so on, to support improved reliability or maintainability and extend equipment life? Define any actions needed, and next steps.
8. Could certain PMs be done by operators to avoid or detect the onset of failure? Define the process for this. Which issues or obstacles must be considered—safety, training, union, plant culture?
9. Finally, are there other failures known where PM tasks need to be added to detect or avoid those failures?
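The Step 2 screen can be reduced to a simple yes/no filter. The sketch below is illustrative only; the field names on each PM record are assumptions made for the example and do not come from any particular CMMS.

# Illustrative sketch only: the Step 2 necessity screen as a simple filter.
# The field names on each PM record are assumptions made for this example.

def pm_is_necessary(pm):
    """Keep a PM task only if it meets at least one of the Step 2 criteria."""
    return any([
        pm.get("detects_onset_of_failure", False),    # inspections, PdM, condition monitoring
        pm.get("extends_equipment_life", False),      # lubrication, cleaning, simple care
        pm.get("known_wear_or_age_failure_mode", False),
        pm.get("statutory_requirement", False),       # e.g., boiler or vessel inspections
        pm.get("other_mandatory_requirement", False),
    ])

pm_tasks = [
    {"task": "Monthly vibration route on critical pumps", "detects_onset_of_failure": True},
    {"task": "Quarterly full teardown of gearbox G-7"},  # meets none of the criteria
]

for pm in pm_tasks:
    verdict = "keep and analyze further (Step 3)" if pm_is_necessary(pm) else "candidate for elimination"
    print(f'{pm["task"]}: {verdict}')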
Step 4—Optimization
Using this analysis, begin the process of optimizing your PM activities, step by step. This process will not be easy, nor is it likely to ever be "finished"; rather, it should be viewed as a continuing process for improvement.
Some Examples
Let's take a few examples and walk through the process.
Instrument Calibration. Define the calibration requirements and intervals for each instrument. At the PM interval, track the "drift" from calibration. Using this information, determine statistically whether it is possible to increase the interval, or necessary to reduce it, while retaining proper instrument measurement performance. Adjust PM intervals as appropriate, and/or seek new and improved methods or instruments. For example, at several of Beta's plants it was found that once they started tracking calibrations, many appeared to be "overdone," and some "underdone." If an instrument calibration was checked weekly, and yet in over a year only once required recalibration or adjustment, this likely means that the calibration and inspection are being overdone. But, a word of caution, if this instrument is monitoring the reactor of a nuclear plant, then it may not be overdone. Failure consequence should also be considered. Similarly, if the instrument truly requires weekly calibration, it may not be suitable for its intended service and should be considered for replacement.
Inspection, Test, and Surveillance. Define the basis for each inspection. Analyze the histories of the equipment. (Note: Use the "developed" histories from the brainstorming if equipment histories aren't available.) From the histories, determine a more optimal PM interval. Ideally you might use Weibull analysis, which, with the right data, can be used to develop definitive PM intervals. For example, if the data are available, you might decide that you want a 97% confidence level that vibration analyses performed on a certain basis will ensure
preventing a major failure in your equipment. (Note also that this presumes that you have best practices in place for acquiring your data.) From this analysis you could set up optimal vibration analysis intervals. (A brief sketch of this calculation follows these examples.) For example, at several of Beta's plants, it was also found that after about a year of comprehensive vibration monitoring, doing vibration inspections monthly was probably more often than necessary for many machines. Most of the time, vibration levels did not change very much month to month. But these plants also had operators taking better care of the equipment and staying alert to any abrupt changes, at which point they would alert the vibration analyst to check the machine or increase the frequency of data collection. Similar analogies could be offered for other inspection PMs.
Manufacturer (and Staff) Defined. Determine when the PM was defined, and whether there have been any substantive modifications to the equipment that might impact PM practices and intervals. Update as appropriate for those modifications. Challenge the manufacturer as to the statistical basis for its recommended PM intervals, any failure modes and effects or RCM analysis, and the assumptions underlying its recommendations. Review your equipment histories and determine whether they support the manufacturer recommendations. Apply predictive and proactive methods to increase, or optimize, current intervals and practices.
PMs during Scheduled Outages or Turnarounds. Review equipment histories to understand indications of potential needs. Review equipment condition information, drawing not just on the predictive technologies but also on operator knowledge and other plant staff knowledge. Actively seek their input. Establish a proactive process for performing the PM itself. Do only what is necessary, considering the risk of waiting until the next outage (planned or unplanned).
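As a minimal sketch of the Weibull-based interval estimate mentioned above, and assuming the shape and characteristic-life values have already been fitted from equipment histories (the numbers here are invented), the calculation might look as follows.

# Illustrative sketch only: choosing an inspection interval from a fitted
# Weibull life model. The shape and characteristic life below are invented;
# in practice they would be fitted from the plant's own equipment histories.
from math import log

def interval_for_survival(shape, characteristic_life_hours, target_survival):
    """Solve R(t) = exp(-(t/eta)**beta) = target for t, the age by which no
    more than (1 - target) of the population is expected to have failed."""
    return characteristic_life_hours * (-log(target_survival)) ** (1.0 / shape)

beta_shape = 2.3      # shape > 1 suggests a wear-out failure mode
eta_hours = 9000.0    # characteristic life, hours
interval = interval_for_survival(beta_shape, eta_hours, 0.97)
print(f"Inspect at least every {interval:,.0f} hours to keep expected survival above 97%")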
Case Histories
Beta's Abner Mountain plant was a very large facility, which had five key steps in the production process. One of these steps was a precipitation process that contained 10 separate vessels. A natural consequence of the production process was scaling of the vessels used in the precipitation process. Because of this, each vessel required descaling, which was currently being performed every 30 days. The question was posed concerning the 30-day descaling schedule—Why every 30 days? It seems that the descaling took 3
days, and there were 10 vessels, so the total time to clean all the vessels was 1 month. It was a matter of convenience. Unfortunately, the vessels were unaware of the convenience issue, because they didn't operate according to the Gregorian calendar in use in the civilized world, but rather according to process conditions. On further discussion, it was found that the scale build-up could be correlated to the process. With that awareness in hand, the Abner Mountain plant began to measure scale build-up and correlate it to process condition, increasing the interval for descaling by over 10% and providing a commensurate increase in uptime, improving operating income by $3M per year. A point of interest here—Why do we schedule activities on a daily, weekly, monthly, quarterly, or annual basis? Because that's how we live our lives. Yet, as noted, the equipment in any plant does not operate with any link to the Gregorian calendar by which we live. PM activities scheduled on a 30-day basis could just as well be scheduled on a 34.72-day basis. The point is that as a practical matter we must live on a daily, weekly, etc., basis, but our equipment doesn't follow this regimen, and we must use better condition monitoring, equipment histories, understanding of process condition, and proactive methods to optimize our efforts, not simply follow the calendar on an arbitrary basis. Another point of interest—Why does plant equipment seem to fail when we're not there? If we only work 40 hours a week, and there are 168 hours in a week, then there are 128 out of 168 hours that we're not there, resulting in a 76.2% probability that we won't be in the plant when a failure occurs, but will be called in for an emergency. Another good reason for condition monitoring technology is to help make sure we know about a pending failure well in advance so we can get the equipment corrected during normal hours. Beta's Estill plant had 84 extrusion machines, each of which received a weekly PM that took some 4 hours to perform. Again, this weekly PM did seem to help improve uptime, but it was not related to equipment histories and was not backed by equipment condition monitoring. It was arbitrarily based on the resources available and their ability to maintain the equipment on that schedule. The suggestion was made to increase the interval from every 7 to every 8 days. If no significant increase in downtime occurred after a suitable time period, then increase the period to 9 days, and so on. The recommendation was also made to put in place some condition
monitoring technologies, including operator observations and control charts; to develop quick changeover techniques; and to proactively seek more reliable processes and equipment, such as precision balancing, alignment, and installation of the equipment, so that over the long term, the process could be truly optimized. Beta’s Estill plant has achieved substantial reduction in equipment downtime and continues to implement these methods. Other examples could be cited, but this should be adequate to illustrate the process.
Mechanical Integrity
As noted at the beginning of this chapter, and discussed in additional detail in reference 6, the advent of the OSHA regulations on process safety management, per 29 CFR 1910.119(j), Mechanical Integrity, has provided increased emphasis on the mechanical integrity of equipment that contains hazardous materials. Indeed, it is easy to infer from 1910.119(j) that run to failure is not an acceptable mode for maintenance practices, because run to failure carries a substantial increase in the risk of release of hazardous materials. The principal motivation for having good reliability in mechanical equipment should be to provide maximum capacity at a minimum cost. However, in many companies the reactive culture is so inherent in the behavior of people that simple economic motivators for a reliability strategy may not be sufficient, particularly when the benefit is often long term, and the pressure to "get the machine back on line" is intense. However, these regulations may represent an opportunity. If you do have hazardous materials on site, you can use that fact, combined with the OSHA Mechanical Integrity requirements, to ensure good PM practices and provide a regulatory "driver" for optimizing the PM process. Paragraph 1910.119(j) requires a mechanical integrity program, which includes requirements for written procedures defining all test and inspection requirements, frequencies, record keeping, alerting to overdue inspections, training, quality assurance standards for installation, replacement parts, vendor/supplier qualifications, etc. Making comprehensive use of a computerized maintenance management system, applying a comprehensive reliability strategy, and optimizing the PM process using the methodology previously described will help
ensure mechanical integrity in a given plant’s equipment. Mechanical integrity is a natural consequence of applying good reliability and maintenance practices. Maintenance managers should apply that rationale to their advantage.
Summary
The model provided for optimizing the PM process should be straightforward to apply, but, like most worthwhile efforts, it will take considerable time. It involves systematic use of:
1. A CMMS to manage the maintenance and reliability process, including comprehensive development of histories for analysis of your equipment failure modes, consequences, and PM requirements.
2. Condition monitoring technology and operator information to validate the need for PM, validate the quality of the work done, and detect the early onset of failure.
3. Proactive methodologies to eliminate the root cause of failures, minimize or mitigate the need for PM, and eliminate defects in equipment and production methods.
In this manner you can position your plant to design, procure, store, install, operate, and maintain reliable equipment, ensuring optimal PM practices and mechanical integrity.
References
1. Hudachek, R. J., and Dodd, V. R. "Progress and Payout of a Machinery Surveillance and Diagnostic Program," Mechanical Engineering, Vol. 99, No. 1, 1977.
2. Carey, K. "Systematic Maintenance Strategies," Institute of Industrial Engineers' 11th International Maintenance Conference, Nashville, TN, October 1994.
3. Moore, R., Pardue, F., Pride, A., and Wilson, J. "The Reliability-Based Maintenance Strategy: A Vision for Improving
Industrial Productivity,” CSI Industry Report, Knoxville, TN, 1993. 4. Moore, R. “Optimizing the PM Process,” Maintenance Technology, January 1996. 5. Kirby, K. E. “There Is Gold in Those Reliability and Maintenance (R&M) Practices,” Society for Maintenance and Reliability Professionals Newsletter, SMRP Solutions, Winter 2000. 6. Moore, R. “Reliability and Process Safety Management,” Facilities, Vol. 22, No. 5, September/October, 1995.
11
Implementing a Computerized Maintenance Management System
Profound knowledge comes from the outside, and by invitation. A system cannot know itself.
W. Edwards Deming

Most executives wouldn't dream of running their companies without an effective computerized accounting and financial management system, and yet many routinely run their production equipment without an effective computerized asset management system for the maintenance and reliability (and cost) of their equipment, often valued in the hundreds of millions.1 This chapter describes how one of Beta's divisions accomplished that task. Beta, like most medium to large manufacturing organizations, has put in place a computerized maintenance management system (CMMS) at most of its plants, with informal surveys indicating that over 90% of its plants have some form of a CMMS in place. However, a closer review of those plants and questions about the specific use of their systems revealed that most—
1. Are not using the CMMS for all appropriate equipment in their plant. For most plants some 25% to 75% of equipment is not in the database and therefore not maintained using the CMMS.
2. Are using the CMMS primarily for issuing work orders and scheduling work, but often not using it for comprehensive planning, for development of equipment repair and cost histories, for performing Pareto analyses to help prioritize resources, etc.
3. May be poorly trained in the systematic use, by all the appropriate staff, of all the capabilities of a good CMMS.
4. Don't fully apply many of the other tools available, e.g., stores and purchasing interface, resource allocation, document retrieval, etc.
Many times, someone was told to go buy a system and use it—a kind of magic elixir for the ills of the maintenance department. This type of behavior reflects a symptomatic response to maintenance issues vs. a systematic approach, fairly typical of most manufacturers. The logic seems to be "If we just do . . ., then everything will get better." You fill in the blank with the flavor of the month. It's much like treating a cough with cough drops so the cough goes away, rather than understanding that the patient has pneumonia and requires long-term rehabilitation. Few companies at the beginning of a CMMS implementation effort recognize that a CMMS is no more than a very sophisticated software shell, which is empty at the beginning—users must fill the shell with data and then train most of their staff to ensure its effective use. Beta was no exception. Another way of thinking about this would be buying a word processing program and expecting that books would thereby be more readily written. Certainly they could be, but the word processing software is just a tool to facilitate the creative process. In fact, the way many of Beta's maintenance departments were using their CMMS, they could have achieved the same result with a word processing package, which usually has a calendar to schedule and issue work orders. This is not recommended, but it serves to highlight how effectively many CMMSs are actually being used. Something is wrong with this, considering the millions that are being spent on CMMS programs. The fault lies not so much in the vendors as in the expectations and implementation processes of the users. Users must understand that a CMMS is only a shell, which must be filled with data and then used in a comprehensive way for effective maintenance
management. A CMMS won't solve equipment reliability problems any more than an accounting system will solve cash-flow problems. Both are systems that must be filled with data, data that are subsequently analyzed and used to good effect. A good CMMS is most effective when used for:
1. Work order management—scope, effort, trades, procedures, permits, schedule, etc. (more than just issuing work orders).
2. Scheduling of essentially all work, and planning major efforts, parts, tools, procedures, skills (not just scheduling).
3. Equipment management, especially cost and repair histories, and bills of material for each piece of equipment and its related spare parts.
4. Use of equipment histories for Pareto analyses, prioritization and allocation of resources, and identification of key equipment needing root cause failure analysis.
5. Purchasing and stores management for spares, "kitting," planning, and reliability improvement.
6. Document control and ready access to manuals, schematics, and procedures.
7. Resource allocation for effective reliability, production, and cost management.
Moreover, preventive maintenance in the classical sense of fixed-interval tasks is typically nonoptimal—How many of your machines are truly "average"? How often are you over- or undermaintaining using a mean-time-between-failure approach? As we've seen, the data, when available, typically suggest that less than 10% of equipment lives are near the average life for that class of equipment. As noted previously, preventive maintenance is best:
1. When used in conjunction with a strong statistical basis, e.g., low standard deviations from average for a strong wear-related failure mode, and when condition assessment is used to validate the need for the PM.
2. When condition assessment is not practical or cost effective.
3. For routine inspections and minor PMs, but even then it could be optimized.
4. For instrument calibrations, but even then it could be optimized by tracking calibration histories.
5. For some manufacturer PMs—that is, those the manufacturer can back up with definitive statistical data.
6. For some staff-defined PMs—likewise, those backed up with definitive statistical data.
7. For regulatory-driven requirements, e.g., OSHA 1910.119(j), code requirements, etc., but even then it could be optimized with histories and effective condition monitoring.
One last point before proceeding with Beta's case history—a CMMS is no more than a tool, and tools don't provide discipline; processes do! A disciplined maintenance process must be put in place for the tool to be effective, not vice versa. (A brief sketch of the kind of work order record that supports equipment histories and Pareto analyses follows.)
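As a purely illustrative sketch, and not any particular vendor's schema, the following shows the minimum work order fields that make the equipment histories, Pareto analyses, and cost tracking described above possible; every field name is an assumption made for the example.

# Illustrative sketch only: minimum work order fields that support equipment
# histories, Pareto analyses, and cost tracking. Field names are assumptions,
# not any particular CMMS vendor's schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class WorkOrder:
    equipment_id: str      # ties the work to a specific asset and its bill of material
    work_type: str         # "PM", "predictive follow-up", "corrective", "emergency"
    failure_code: str      # standardized cause code, needed later for Pareto analysis
    downtime_hours: float
    labor_hours: float
    parts_cost: float
    date_completed: date
    planned: bool          # planned/scheduled vs. reactive

wo = WorkOrder("PUMP-P101", "corrective", "SEAL-LEAK", 6.0, 9.5, 2500.0,
               date(2004, 3, 15), planned=False)
print(f"{wo.equipment_id}: {wo.downtime_hours} h downtime, ${wo.parts_cost:,.0f} in parts")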
Case History
With this in mind, this section provides a case history of how Beta International's specialty products division (SPD), with limited previous experience in computerized maintenance management systems, effectively implemented its CMMS to help ensure lower maintenance costs, higher staff productivity, and improved equipment reliability. SPD was faced with an extraordinary set of problems—reduced revenues, increased costs, lower capital and operating budgets, and a maintenance department with:
Expensive breakdowns
Little planning and scheduling
Excessive forms, paperwork, and bureaucracy
Limited equipment histories
Superficial replacement/repair criteria
Minimal equipment failure analysis
Ineffective resource allocation criteria
Limited control of maintenance costs
Limited measures of performance or effectiveness
Or, as the saying goes, "Other than that, they were pretty good." Clearly, something had to be done to improve this situation, and it was viewed by the company as a major "opportunity" for improvement. A part of the solution to this opportunity was to develop a more effective maintenance function, one that provided greater control, improved reliability of the equipment, and promoted greater teamwork between maintenance and operations. The first decision made was to put in place a computerized maintenance management system—a fairly typical step. However, to ensure the effectiveness of this decision, SPD put together a strategic plan that defined:
1. A process for CMMS selection
2. Potential impediments and their resolution
3. A process for implementation
4. A method for measurement of results
5. A continuous improvement process that would ensure successful implementation beyond initial successes and that would not just be viewed as another "magic bullet"
Preliminary Authorization and Selection
Initially, management gave tentative authorization to select and purchase a CMMS. Prior to making this decision, the selection committee made several site visits to review various vendor products in a real working environment. Its conclusion from this review was that it didn't know how to effectively select a system. This led to hiring a consulting specialist for the selection team to facilitate the specification of the CMMS that would meet its needs. After considerable effort defining its needs and processes, the team finalized a specification and put forth a request for proposal (RFP) to several vendors.
After reviewing the proposals, the team narrowed its selection process down to a few vendors that performed office demos of their products. From there the list was narrowed even further by performing site visits, without the vendor, to validate vendor representations, to see the systems in action, to use the systems in a working environment, and to ensure that the system they might choose was effectively employed by others and would meet the team’s requirements.
Selection Criteria
The selection process and criteria went beyond the site visits and vendor demos. Specifically, the selection criteria included:
Value:
1. Conformance to the RFP (including a client/server system architecture)
2. Features and functions (including using the system at a working site)
3. User-friendly (or hostile) characteristics (using the system at a working site)
4. Vendor support systems (including calling as a customer and evaluating the support)
5. Vendor training (including attending a training class before purchase)
6. User satisfaction with the system/results (the vendor's five, plus a random selection)
7. Vendor customer base (total number of customers served)
8. Vendor financial strength
Cost:
1. Initial cost
2. Continuing costs—training, upgrades, phone support, etc.
Primary decision criteria: Value took priority over cost, so long as cost was within a reasonable range. Low bid was not the criterion. Finally, it was found that the actual implementation cost was approximately four times the initial software cost. Managers take note: The implementation process is considerably more expensive than the initial cost and should be considered in the decision-making process.
Impediments and Solutions
There were several impediments to the project from the beginning, each of which was addressed effectively:
1. Culture of the organization:
a. "If it ain't broke . . ."; or its brother, "We've always done it this way."
b. Prospective job change requirements with the union.
c. "You can't plan maintenance."
2. Management support and resources—staff and capital
Clearly, from the discussion, "it" was broken, and with the loss of revenue and climbing costs, something had to be done. Other efforts were ongoing to improve market share, develop new technology, improve operations, etc., and this project was part of an overall effort for improved maintenance practices, which necessarily included predictive and proactive practices. The union was approached with the concept of using a CMMS for solving inherent system problems, to make jobs easier, to reduce the time spent at the parts counter, to have equipment isolated when the crew arrived, to have the tools available, etc. Reduced revenues, climbing costs, and "broken" processes were all basic drivers for change in the organization. Reluctantly, but with assurances that the process was not targeting head count reduction, management proceeded with implementing a CMMS. Any head count reduction requirements would be managed using attrition, reduced contract labor, lower overtime, and reduced contracting of small capital projects. Performance measures were finally concluded to be the only effective
283
284
MAKING COMMON SENSE COMMON PRACTICE
tive way to measure the success of the implementation process, but only after considerable discussion and negotiation. One fear the union had was that the system, once implemented, would be used to “bid out” all the work to contractors, because it was now all defined in the CMMS. Union leaders were assured that this was not the case, and they required routine reassuring of this. This effort required a continued fostering of trust between union leadership and management. As noted, “it” was broken. Management was convinced that something had to be done and authorized the initial study to determine what, how, when, who, where, how much, potential benefit, etc. Following this study, it authorized proceeding with the principal project to specify, purchase, and install the system in a pilot plant, which would in turn be used as the model for the remaining six plants, a total of seven plants having the CMMS installed. More importantly, however, it was recognized that simply installing a system would not be effective without a systematic plan for implementation and continuous improvement, and achieving acceptance by the union and shop floor of the CMMS implementation process. Union support would be needed, because it was responsible for much of the data entry, work orders, repair codes, closing of jobs, etc. To that end, the following was set up:
Implementation at a pilot plant
User group, consisting of a representative from each plant
Newsletter—announcing purpose, successes, failures, actions
“Continuous” periodic training to update learning
Routine meetings to address gossip, rumors, perceptions, problems
Staged approach for easing the effects of change
Implementation committee for facilitating implementation
Pilot Plant Implementation One of the first steps was to select a pilot plant for implementation. After much consideration, and developing the confidence that the implementation would be successful, a mid-sized plant that had
processes similar to its largest plant was selected. This plant was more manageable and would permit proving the process and technology prior to full implementation. At the same time, there was one representative from each of the other plants joining the implementation committee from the beginning to ensure that other plant needs were met, and that when the representatives began the implementation process, they would have a first-hand understanding of all the major issues that had arisen at the pilot plant. The representative, in fact, would be responsible for implementation at his or her plant. Initial expectations and goals as to the timing of the implementation, the training required, the potential cost reduction available, and initial schedules for achieving these goals were established. Startup activities included: 1. A complete inventory of equipment 2. Definition of initial PMs and procedures 3. Solicitation of departmental support from management and skilled trades 4. Routine informational meetings Routine informational meetings were very important to explain the objectives and implementation process; to address any rumors, gossip, or misperceptions; to explain the training that would be done; and so on. Rumors and gossip could take on a life of their own and needed to be addressed quickly and definitively. The consulting specialist continued to facilitate the implementation process and to serve as an outside sounding board and objective mediator on issues of concern. Committees were created to discuss major issues, to back up informational meetings with written word about plans and processes, to discuss successes and “failures”—including the course of action related to any failure—and to generally inform all employees of the system.
Procedures PM procedures were developed using:
Equipment manuals
Interviews with appropriate staff
Discussions with vendors
Better definition of work requirements
Better application of predictive technologies
Better definition of staffing and material/parts needs
Purchasing software was linked to the CMMS and included stores management. Material planning and an automated pick list and kitting were made a part of the work order process. Equipment Database. The criteria used to determine whether or not a particular piece of equipment was to be put into the CMMS database were: 1. Is a history of the equipment needed? 2. Is a PM required for the equipment? 3. Will an operator be able to readily identify the equipment? Based on these criteria, some 20,000+ pieces of equipment were put into the database for all plants. At the largest (and oldest) plant, some 8,000+ items were put into the database over a 2-year period using approximately 8,000 to 10,000 labor hours. Use of the System. Skilled trades routinely did their own time reporting, entered work requests directly, entered closing remarks, and closed jobs out. Backlog was reviewed routinely and used to manage trade work assignments, grouping jobs, planning resources, etc. Management routinely used the system to review equipment histories and to generally manage maintenance activities. Performance measures included backlog, completed work orders, overtime, aged backlog, work order status, work schedule status, actual/planned hours, PM effectiveness, maintenance cost by area, average cost per work order, contractor costs, etc. Other Issues. In retrospect, the selection of a client/server system architecture was essential to the successful implementation of the system, allowing for sharing of data systemwide. However, the ability to integrate various software packages effectively was more difficult than originally anticipated. This will be reviewed in some detail as part of any future systems implementation process.
Results The results were remarkable. When combined with other efforts, such as predictive and proactive methods and improved operational practices, they achieved: 1. Acceptance and use of the system by the skilled trades, by management, and by engineering 2. Better prioritization of work, scheduling, planning, backlog management, and resource allocation 3. Improved equipment histories for better repair/replace decisions, root cause failure analysis, accounting of costs by process and by machine type 4. Improved accountability at all levels where they were measuring performance 5. Improved stores management and lower inventory levels, and improved equipment reliability 6. Much more effective maintenance—better procedures, reduced paperwork, reduced reactive maintenance through preventive maintenance, and elimination of standing work orders (which had been a black hole for money) The pilot plant became the plant with the lowest unit cost of production among the seven plants: 1. Labor hours per job were reduced by 25%. 2. Job completion times were reduced by 20%. 3. Contract labor was reduced by 60%. 4. Work order volume was actually up 35%, with the elimination of standing work orders.
Total Cost Savings Amounted to Well over $6,000,000 Clearly, effective implementation and use of a computerized maintenance management system can provide extraordinary benefits to a company. To achieve these benefits, however, a systematic process must be implemented to ensure proper system selection, implementation, and use. This process also must be combined with predictive and proactive methods, and must fully integrate the engineering, operations, and purchasing functions.
Reference 1. Moore, R. “Implementing a Computerized Maintenance Management System,” Reliability, Directory Issue, 1996.
Effective Use of Contractors in a Manufacturing Plant
12
The best laid schemes o’ mice and men gang aft agley.
Robert Burns
At one of Beta’s major plants, consolidating maintenance contractors and simultaneously going through a downsizing did not achieve the results expected. Indeed, through no fault of the contractor per se, the results were significantly worse than the prior year’s performance, were no better than the prior 2 years, and, at last report, the plant was still struggling to meet expectations.1,2
The Reliability Improvement Program Three years ago Beta’s Maytown plant began a focused effort to improve manufacturing performance. Particular attention was given to improved plant reliability and its potential impact on improved uptime and lower maintenance costs. Revenues of several $100Ms were generated, and included a respectable profit on sales. Hence, the company was not faced with a financial crisis and was simply taking prudent action to ensure its long-term financial health. Over the next several months, certain performance measures were put in place and specific actions taken to improve maintenance performance. These included, among many efforts, measuring uptime and the causes of lost uptime, as well as implementation of
certain maintenance improvement tools, such as: (1) maintenance planning and scheduling and routine PM activities; (2) specific predictive maintenance technologies; (3) more proactive efforts, such as root cause failure analysis, precision alignment, and balancing of critical equipment; (4) operator PM efforts to improve equipment basic care and to relieve maintenance of several routine tasks; (5) assignment of a manufacturing reliability engineer to facilitate the implementation of these practices in a comprehensive manner. In effect, this Beta plant used the reliability strategy described herein as part of a broad improvement process. These efforts showed substantial improvement over the following year, as shown in Figures 12-1 through 12-5. Note: (1) Some data were not available for certain months early in the process due to delays in implementing the measurement system and/or overlaps in different data collection systems; and (2) Some data may have a 1-month or so lag time between the event and recording the information and hence may be slightly nonsynchronous. Uptime, Figure 12-1. Maintaining high uptime had become increasingly difficult, with uptimes typically 70% or higher, but rarely better than 80%. Following the implementation of the reliability program, uptimes improved steadily to a peak of 88%, averaging 82% from December through August of the following year. Maintenance Costs, Figure 12-2. Reviewing the 3-month rolling average chart for maintenance costs, from October to August of the following year, maintenance costs trended steadily downward after implementing reliability practices, going from $980K per month to $480K per month. Maintenance Efforts by Type, Figure 12-3. Reactive maintenance dropped from about 70% in mid-year to about 30% in August the following year. The reduction in reactive maintenance, which typically costs twice that of planned maintenance, was a result of implementing specific preventive (time-based) maintenance, predictive (condition-based) maintenance, and proactive (root cause–based) maintenance practices in an integrated, comprehensive way. In particular, predictive, or condition-based, maintenance was used to confirm the need for scheduled efforts, to trend the condition of equipment, to diagnose the root cause of certain repeated problems, and, in general, to balance the need for preventive and proactive maintenance, making it more optimal.
Maintenance Purchases/Stores Issues, Figure 12-4. Purchasing/stores was estimated to be trending downward between July and August of the following year, consistent with the trend in overall maintenance costs. Safety Performance, Figure 12-5. Considerable progress had been made in safety performance, and the company had achieved an OSHA recordable injury rate of ~ 3 per 200K labor hours through August. A peak rate of 9 in June was attributed to a shutdown and an extraordinarily high level of maintenance.
Consolidating Maintenance Contractors In parallel with the reliability program, the company’s purchasing department in the second quarter of year 1 made a determination that considerable money could be saved by consolidating contractors on site, reducing the administrative and management effort associated with those contractors. It was felt that this was particularly true for maintenance and related contractors, such as minor capital and construction projects. The decision was made in June and the consolidation process began in September.
Figure 12-1. Uptime.
Figure 12-2. Maintenance costs.
Figure 12-3. Maintenance efforts by type.
Figure 12-4. Maintenance purchases/stores issues.
Figure 12-5. Safety performance.
Simultaneously, at the corporate level, considerable benchmarking had been performed with the conclusion, among other things, that maintenance costs were too high and that productivity (units of product per employee) was too low. After considerable debate among management, who apparently were unable to wait long enough to realize the full benefit of the reliability improvement process that was already established, the decision was made to cut the number of employees, with maintenance employees reduced by about half. Following these decisions, performance did not improve. Indeed, performance initially deteriorated and is now only back to about where it was some 2 years ago. Figures 12-6 through 12-11 provide details of “before and after” performance. Uptime, Figure 12-6. Uptime dropped immediately to about 65%, and then rose gradually to near 86%, dropping thereafter to an average of about 75% for the period September to June. However, it should be highlighted that the immediate drop in uptime in September was related to two specific equipment failure events (a large motor and a heat exchanger), and those specific events should not be attributed to the consolidation effort, or to the downsizing effort. As luck would have it, however, the timing of these failures was very inopportune. Nonetheless, the plant had great difficulty recovering to levels of performance prior to the downsizing and contractor consolidation. Overall, uptime dropped from about 82% to 75% following these actions. Maintenance Costs, Figure 12-7. In the first quarter of maintenance contractor consolidation, maintenance costs soared to over $1,080K by December, over twice August and September’s levels, and even significantly above costs a year prior when the reliability improvement program began. As of June ’03, maintenance costs were still at $880K per month, at a comparable level to where they were in October ’01, nearly 2 years prior. In effect, after nearly 2 years of effort, maintenance costs had not improved and, in fact, had increased substantially from about a year prior. Maintenance Levels by Type, Figure 12-8. Likewise, the level of reactive maintenance had trended downward until August ’02, even continuing downward into November to some 20%. However, as the contractor increased its staffing levels and work efforts, predictive maintenance was essentially eliminated, giving way to time-based, or preventive, maintenance, because the contractor was
apparently not familiar with condition-based maintenance methods for trending, diagnosing, commissioning, etc., for improved maintenance performance. Coincidentally, reactive levels also began to rise, peaking in May ’03 at over 40%, comparable to where they had been in early ’02, in spite of substantially increased PM efforts. Indeed, some studies indicate a 10% to 20% probability of introducing defects into equipment using a preventive, or time-based, maintenance approach, as opposed to condition-based. This occurs when equipment that is not in need of overhaul or repair is, in fact, overhauled on a time-based approach, and defects are introduced in the process. Hence, sharply reducing the predictive efforts in favor of preventive (time-based) maintenance appears to have had the effect of increasing reactive maintenance and maintenance costs. Materials Expenditures, Figure 12-9. Stores issues and purchase orders for maintenance parts rose substantially from ’02 levels, to about double what they had been, or some $300K per month. Further, purchase order expenses now represent about $150K of the total, or about half. This is considered to reflect contractors not understanding the stores system and preferring the use of purchase orders, stores not having the material required, or some combination of these.
Figure 12-6. Uptime.
Figure 12-7. Maintenance costs.
Figure 12-8. Maintenance levels by type.
Figure 12-9. Materials expenditures.
Figure 12-10. Safety performance.
Figure 12-11. Scheduled maintenance.
Safety Performance, Figure 12-10. Contractor injury rate initially soared to five times that of employees, but has since improved to about three times the employee rate. Scheduled Maintenance, Figure 12-11. Maintenance that was scheduled was initially less than 20%, but rose substantially to a peak of nearly 75%, thereafter falling back to nearly 65%. All in all, performance has been substantially below expectations by almost any measure—worse than a year ago, before the consolidated contractor came on board, and no better than 2 years ago, when reliability concepts were first introduced to the company.
What Happened? There were many factors at work, many of which were beyond the control of the contractor. The basis for bringing in the new contractor was driven by a need to consolidate contractors to save money, and perhaps to a lesser extent a perception among some that labor costs for contractors were cheaper than in-house labor. In fact, costs actually increased dramatically following consolidation of contractors. The factors associated with this are discussed in the following text. No particular order of importance or magnitude of importance
should be attached to the order of presentation. Rather, it should be viewed as the confluence of a set of these circumstances, which led to less than adequate performance, and should be seriously considered when evaluating the use of contractors in the future. Contractor Experience. The contractor’s staff had limited experience with the plant or equipment that it was maintaining. As a result, a substantial learning curve was necessary for becoming familiar with plant equipment, policies, procedures, practices, location, etc. Further, the contractors had limited experience with use of condition monitoring (predictive maintenance) to help optimize time-based maintenance—not too soon or too late. As a result, they relied primarily on a preventive, or time-based, approach to maintenance, which is typically not optimal in and of itself. This is evidenced by the increase in reactive maintenance levels, which occurred concurrent with (and in spite of) a substantial increase in time-based maintenance. Condition monitoring should be routinely applied to verify that the maintenance task is necessary, to delay doing some tasks, and to validate the quality of the work performed during startup and commissioning. Finally, many of the contractor staff were taken from the construction ranks, where different work processes and methods are practiced, resulting in a more difficult transition into an operating plant. Displaced Contractors. Many existing contractors, who were in effect being eliminated, remained on the job for about 3 months after bringing in the new contractor who would be consolidating contract maintenance. Their motivation in assisting the new contractor was limited. Not surprisingly, existing contractor charges increased substantially during this “transition period” prior to their departure. A transition period may be prudent, but better management may have helped mitigate these costs. Shutdown Impact. Just prior to the advent of the new consolidated contractor, a major shutdown of the entire plant had occurred, which included major maintenance efforts, as well as bringing on line a major new production process. The startup was difficult and time consuming, leaving few resources to manage the integration of the new consolidated maintenance contractor. All this was concurrent with a major downsizing and the attendant morale problems inherent in such a change. Confusion was substantial, and morale and productivity were low.
Transition Process. The process for introducing and integrating the contractor with the plant’s work management process, methods, practices, etc., was poor. There was inadequate understanding of the work order process used at the plant, of the planning and scheduling process, of the CMMS currently in use, etc. There was insufficient management interface, because of the shutdown and startup efforts, to allow for adequate communication and management of the integration effort. Major Equipment Failures. Two major equipment failures occurred just as the consolidation and downsizing processes were beginning, resulting in a large drop in uptime. The timing of the failures was at best inopportune, likely resulting in delays and additional costs during the transition process. Indeed, as noted, the plant struggled to fully recover to previous uptime levels. Loss of Key Management Staff. Concurrent with the arrival of the contractor, two key managers at the plant transferred, exacerbating the overall management process and contractor integration. Loss of Skilled Staff. Many of the employees lost in the downsizing were skilled in maintaining the plant equipment, but began to leave at just about the same time as the contractor consolidation began. Morale was quite low, and the enthusiasm for working with and “training” the new contractors was limited.
Current Situation As the old saying goes, “You can’t go back.” The plant has gone through the effort of bringing in, training, and integrating a consolidated contractor, whose people are now at last familiar with the equipment, work process, policies, etc., to perform in a reasonably effective manner. However, costs are still substantially above what they should be for world-class performance; are above what they were a year ago when the contractor arrived; and are above what the contractor targeted for achievement at the time of its arrival. Given this, the Maytown plant will (and should) continue to use the contractor to support world-class performance. However, in the short term more regular and intensive meetings will occur with the contractor to establish and continuously improve on the relationship and to create clearer expectations. At this point, making the contractor an integral part of the operation is a necessity, not an option; and
establishing expectations and performance measures targeted at world-class performance is also a necessity, not an option. These performance measures will be “value added” type measures, described later. The guiding principle behind these measures is that the contractor must deliver an effect, not simply supply a service for a fee. Effects desired include improved equipment life, higher availability and uptime, lower maintenance costs per unit of contractor supply (e.g., normalized to account for issues such as total assets under its care, or total product produced), excellent safety performance, etc. In other words, the contractor will be held to the same high standards as the balance of the organization and become a genuine partner in the plant’s success. For example, measures of effect (and success) will include:
Uptime (increasing)
Production losses (units, $) resulting from equipment downtime (decreasing)
Maintenance cost as a percent of unit cost of product (decreasing)
Maintenance cost as a percent of asset replacement value (decreasing)
Mean time between repairs, e.g., average equipment life (increasing)
Overtime hours percentage (decreasing)
Rework effort, nonconformance reports, warranty claims (decreasing)
Safety record, injury rate (decreasing)
Housekeeping (improving)
Permit failures, work rule violations (decreasing)
Grievances (decreasing)
Turnover (decreasing)
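As one hypothetical illustration (not Beta’s actual reporting format), several of the effect measures listed above reduce to simple ratios that can be trended period over period; the field names and figures below are invented for the sketch:

# Hypothetical sketch: normalize contractor "effect" measures so they can be trended.
from dataclasses import dataclass

@dataclass
class Period:
    uptime_pct: float               # plant uptime, %
    maintenance_cost: float         # total maintenance spend, $
    production_value: float         # value of product produced, $
    asset_replacement_value: float  # plant replacement value, $
    run_hours: float
    repair_events: int

def effect_measures(p):
    return {
        "uptime_pct": p.uptime_pct,                                                     # should increase
        "maint_cost_pct_of_production": 100 * p.maintenance_cost / p.production_value,  # should decrease
        "maint_cost_pct_of_rav": 100 * p.maintenance_cost / p.asset_replacement_value,  # should decrease
        "mean_time_between_repairs_h": p.run_hours / max(p.repair_events, 1),           # should increase
    }

before = Period(75.0, 2_600_000, 30_000_000, 120_000_000, 1_600, 90)
after = Period(82.0, 2_100_000, 34_000_000, 120_000_000, 1_750, 60)
print(effect_measures(before))
print(effect_measures(after))

Normalizing cost to output and to asset replacement value keeps the comparison fair as the contractor’s scope grows or shrinks.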
The contractor will also be required to provide certification of skill type and level for each person provided, as well as ensure that personnel are properly trained in safety practices and policies.
A fresh look at the plant’s operation will be taken, and firm leadership will be exercised, particularly regarding expectations of contractor performance. Maintenance excellence is not an option, it is a requirement. To achieve this (which is not just doing a lot of PM), a balanced maintenance program will be reestablished, including greater application of predictive and proactive methods. Greater teamwork between maintenance and operations will also be established, and teamwork will be required between both functions with a view to work collectively to eliminate losses from ideal. Too much is at stake for the success of the business. Therefore, the plant, in cooperation with the contractor, will ensure that: 1. A strong predictive, or condition-based, maintenance program, including operator input of equipment and process condition, is reestablished. 2. Existing PM tactics are reviewed with the intent to optimize them. Maintenance planning and scheduling will be tempered with good condition monitoring, and the quality of the maintenance effort will be validated with a commissioning process. 3. Operations personnel will become much more involved in the operational reliability and the things they can do to improve performance, including operator PM as appropriate. Operational excellence is also a requirement, not an option.
Lessons Learned Beta’s lessons learned are substantial, and may be summarized as follows: 1. Maintenance is a core competency and may be difficult to consolidate in contractors. 2. When bringing in a contractor, a specific integration process must be established that includes training and indoctrination of the contractor in and holding the contractor accountable for:
General company policies and procedures
Company work practices, use of its CMMS, planning and scheduling, etc.
Continuation of and/or use of certain technologies and methods, e.g., predictive maintenance tools, alignment and balancing, etc.
Certification of skills determined by the company to be required for the contract
Company safety performance
Use of company procedures for material and parts procurement
This process should include a specific individual on site who is held accountable for ensuring a smooth integration of the contractor into the work processes, e.g., a project manager. 3. Success-based performance measures and expectations must be established at the front end that define the effect that the contractor will deliver. 4. Consolidation of contractors is a substantial change in the organization, and should generally not be done concurrently with other major changes, e.g., immediately following a major shutdown and during a startup period, transfer of key management personnel (for other than disciplinary reasons). 5. Displacing existing contractors also needs a process for integration of and/or elimination of the need for those contractors that are being displaced. It is not sufficient to allow the contractors to define the integration process. This task should also be the responsibility of the project manager for the contractor consolidation process. 6. If contractors are consolidated during a downsizing, recognize that the employees who are leaving are not likely to be enthused about transferring their knowledge. Morale will likely decline, and this is a “cost” that must be considered. Even in the best of circumstances, major changes to any “system” will always result in transitory effects—the system will generally get worse before it gets better. This too is a cost that should be considered in the decision-making process. This case study provides an order of magnitude estimate of the cost effects that could occur and
will help avoid those costs. Perhaps the best policy would be to avoid the costs in the first place. Simply replacing one set of warm bodies with another will not necessarily deliver the effect desired. The processes that result in improved performance must be put in place. Beta has learned from this exercise, and as a result has developed a policy statement regarding the best use of contractors.
Best Use of Contractors In many organizations today, the heavy focus on cost cutting has led to an increased emphasis on the use of contractors or, in some cases, contractor consolidation as a potential solution for reducing overall costs. Moreover, in recent years maintenance in particular has received increased attention as a candidate for replacing maintenance employees with contractors. While that was not the intention in this case, replacing employees with contractors has occurred at several Beta plants worldwide. As you might expect, the use of contractors in many of Beta’s manufacturing plants and facilities has been a sore point with the skilled trades, particularly in a strong union environment. While some of this may be normal tension between the “shop floor” and contractors, particularly during a time of downsizing, the intensity appears to be growing as more contractors come into use. Further, many skilled trades working in Beta’s chemical plants or other hazardous areas express concerns similar to those related to the ValuJet crash, which, according to the Wall Street Journal,3 “. . . raised troubling questions about the safety implications of such penny-pinching practices as contracting out maintenance, hiring less-experienced workers, and focusing less intensely on training.” This clearly begs the question—Are contractors appropriate for a given manufacturing organization? The answer is clearly yes, for in almost all organizations contractors play a critical, if not essential, role. However, given the experience cited, caution is urged for all organizations in how they ensure effective use of contractors. Lower head count and/or lower charge rates from contractors (consolidated or otherwise) may not be the only issues to consider. The real question about the use of contractors is how they support corporate goals related to manufacturing excellence, such as uptime; unit cost of production; safety performance; maintenance cost as a % of plant replacement value;
etc. What is the risk in using (or not using) contractors relative to these goals? What specific roles are best suited for contractors? What processes are used to integrate contractors into an overall business strategy? Does a simple cost-cutting strategy work? As outlined in Chapter 1, a simple cost-cutting strategy has a low probability of providing the manufacturing improvements desired. Further, the better companies have concluded that equipment reliability and uptime should take priority over cost-cutting and that maintenance contribution to uptime is worth 10 times the potential for cost reduction. But, the expectation is still clear—lower costs will result from applying best practice. Moreover, Schuyler4 reports that reducing maintenance costs, with no change in maintenance strategy, results in a reduction in mechanical availability (and, presumably, a reduction in plant uptime and/or an increase in plant costs), and states that “. . . the only way to simultaneously reduce maintenance costs and increase mechanical availability is to move toward more proactive maintenance strategies.” See Figure 9-11 for an illustration of this concept. Replacing the maintenance function (or any function for that matter) with contractors may not be the proper strategic decision. For example, would you replace your operations function with contractors? Your engineering function? Your accounting function? Are they “core”? Why would Beta not view the years (perhaps decades) of experience developed by its maintenance function as part of a core competency, much the same as it does the other functions? At Beta, as well as many other manufacturing organizations, maintenance is indeed coming to be viewed as a core competency for manufacturing excellence, which is as it should be. Given this, what is the proper use of contractors? Good contractors have a place in most organizations, and the following policy is Beta’s model for when contractors should be considered and/or used: 1. For doing the low-skill jobs such as landscaping, custodial duties, etc., which are not part of the company’s core competency. 2. For doing the high-skill jobs, e.g., turbine generator balancing, infrared thermography, machine tool installation and qualification, etc., where the skill is not routinely used and cannot be justified on a routine cost basis (and the individuals with these skills often leave to make more money working for contractors). 3. For supporting major overhauls and turnarounds, when keeping the level of staff required to support annual or biannual efforts is not economically justified. 4. For emergency situations, when the workload overwhelms the existing capability. 5. For other situations, where the use of contractors is clearly in the best interest of the business.
Contractor Selection When selecting contractors, Beta will also use the following model for selecting and managing contractors, including keeping records of contractor performance in key areas:
Define the scope of work in light of the specific type of contract to be implemented, e.g., lump sum fixed price, time and material, cost plus incentive fee, etc.
Define the experience or expertise required, including any licenses or skill certifications.
Determine the contractor’s track record with the organization, as well as other skills required: Administration; Project management; QA; Safety; Planning and scheduling; Cost control; Timely completion; Subcontractor relationships; Internal conflicts; Housekeeping, especially at job completion; Engineering and reliability improvement skills; Quality of work—equipment life, unit cost, etc.
Define working relationships: Functional level; Flexibility; Personnel compatibility (at multiple levels); Cultural compatibility
Internal and external union agreements compatibility: Track record on harmony or disharmony, e.g., grievances, strikes, etc.; Process for handling conflicts
Preaward: Capability; Availability
Terms and conditions: Design in ownership for results, warranty of quality; Payment terms
Scope of work: Basic requirements; Effect desired—value added; Boundaries
Financial issues: Contractor’s reputation regarding contract disputes, add-ons; Contractor’s financial strength; Basis for resolving financial disputes
Safety, health, and environment issues: Historical performance record; Current policies and practices; Current safety training
Compatibility with company administrative systems: Time sheets; Work orders; Accounts payable; Reporting systems
Accreditation: Systems in use for company; Systems in use for individual employees
Attitudes: Supportive of company business goals—value added; Measurement systems in place for this
Ownership of intellectual property: Drawings; Process technologies; Patents
Further, contractors will be held to the same high standards as employees for: (1) safety performance (in my experience, contractor safety performance is typically much poorer than employee safety performance); (2) installation and commissioning in verifying the quality of their work; and (3) housekeeping at the conclusion of a job. Beta recognizes that contractors have a potential inherent conflict, particularly in maintenance. One of the goals with any business is to grow the business (“grow or die”), and it is reasonable to conclude that a given contractor’s goal is to grow its company’s business by increasing revenue. As a result, there could be considerable temptation to increase the maintenance effort, rather than focus on long-term equipment reliability for reducing the maintenance effort (and revenues) over the long haul. At the very least, enthusiasm for reducing the long-term maintenance level of effort could be diminished by the desire to grow the business. This is not to say there is anything dishonest or wrong with this inherent desire. Indeed, in many organizations the maintenance department often puts forth a large backlog as proof of the need to retain a given number of employees, sometimes without a careful analysis of why such a large backlog exists, and of how to eliminate the need for the maintenance that makes up the backlog. It is just human nature to want to protect
one’s job and/or to grow a business, and caution should be exercised regarding this issue when hiring contractors. Moreover, Beta believes that a contractor may be less likely to have the same level of loyalty to the company as its employees, particularly if Beta’s management exercises clear leadership and creates an environment supportive of mutual loyalty—company to employee and employee to company. Such a condition requires mutual trust and respect, something in need of improvement at Beta, where cost-cutting had become a way of life. While mutual loyalty may seem like an outdated concept in today’s world of downsizing and the apparent lack of mutual loyalty in many companies, it is now and will always be a concept that merits consideration and fostering. Properly done, a good operations and maintenance organization can offer greater loyalty to the company and can outperform a contractor in core competencies over the long term. Measuring uptime or OEE and defining the causes of losses from ideal performance in a team environment, with maintenance and operations working with a common sense of purpose and toward common goals, is more likely to be successful, especially when both operations and maintenance are viewed as core competencies. Concurrently, however, employees within Beta must recognize that they are indeed competing with contractors and therefore must constantly seek to add greater value, e.g., greater equipment reliability, higher uptime, lower unit cost of production, better safety, etc., than might be done by a contractor. Let’s consider the situation at a non-Beta plant. The plant was not particularly well run, but not necessarily because of current plant management. Over several years, and under considerable political pressure from senior management to avoid a strike and keep the plant running at all costs, the plant came to have 14 unions operating within it. As you might expect, each union had its rules, its contract, its “piece of turf” that it wanted to protect. Job actions were threatened, grievances were filed, and overall the plant just was not operated very efficiently. Its unit cost of production was quite high compared with comparable companies in the US. At the same time, the plant was faced with reduced revenues and increased costs. Clearly, in this circumstance, contracting the maintenance and operations functions becomes an option, even at the peril of the strife that will likely ensue at the plant. Changes will occur at this plant. It’s only a matter of time.
Finally, it is recognized that this experience does not cover all circumstances wherein contracting any given function may be appropriate, such as at a new plant that is essentially a “green field,” when costs are truly extraordinary, or in a situation of intransigence with several different unions at one site, etc. For example, at one of Beta’s plants, the machine repair shop was contracted out after management concluded that bringing its own staff to a world-class level of performance would take some 3 years or more, because the staff lacked adequate experience in machinery repair. Contracting the function was expected to reduce the time by 18 to 24 months, bringing substantial incremental benefit. Applied judiciously and integrated with existing maintenance employees, contractors can make an exceptional addition to any manufacturing team, but they must be held to the same high standards as the balance of the organization, one that is driven to achieve world-class performance.
Summary Beta has learned an enormous amount through this experience and has developed a solid understanding for more effectively employing contractors. Contractors play a key role within most all organizations, but consolidation of contractors requires a specific set of objectives and processes. Further, caution is urged relative to the temptation to replace core competencies such as maintenance (or operations, or any other function) with contractors, and the models and policies developed from Beta’s experience are offered as bases for developing policies and practices for the use of contractors. Contractors must be conditioned to deliver an effect, not simply supply warm bodies for a fee. Finally, most maintenance departments, and employees for that matter, must recognize that they are competing with contractors and therefore must add more value to the operation than a contractor otherwise would, in increased performance or reduced unit cost. To do so ensures greater probability of continued employment. To not do so creates greater risk of loss of employment to a contractor.
References 1. Moore, R. “Switch in Contract Maintenance Proves Costly,” Maintenance Technology, Vol. II, No. 2, February 1998.
2. Moore, R. “Using Maintenance Contractors Effectively,” Maintenance Technology, Vol. II, No. 3, March 1998. 3. The Wall Street Journal, June 20, 1996, New York. 4. Schuyler, R. L. III. “A Systems Approach to Improving Process Plant Performance,” Maintenance and Reliability Conference, University of Tennessee, Knoxville, TN, May 1997.
Total Productive and Reliability-Centered Maintenance
13
Put all machinery in the best possible condition, keep it that way, and insist on absolute cleanliness everywhere in order that a man may learn to respect his tools, his surroundings, and himself.
Henry Ford, Today and Tomorrow
A recent experience at one of Beta’s large discrete parts manufacturing plants shows how combining total productive maintenance,1 or TPM, and reliability-centered maintenance,2-4 or RCM, increased teamwork between the maintenance and production functions, improved equipment reliability and uptime, and lowered operating costs. Among the tools applied at Beta’s Allen Central plant are TPM, with a particular focus on operator care and minor PM, or, as some people would say, TLC—“Tender Loving Care”—in the form of actions such as tightening, lubricating, and cleaning. Other tools are also being introduced and applied in an integrated way, including continuous improvement teams, improved cell design, pull systems, process mapping, etc., but the focus of this case history is how TPM was integrated with RCM.5 The application of TPM at this plant, and other plants, has focused most of its attention on operator PM and basic care, operator “condition monitoring,” etc. These practices are essential for ensuring manufacturing excellence, but used alone are not likely to be sufficient. In this case, the focus on TPM did not adequately consider other, perhaps equally valid or even more advanced,
methodologies, such as reliability-centered maintenance, predictive maintenance, root cause analysis, maintenance planning, etc. This view was confirmed by the maintenance manager for the business, who felt that while TPM was an effective tool for ensuring basic care for the equipment, for detecting the onset of failures, and often for preventing failures in the first place, it frequently overlooked other maintenance tools and requirements. This, in turn, often resulted in equipment breakdowns and in frequent reactive maintenance, not to mention the largest loss—reduced production capacity. As a result, Beta embarked upon an effort to combine the best of TPM and RCM to provide the most effective processes for both maintenance and production. In the process we expected to be able to provide manufacturing excellence—maximum uptime, minimum unit cost of production, maximum equipment reliability. Each methodology is discussed individually and then the process for combining them is provided.
Total Productive Maintenance Principles—TPM Total productive maintenance, or TPM as it is commonly called, is a strategy for improving productivity through improved maintenance and related practices. It has come to be recognized as an excellent tool for improving productivity, capacity, and teamwork within a manufacturing company. However, the Japanese cultural environment in which the TPM strategy was developed may be different from the culture in other manufacturing plants worldwide, particularly in US plants, and as such may require additional consideration. A particular point worth highlighting is that in the TPM culture, maintenance is about maintaining plant and equipment function, and not so much about fixing things after they break, as it is in the US and other western plants. This simple point is critical in ensuring manufacturing excellence. If we view maintenance as a repair function, fixing things after they break, then, as Senge said, we are doomed to reactiveness. On the other hand, if we view maintenance as maintaining the plant and equipment function, that is, prevent its breaking, then we are more likely to eliminate the defects that cause the failures and their attendant costs and lost production. A better translation for TPM might have been total productive manufacturing, with its focus on maintaining the plant’s function.
TPM was published in Japan by Seiichi Nakajima.1 With its Japanese origins, the strategy places a high value on teamwork, consensus building, and continuous improvement; it tends to be more structured in its cultural style—everyone understands his or her role and generally acts according to an understood protocol. Teamwork is a highly prized virtue, whereas individualism may be frowned upon. This basic underlying genesis of the Japanese TPM strategy is a significant issue to be understood when applying TPM to a given manufacturing plant. This may be especially true in a US manufacturing plant, because US culture tends to value individualism more, and to value people who are good at crisis management, who rise to the occasion to take on seemingly insurmountable challenges and prevail. We tend to reward those who respond quickly to crises and solve them. We tend to ignore those who simply “do a good job.” They are not particularly visible—no squeaking and therefore no grease. This “hero worship” and individualism may be an inherent part of our culture, reinforcing the behavior associated with the high levels of reactive maintenance at most plants, possibly making the implementation of TPM more problematic. This is not to say that TPM faces overly serious impediments or is an ineffective tool in non-Japanese manufacturing plants. On the contrary, when an organization’s leadership has made it clear that the success of the organization is more important than the individual, while still recognizing individual contributions, a team-oriented corporate culture develops that transcends the tendency for the individualistic culture, and success is more likely. Many plants worldwide have used TPM effectively, and most plants have tremendous need for improved communication and teamwork that could be facilitated using the TPM methodology. Most would be better off with fewer heroes and more reliable production capacity. Considerable progress has been made through programs and strategies like TPM, but it is still evident that in many plants there still exist strong barriers to communication and teamwork. How often have you heard from operations people, “If only maintenance would fix the equipment right, then we could make more product!” and from maintenance people, “If operations wouldn’t run the equipment into the ground, and would give the time to do the job right, we could make more product!” or from engineering people, “If they’d just operate and maintain the equipment properly, the equipment is designed properly (he thinks) to make more product!” and so on. The truth lies in “the middle,” with
“the middle” being a condition of teamwork, combined with individual contribution and responsibility and effective communication. This chapter provides a case history of one of Beta’s plants for effecting this “middle,” particularly as it relates to combining two strategies, TPM and RCM, which sometimes appear to be in conflict when applied at the same plant. Total productive maintenance implies that all maintenance activities must be productive and that they should yield gains in productivity. Reliability-centered maintenance implies that the maintenance function must be focused on ensuring reliability in equipment and systems. As we’ll see, RCM also calls for an analysis for determining maintenance needs. Properly combined, the two work well together. If we follow TPM principles, we would conclude that when the equipment is new and just installed, it is as bad as it will ever be. This differs from most companies that generally believe that when the equipment is new, it is as good as it will ever be. This view that equipment is as bad as it will ever be when it is new would serve most companies exceptionally well and instill in them a deep commitment to continuous improvement. TPM calls for measuring all losses from ideal and managing them. As we discussed in Chapter 1, knowing our losses from ideal is exceptionally beneficial in helping us tactically manage and prioritize day-to-day production losses, and strategically manage the potential for supporting additional market share without incremental capital investment, understanding more fully those capital investment needs. The measure used is defined as overall equipment effectiveness (OEE), and OEE = Availability × Rate × Quality. For example, suppose our equipment was 95% available for production, but when it was available and running, it only ran at 95% of ideal rate; and when it ran at this rate, it only produced 95% first-pass, first-quality product. Then our net rate would be 85.7% of ideal (0.95 × 0.95 × 0.95). If we simultaneously had three production units set up in series to make our final product, and each production unit was asynchronously experiencing these losses from ideal, then our effective production rate would be 63% (0.857 × 0.857 × 0.857). This makes it exceptionally important to run every step in the production process exceptionally well, so that we avoid the cumulative losses in a production line. To do this we must know where and why the losses are occurring.
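The arithmetic in the example above can be written out directly; the sketch below simply restates the 95%/95%/95% case and the three-unit line from the text, so no real plant data is assumed:

# OEE = Availability x Rate x Quality, using the example figures from the text.
def oee(availability, rate, quality):
    return availability * rate * quality

unit_oee = oee(0.95, 0.95, 0.95)   # ~0.857 for a single production unit
line_oee = unit_oee ** 3           # three units in series, each losing asynchronously
print(f"{unit_oee:.1%} per unit, {line_oee:.1%} for the line")  # 85.7% per unit, 63.0% for the line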
The basic pillars of TPM and some thoughts on their relationship to an RCM strategy are:
TPM calls for restoring equipment to a like-new condition. Operators and production staff can contribute substantially to this process. At the same time, according to RCM studies, up to 68% of equipment failures can occur in the infant mortality mode—at installation and startup—or shortly thereafter. Good TPM practices will help minimize this through restoration of equipment to like-new condition and operator basic care. However, it is often the case that more advanced practices may need to be applied, e.g., a stringent commissioning of the equipment as well as the process, and using condition-monitoring tools and standards for methods such as vibration, oil, infrared, analysis instruments, and software to verify this like-new condition. It may also be especially important to understand failure modes and effects to take steps in both operations and production to mitigate or eliminate those failure modes. Many will say that TPM calls for application of predictive maintenance, or condition monitoring. However, this tends to be quite limited and usually only involves operator “condition monitoring,” e.g., look, touch, feel, etc., certainly good practice, but at times insufficient. Further, condition monitoring as practiced under TPM may not have a strong foundation, such as a failure modes analysis, which drives the specific technology being applied. For example, reasoning such as, “It’s a bearing and we do vibration analysis on bearings” without defining whether the failure modes require spectral, overall, and/or shock pulse techniques, may not provide an adequate analysis method for the machinery in question.
TPM calls for operator involvement in maintaining equipment. This is a must in a modern manufacturing plant. However, the operator often needs to be able to call upon specialists in more advanced technologies when a problem starts developing in the equipment. These specialists can use RCM principles such as failure modes and effects analysis, as well as condition monitoring tools, such as vibration analysis, to identify and prioritize problems and get to their root cause.
TPM calls for improving maintenance efficiency and effectiveness. This is also a hallmark of RCM. Many plants make extensive use of preventive maintenance or so-called PMs. However, while inspection and minor PMs are appropriate, intrusive PMs for equipment overhaul may not be, unless validated by equipment condition review, because, according to RCM studies, little equipment is truly average. RCM helps determine which PM is most effective, which should be done by operators, which should be done by maintenance, and which deserve attention from design and procurement. PMs become more effective because they are based on sound analysis, using appropriate methods.
TPM calls for training people to improve their job skills. RCM helps identify the failure modes that are driven by poorly qualified staff, and hence identify the areas for additional training. In some cases it may actually eliminate the failure mode entirely, thus potentially eliminating the need for training in that area. RCM is highly supportive of TPM, because training needs can be more effectively and specifically identified and performed.
TPM calls for equipment management and maintenance prevention. This is inherent in RCM principles by identifying failure modes and avoiding them. Equipment is thus more effectively managed through standards for reliability at purchase (or overhaul), during storage, installation, during operation and maintenance, and in a continuous cycle that feeds the design process for reliability improvement. Maintenance is prevented by doing things that increase equipment life and maximize maintenance intervals, by avoiding unnecessary PMs through condition knowledge, and by constantly being proactive in seeking to improve reliability.
TPM calls for the effective use of preventive and predictive maintenance technology. RCM methods will help identify when and how to most effectively use preventive and predictive maintenance through a failure modes analysis to determine the most appropriate method to detect onset of failure, e.g., using operators as “condition monitors,” or using a more traditional approach in predictive tools.
One final thought is that many plants have come to think of TPM as operators doing PM and measuring OEE. Certainly that’s an essential part of it. However, keep in mind that in a TPM philosophy, maintenance is about maintaining equipment function, not repairing equipment. This is a huge philosophical difference and requires a different maintenance ethos, one that, though perhaps contrary to the TPM name, is focused on eliminating defects that result in the repair requirement—it’s about maintenance prevention, not preventive maintenance!
Reliability-Centered Maintenance Principles—RCM The primary objective of RCM is to preserve system function, as opposed to equipment function. This implies that if the system function can continue even after failure of a piece of equipment, then preserving this equipment may not be necessary, or run to failure may be acceptable. The methodology itself can be summarized as follows:2-4 1. Identify your systems, their boundaries, and their functions. 2. Identify the failure modes that can result in any loss of system function. 3. Prioritize the functional needs using a criticality analysis, which includes effects and consequences. 4. Select the applicable PM tasks, or other actions, that preserve system function. In doing the analysis, equipment histories are needed, and teamwork is also necessary to gather the appropriate information for applying these steps. However, not having equipment histories in a database should not negate the ability to do an RCM analysis. As demonstrated in the following text, equipment histories can be found in the minds of the operators and technicians. Moreover, operators can help detect the onset of failure, and take action to avoid these failures. Similar to TPM, RCM describes maintenance in four categories: preventive, predictive, failure finding, and run to failure. At times the difference between these can be elusive.
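As a concrete illustration of step 3, a criticality analysis is often reduced to rating each failure mode’s likelihood and consequence and ranking by their product; the scales, equipment, and scores below are hypothetical, and practitioners use several scoring variants (for example, a risk priority number that adds a detection factor):

# Hypothetical criticality ranking for RCM step 3 (1-10 scales, invented scores).
failure_modes = [
    # (system, failure mode, likelihood, consequence)
    ("cooling water pump", "bearing wear", 7, 6),
    ("cooling water pump", "mechanical seal leak", 5, 4),
    ("reactor agitator", "gearbox failure", 3, 9),
    ("transfer conveyor", "belt misalignment", 6, 3),
]

ranked = sorted(failure_modes, key=lambda fm: fm[2] * fm[3], reverse=True)
for system, mode, likelihood, consequence in ranked:
    print(f"{likelihood * consequence:3d}  {system:20s} {mode}")
# Highest-criticality modes get PM/PdM tasks first; very low scores may justify run to failure.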
RCM analysis as traditionally practiced can require lots of paperwork—it's very systematic and can be document intensive. It has been shown to be very successful in several industries, particularly in airlines and nuclear industries, which have an inherent requirement for high reliability and a very low tolerance for the risk of functional failure. This intolerance for risk of functional failure has also led to redundant equipment and systems to ensure system function, a principle that, when applied to industrial manufacturing, could put considerable pressure on manufacturing capital and operating budgets.

When applying RCM methods, the system selection criteria typically include a Pareto analysis of those systems that have a large impact on capacity, high maintenance costs, frequent failures and/or corrective maintenance, safety, and the environment. Within a system, components, failure modes, failure causes, and failure effects are systematically defined at the local, system, and plant levels. In turn, this information is used to establish PM requirements. Typical failure mode descriptors might be words such as worn, bent, dirty, binding, burned, cut, corroded, cracked, delaminated, jammed, melted, pitted, punctured, loose, twisted, etc. RCM is a good, disciplined methodology in that it documents processes, focuses effort on function, facilitates PM optimization (don't do what you don't need to do to prevent failures—you may introduce more failures than you eliminate), facilitates teamwork, and facilitates equipment histories and the use of a computerized maintenance management system.

There are, however, some potential RCM pitfalls; these are addressed through the proper application of RCM and by the better practitioners, but a few words of caution are in order. For example, if backup equipment exists, it is important not to simply assume that it will be available when needed. It is often the case that backup equipment will have failure modes different from operating equipment, and so will require a different maintenance and operating strategy. Take the case of a primary and backup pump. Failure modes associated with the primary pump might be related to cavitation, vibration, poor lubrication, or corrosion. The backup, however, might fail because of brinelling, moisture accumulation in the motor windings, sedimentation buildup in the casing due to a leaking isolation valve, or other causes. Further, its startup and shutdown procedures must be developed so as to avoid any failure
modes and then followed rigorously. Otherwise, you may actually induce more failures, and risk and cost, with the primary/backup arrangement than you would with a single pump. The various failure modes associated with each operating state, along with its consequence of failure, must be analyzed to accurately characterize the pump’s functional requirements and resultant operating and maintenance strategy. Moreover, its traditional or historical focus has tended to primarily be on PM activities, versus a more proactive, integrated approach, which includes the effects of product mix, production practices, procurement practices, installation practices, commissioning practices, stores practices, etc. The more advanced application by most current practitioners does include these effects, and caution should be exercised to ensure that these issues are included. Another caution is related to the experience of Beta’s Wayland plant. RCM uses a failure modes and effects analysis method to analyze potential failures. Using this information then, for example, PMs are put in place to avoid or mitigate the failure mode. At Beta’s Wayland plant a fairly thorough FMEA, or failure modes and effects analysis, was performed. The results of this analysis, which was done not long before the plant began operation, provided a rank order of the criticality of the systems in place at Wayland. However, after several years of operation, the actual problems, as measured by system downtime, differed substantially from the original analysis. See Figure 13-1. The reasons for these differences are not clear. It could be that once the analysis was completed, people tended to spend a great deal of energy making sure that Plant System No. 1 was superbly operated and maintained, perhaps ignoring Systems Nos. 7, 8, and 9, to their great detriment. It could be that the analysis lacked complete data regarding equipment operation and maintenance practices. It could be other factors were involved. In any event, Figure 13-1 highlights the fact that a FMEA, which tends to be “forward thinking about what the problems could be,” may not necessarily agree with what problems are actually experienced. However, it does represent a good learning and systems thinking process and should be used in that light. Most RCM-type analysis done at Beta has relied heavily on actual historical operating experience. The primary objective of RCM is to preserve system function. It calls for a systematic process for definition of system boundaries and
functions, for the analysis of failure modes that result in loss of function, and for putting in place those tasks that preserve system function. It can be an excellent part of an overall maintenance and manufacturing strategy.

Figure 13-1. Comparison of FMEA analysis to actual plant problems.
Case Study—Applying FMEA/RCM Principles at a Business System Level

At Beta's Allen Central plant, the first step in applying failure modes and effects analysis (FMEA) and reliability-centered maintenance (RCM) principles at a business system level was to bring together a cross-functional team of people to review a critical production line with the goal of identifying the failures resulting in a loss of function. Because Beta was having difficulty meeting production demand, and could sell all it was making, loss of production capacity was a huge functional failure. This team included production supervisors, operators, maintenance supervisors, engineers, mechanics, technicians, electricians, etc.—people who knew the production process and the equipment. As necessary, support staff such as purchasing and stores people were brought in to help in defining and eliminating failures. From the outset, however, a functional failure in the system (the production line) was defined as anything that resulted in loss of production output, or resulted in incurring extraordinary costs. That is
to say, they did not restrict themselves to a functional failure of equipment, but rather focused on the production line as a system experiencing functional failures, e.g., no production. They also looked at the frequency of these functional failures and their effects, principally their financial effect as measured by the value of lost uptime or extra costs. Using the uptime improvement model described in Chapter 2, the team initially focused on the first production step, say step A, but once they finished with identifying all the major functional failures (loss of production and/or extraordinary costs) in step A, they looked downstream and asked questions: Are failures in step B causing any failures in step A? Are failures in utilities causing any failures in step A? Are any failures in purchasing or personnel causing failures in step A? And so on. They walked through each step in the production line looking for areas where actions (or failures to act) were resulting in production losses or major costs (functional failures). They also made sure that all the support functions were encouraged to communicate with the team regarding how the team could help the support functions more effectively perform their jobs through better communication. A point worth mentioning is that this is likely to be an imprecise process, bordering on controlled chaos, and, given the inability to accurately measure the losses, we’re often only “guesstimating” at their value. However, these estimates will be made by those who should be in a position to know best, and they can be validated later. Perhaps more importantly, an additional benefit is that they have staff working as a team to understand each other’s issues, and they are using this information to focus on common goals— improving process and equipment reliability, reducing costs, improving uptime, and, in the final analysis, improving the company’s financial performance. For example, they found that: 1. Raw material quality was a contributor to functional failures, e.g., lost uptime, lost quality, poor process yields, etc. Operations and maintenance have little control to correct this problem, but they can advise others of the need to correct it. 2. Gearbox failures were a contributor to mechanical failures. There’s generally little that an operator can do to detect a
gearbox failure with the typical operator PM, if the gearbox was poorly installed or specified. 3. Operator inexperience and lack of training were also concluded as significant contributors to production losses. The problems resulting from inexperience were often logged as maintenance downtime. For example, on one machine it was initially felt that electrical problems were the major source of maintenance problems. However, on review, it turned out that electricians (under a work order) were coming to the machines to train inexperienced operators in machine functions and operation. 4. Production planning was driven by a sales force lacking concern as to its impact on production and the inherent “functional failures” that occur when equipment is required to make frequent changeovers. While their decisions may be right strategically, it almost always requires a more comprehensive review and better integration of the marketing and manufacturing strategies, as discussed in Chapter 3. 5. Spare parts were frequently not available, or of poor suitability or quality. Purchasing was driven to keep inventory low, without sufficient consideration as to the impact of a lack of spares on production. Better specifications and understanding of maintenance needs were required, and low bid should not be the only criterion. 6. Inherent design features (or lack thereof) made maintenance a difficult and time-consuming effort, e.g., insufficient pull and lay-down space, etc. Lowest installed cost was the principal criterion for capital projects, versus a more proactive lowest life-cycle cost. 7. Poor power quality was resulting in potential electronic problems and was believed to be causing reduced electrical equipment life. Power quality hadn’t been considered by the engineers as a factor in equipment and process reliability. 8. Lubrication requirements had been delegated to the operators without adequate training, procedures, and supervision. 9. Mechanics were in need of training on critical equipment and/or precision mechanical skills. A few needed a review course on the equipment itself and in bearing handling and installation.
10. Lay-down and pull space for maintenance was not adequate in some cases, resulting in additional downtime and maintenance costs. This was due to the focused factory and lean manufacturing efforts that had reduced floor space substantially for all manufacturing cells, without giving due consideration to maintenance requirements. 11. Last, and perhaps most importantly from an equipment reliability standpoint, precision alignment of the machine tools was sorely lacking and, if implemented, should dramatically improve machinery reliability and hence reduce system failures.

Beyond these general findings about functional failures in the system, they also found that three separate sets of production equipment were key to improving the overall system (production line) function. Functional failures in this equipment were resulting in the bottlenecking of the production line. It varied from day to day as to which equipment in the production process was the "bottleneck," depending on which equipment was down. Therefore, all three steps in the production process and equipment were put through a further RCM analysis to develop the next level of detail. At this next level, a method was established for assessing the criticality of the equipment by creating a scoring system associated with problem severity, frequency, and detectability. This is shown in Table 13-1.

Table 13-1 Criticality Ranking.

Severity Rating | Who | Repair Time | Estimated Frequency | Problem Detectability
1 | Operator | n/a | Quarterly | Operator, little inspection required
2 | Maintenance | <1 shift | Monthly | Operator, w/ considerable inspection
3 | Maintenance | >1 shift | Weekly | Operator normally unable to detect
4 | Maintenance | >1 day | Daily | n/a
The score for a given problem was the product of the three factors, Severity × Frequency × Detectability. If a given problem was given a score of more than 4 by the group, then it was considered to require additional attention. Scores of 4 or less were those that operators could routinely handle and/or were of lesser consequence. For example, suppose a functional failure was detectable by the operator, e.g., a broken drill bit. Further, suppose it was occurring daily and could be repaired within one shift. Then it would receive a score of 2 × 4 × 1 = 8, a fairly serious problem. Very few problems occurred weekly, required more than one shift to correct, or were not detectable by an operator. Finally, this scoring system could obviously be further refined to provide greater definition on a given system or set of problems. You are encouraged to develop your own models and to use models already existing in your organizations for product or process FMEAs.

One finding of this review process was that a considerable amount of equipment really needed a complete overhaul. That is, it needed to be restored to "like-new" condition (a TPM principle), but this was found using RCM methods, by looking at the failure modes and effects that defeated the system's function. This equipment subsequently went through a "resurrection" phase, wherein a team of people—operators, electricians, electronic technicians, mechanics, and engineers—thoroughly examined the equipment and determined the key requirements for an overhaul, including the key steps for verifying that the overhaul had been successful.

A less intensive, but equally valuable, summary model was found to be effective:

Component/Process | Function | Failure Mode | Failure Effect | Cause: Prevention, Detection | Action | Rating (Severity × Frequency × Detectability), S × F × D
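As a worked illustration of this scoring scheme, the short sketch below encodes the rating scales of Table 13-1 and recomputes the broken-bit example. The threshold of 4 follows the text; the dictionary keys are shorthand paraphrases of the table entries, not Beta's exact wording.

```python
# Severity x Frequency x Detectability score, using the rating scales of Table 13-1.
SEVERITY = {"operator repair": 1, "maintenance, <1 shift": 2,
            "maintenance, >1 shift": 3, "maintenance, >1 day": 4}
FREQUENCY = {"quarterly": 1, "monthly": 2, "weekly": 3, "daily": 4}
DETECTABILITY = {"operator, little inspection": 1,
                 "operator, considerable inspection": 2,
                 "operator unable to detect": 3}

def criticality(severity: str, frequency: str, detectability: str) -> int:
    return SEVERITY[severity] * FREQUENCY[frequency] * DETECTABILITY[detectability]

# The broken-bit example: repairable within a shift, occurring daily, easily detected by the operator.
score = criticality("maintenance, <1 shift", "daily", "operator, little inspection")
print(score)                                                 # 2 x 4 x 1 = 8
print("needs further attention" if score > 4 else "routine operator handling")
```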
This model was then used to analyze the equipment and assign a criticality rating that dictated the priority of action required for resolution of the problems being experienced. See Tables 13-2 and 13-3 for specific examples of results of this analysis.
Table 13-2 Example of Three-Hole Drill Analysis.

Component/Process | Function | Failure Mode | Cause: Prevention (Severity; Frequency) | Detection | Rating S × F × D
3-Hole Drill | Drill 3 holes | dull bit | bit material: new bits (cycle time; weekly) | operator - easy | 1 × 3 × 1
3-Hole Drill | Drill 3 holes | broken bits | broken, poor alignment: align (shut down; daily) | operator - easy | 2 × 4 × 1

Notes: (1) Even though the dull drill bits have a score of 3, a quick call to purchasing eliminated this problem—no substitutes! (2) The three-hole drill was often a bottleneck process.
The criticality ranking was very useful in terms of setting priorities, but it was supplemented at times using the model shown in Chapter 11, particularly for the lower-ranking issues, e.g., those scoring less than 4.
Table 13-3 Example of TPM/RCM Results.*

Failure | Detection | Prevention | Action
Bearing failures | Noisy, vibration analysis | Improved installation, specs, operation, care, tools, commissioning | Maintenance, training
Quill damaged | Visual, noisy | Operator cleaning, lubrication, improved design | Improved installation, operator PM
Part concentricity | Quality control, visual | Operator inspection, improved design | Routine inspection, frequent replacement
Leaky hydraulics | Visual, pressure gauge | Operator tightening, better design | Operator tightening, improved installation

*Notice that a combination of operations and maintenance, as well as design and purchasing actions, was often required to truly address the problems. TPM and RCM principles were routinely applied, but often extended beyond this to get better application of existing methods, such as root cause analysis, and better tools, such as alignment fixtures and tools.
Figure 13-2. Decision-making model.
For example, in the model shown in Figure 13-2, we can compare value with difficulty and make a further judgment about our priorities. If the value is high, and the difficulty is low, this is typically a Priority 1. If the value is low, but the difficulty is low, this is typically a Priority 2—we can get a quick return on so-called low-hanging fruit. If the value is high, but the difficulty is high, this is typically a Priority 3—it may be worth a lot, but it’s going to take longer to realize that value. Finally, if the value is low, and the difficulty is high, this is typically a Priority 4, and is not likely to ever get done. Beta used this model, for example, in making a decision not to allow substitute drill bits for the three-hole drill. This was accomplished with a phone call to purchasing—the action had a relatively low value, but even lower difficulty.
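A minimal sketch of that value-versus-difficulty logic follows. The two-level high/low split is an assumption made for illustration; the priority numbers simply restate the quadrants described above.

```python
# Value-versus-difficulty priorities, as described for Figure 13-2 (high/low split is assumed).
def priority(value: str, difficulty: str) -> int:
    quadrants = {
        ("high", "low"): 1,    # worth a lot and easy to get: do it first
        ("low", "low"): 2,     # low-hanging fruit: quick return
        ("high", "high"): 3,   # worth a lot, but it takes longer to realize the value
        ("low", "high"): 4,    # not likely to ever get done
    }
    return quadrants[(value, difficulty)]

# The substitute drill bit decision: relatively low value, but even lower difficulty.
print(priority("low", "low"))   # Priority 2
```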
Some Additional Benefits Further, as they went through this analysis, they began to determine where to best apply certain technologies. For example, precision alignment of the machine tools turned out to be of critical importance throughout the plant, because in the view of the staff, much of the production losses were a result of failures caused by poor alignment. Further, it was also determined that if poor alignment were causing extraordinary downtime and costs, in the short term they could use vibration analysis (a so-called predictive technology) to confirm proper alignment and to anticipate problems and be pre-
pared to respond to them in a planned, organized way. In the long term, the engineers had to look for more constructive solutions by improving the basic design (a more proactive approach) and to incorporate into the purchasing standards requirements for better alignment fixtures and capability. Further, they considered how best to prioritize their production and maintenance planning efforts, anticipating where resources were best applied. What also came from the analysis was that they were doing preventive maintenance to little effect—either overdoing it on some equipment and achieving little uptime improvement, or underdoing it on other equipment and experiencing unplanned equipment downtime. In this manner they began to optimize their PM practices. The point is that if you don’t understand where the major opportunities are, then it is much more difficult to apply the appropriate technologies and methods to improve your performance in a rational way. By using a TPM/RCM approach the team found these opportunities more quickly. Finally, as a consequence, the new “model” for behavior among the staff is now changing to “fixed forever” as opposed to “forever fixing.” They also found that it was critical to their success to begin to develop better equipment histories, to plan and schedule maintenance, and to be far more proactive in eliminating defects from the operation, regardless of whether they were rooted in process, equipment, or people issues. This was all done with a view of not seeking to place blame, but seeking to eliminate defects. All problems were viewed as opportunities for improvement, not searches for the guilty. Using this approach, it was much easier to develop a sense of teamwork for problem resolution.
Results Process. The results thus far have been very good and are getting better. The cross-functional teams have identified areas wherein operators through their actions can avoid, minimize, or detect developing failures early, so that maintenance requirements are minimized and equipment and process reliability are improved. Moreover, more effective application of maintenance resources is now being applied to ensure that they are involved in those areas that truly require strong mechanical, electrical, etc., expertise in getting to the more serious and difficult issues. The process is in fact similar to how we maintain our automobiles, that is, we, as opera-
tors, do routine monitoring, observation, and detect developing problems long before they become serious. As we detect problems developing, we make changes in the way we operate and/or we have a discussion with a mechanic. As necessary, we bring our car to the mechanic, describing the symptoms for a more in-depth diagnosis and resolution of the problem. Similarly, we, as operators of our automobiles, can preclude failures and extend equipment life by applying basic care such as routine filter and oil changes, which don’t require much mechanical expertise, leaving the mechanic to do the more serious and complex jobs, such as replacing the rings, seals, transmission, etc. Equipment. The machines to which the methodology has been applied have been more available. For example, before this method was applied to one production area it was common that 6 out of 16 machines would be unavailable for production, with only 1 of those typically being down for planned maintenance. After the process was applied, 15 of 16 machines are now routinely available, with 1 machine still typically down for planned maintenance. This represents an increase of 50% in equipment availability. In another area maintenance was routinely called in for unplanned downtime. After application of this method production personnel were trained in routine operational practices that essentially eliminated the need for maintenance to “come to the rescue,” eliminating many unnecessary work orders, improving equipment availability, and reducing costs. The methodology continues to be applied in the plant with continuing improvement.
Case Study—Improving Packaging Performance

Beta's Omega plant was expanding its business rapidly and had recently announced plans for a major "rollout" of a new series of products, which would be introduced into several existing, as well as a few new, distribution channels. This rollout had necessarily placed a significant demand for increased production at the Omega plant. After reviewing the production capacities at the bottleneck, historical performance, staffing levels, raw materials suppliers, and so on, the plant manager had concluded that Beta could accommodate the added production demand. Unfortunately, the plant encountered substantial difficulty with meeting the increased production and, after just 2 months of production, was a month behind in producing the inventory needed for the rollout. After an analysis, the bottleneck, which was originally thought to be the extrusion process, turned out to be at packaging. The packaging line was significantly overdesigned and had a capacity rating of 50 units per minute, or more than twice what was needed to support the rollout. Unfortunately, it was only producing an average of about 10 units per minute, with the actual rate varying wildly from day to day, and never more than 20 units per minute. And, as you might expect, downtime levels were exceptionally high. The prevailing opinion was that the packaging line was very unreliable, and the production and maintenance managers were both under considerable pressure over the line's performance. A common, and critical, question put to the maintenance manager was, "Couldn't he and his staff just 'fix it' right?" Coincidentally, Omega was having a manufacturing excellence audit done at all its plants, applying principles of reliability and lean manufacturing. As you might expect, one of the primary areas for applying these principles was the packaging line.6
Business System–Level Failure Modes and Effects Analysis (FMEA) As with the previous case study, the technique applied in resolving the packaging line problems was one adapted from the failure modes and effects analysis (FMEA) methods or, perhaps more properly, reliability-centered maintenance (RCM) methods. Since insufficient information was available about the causes for production losses, other than anecdotes and “swags” at the nature of the problems, it was decided to be more structured in the analysis. As with the previous analysis, the adaptation of FMEA/RCM methods was to look at the packaging line as a business system, and to define a functional failure of the system as anything that resulted in production losses. A cross-functional team was assembled for the analysis that consisted of two operators (senior and junior), two maintenance technicians (electrical and mechanical), an engineer, a packaging line vendor representative, and a maintenance supervisor. The cross-functional team was introduced to reliability and lean manufacturing principles, as well as RCM and TPM principles, during an initial workshop. This was followed with a group exercise, where the following questions were posed:
1. What are the major failure modes that result in production losses? Note: These were analyzed one at a time. 2. What’s the approximate frequency of occurrence of each? Guesses were OK, but they had to get general agreement among the cross-functional team, and occasionally did some external validation of their initial estimates. 3. For each failure, what’s the approximate typical consequence or effect on lost production per year in hours? And/or pounds of product? 4. Are there any extraordinary repair costs for each of these failure modes? 5. What are the potential causes of these failures? Note that at this point they did not want to do a root cause analysis, but rather make a listing of potential causes for later analysis. 6. Can they detect onset of these failures to help avoid them, or better manage them? This process is depicted graphically in Figure 13-3.
Figure 13-3. Operational performance analysis using the FMEA/RCM Model.
With this information in hand they developed preliminary priorities as to which of these failures to address most urgently, i.e., those resulting in the greatest production loss.
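One way to turn the team's answers into such a priority list is sketched below. The failure modes, frequencies, hours, and the assumed value of a lost production hour are hypothetical placeholders, not the Omega plant's actual figures.

```python
# Ranking failure modes by estimated annual loss (all figures below are hypothetical).
VALUE_OF_LOST_HOUR = 4_000   # assumed gross margin per production hour, $/hr

failure_modes = [
    # (description, occurrences per year, hours lost per occurrence, extra repair cost per year)
    ("Bag jamming at the magazine", 250, 0.5, 5_000),
    ("Tare weight faults and overfills", 400, 0.25, 20_000),
    ("Cutter rigging at changeover", 100, 1.5, 2_000),
]

def annual_loss(mode) -> float:
    _, per_year, hours_each, repair_cost = mode
    return per_year * hours_each * VALUE_OF_LOST_HOUR + repair_cost

# Address the failure modes causing the greatest annual loss first.
for mode in sorted(failure_modes, key=annual_loss, reverse=True):
    print(f"{mode[0]}: ${annual_loss(mode):,.0f}/yr")
```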
Application of TPM Principles We also began to apply TPM principles by asking the following questions of the cross-functional team relative to each of the significant failure modes: 1. Does any of the equipment on the line require restoration to like-new condition? 2. What operator care and PM practices do we need to establish, e.g., inspection, calibration, condition monitoring, lubrication, or other basic care? 3. What maintenance PM practices do we need to establish, e.g., inspection, predictive maintenance or condition monitoring, calibration, lubrication, or other basic care? 4. Are there any specific maintenance prevention actions that we need to take in terms of modified operating or maintenance practices, design issues, etc? 5. Do we need to do any additional training of operators or maintenance technicians to help us avoid failures? 6. How do we measure our success? This process is depicted in Figure 13-4. Coincidentally, one simple measure that indicated our success was the number of bags of product shipped compared with the number of bags actually consumed per month. The difference represented how well the packaging line was performing relative to ideal. A high percentage of discarded bags indicated poor packaging line performance. The discarded bags were also a very high percentage of the cost of manufacturing the product.
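The bag measure itself is simple arithmetic, as the brief sketch below shows; the monthly counts are invented for illustration.

```python
# Packaging performance from bag counts (the monthly figures are invented for illustration).
bags_consumed = 120_000   # bags drawn from stores during the month
bags_shipped = 96_000     # bags that left the plant as finished product

discard_rate = (bags_consumed - bags_shipped) / bags_consumed
print(f"Discarded bags: {discard_rate:.1%} of bags consumed")   # 20.0%: poor line performance
```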
Figure 13-4. Application of TPM principle.
Clean, Inspect, and Restore the Packaging Line

After completing the initial review using the methodology depicted in Figures 13-3 and 13-4, and before doing any further analysis, they then proceeded with a "clean, inspect, and restore" effort on the packaging line, in light of what they had learned in reviewing the various production failure modes, their consequences, potential causes, and potential actions for operators and maintainers. This type of effort follows TPM principles and is similar to "detailing" a car. Of course, they first had to think through the logistics of the cleaning effort and assemble their cleaning tools, e.g., vacuum cleaner, rags, solvents, etc. Then, the team cleaned the machine, inspecting it for anything that might be creating a problem, e.g., loose bolts, miscentered guides, belts off track, dirty grease fittings, etc. In particular, this cleaning and inspection was done with the failure modes in mind! In doing this they found many things that could be corrected immediately, often as a team. Other things required a formal work order for later implementation, and still other things required design modifications, training, better procedures, or changes by their suppliers.
As they were cleaning and inspecting the packaging line, they found the following, for which a “hit list” was kept by the operators and technicians and consolidated by the maintenance supervisor. 1. The tare weight control had been severely contaminated with product, making it very unreliable. It had not been calibrated in some time and was not on any routine calibration PM. Operators had been routinely bypassing the “interrupt signal” resulting from the load cell faults in order to avoid stopping production, and there was a lot of pressure not to stop production. But this was also resulting in over- and underfills, routine minor stops, and product buildup, which in turn often slowed or stopped the entire production line. 2. The bag magazine had bent loading racks, resulting in improper alignment of the bags and jamming. Photo eyes and reflectors on the bag magazine were also in need of alignment to avoid fault readings and minor stoppages. 3. The two cutters/trimmers for the bag tops were different types, had different settings, different springs, and were “gummed” up with lacquer from the bags. Apparently improper spares and/or training had resulted in the use of different parts for the cutters, resulting in different settings to make them “work,” and no routine PM had been established for cleaning the cutters or replacing them when they were worn. Nor was there any standard calibration setting for the different style bags in use. Each was rigged daily to make it work, and a lot of lube spray was used to keep the cutting edges from sticking. The result of all this was nonsquared cuts, improper folds, poor glue lines, and generally a lot of rejected product. But pressure was so high to run the production line that downtime to correct this was not allowed. 4. Lacquer rubbing off the bags and accumulating at major wear points was a general problem throughout the production line, resulting in increased friction and routine jamming, but there were no routine operator or maintenance PMs for cleaning the lacquer to avoid this problem
5. The glue pot was set 10 degrees too high, and the glue nozzles had accumulated dried glue, all of which resulted in improper glue lines and bonding, and rejected product. 6. The mechanical spreading fingers were not properly set; one set was bent and had a bad sleeve bearing, resulting in improper spreading and folding, as well as gluing, and therefore frequent rejected product. 7. Several of the cams that control the advancement of the bags and other settings were caked over with grease, resulting in progressively deteriorating settings as the bags moved down the packaging line, increasing the probability of jamming. 8. Several guides were bent and/or misaligned, and bolts were loose or missing, resulting in misplacement of the bags for packaging, increasing the probability of jamming. 9. Several chains were loose, resulting in a jerking motion as the packaging progressed. 10. Several bearings were found to be worn out, unlubricated, underlubricated, or overlubricated. 11. The palletizer had misaligned or broken limit switches that were not calibrated, loose chains, faulty photo cells and reflectors, and a faulty solenoid. Additional influencing factors included poor conveyor tracking and poor knockdown bar and shrink-wrap settings.

In addition, further discussion suggested the following: 1. Bag specifications and bag quality were a problem. Marketing had insisted on a highly varied mix of colors, lacquer coatings, sizes, and thicknesses to target specific markets and customers. This is all well and good, except the bag supplier was often unable to meet the bag specifications for each, resulting in spates of jamming, rapid lacquer buildup along the line, inadequate bonding of the glue due to excess lacquer on the bags, etc. 2. Related to bag variety were additional internal factors in that there were no specific settings for the line for each product type. As a result, changeovers and startups after a shutdown tended to be a huge hit or miss effort. Run it, and if it
doesn’t work, try a new setting until you get something that does work. Of course some operators were better at this than others. And, of course, the production plan changed regularly to meet the revised perception of market demand. In one instance 2 hours were spent starting up a production run but produced only 36 bags of product, something we should have been able to do in 1 minute, not 120 minutes. 3. The hopper feeding the packaging machine appeared to be undersized for the duty now required by the rollout, resulting in inconsistent feed rates. 4. There were considerable differences between shifts as to exactly how each operator ran the packaging line. Given the lack of standards and the poor condition of the equipment, this was not surprising. Each operator was trying to accommodate the faults in the machine along with an extensive number of product types, leading to a wide range of operating practices. Or, as the saying goes, “Other than this, everything was running pretty good!”
The Result As they were cleaning, they managed to correct many of these problems on the spot, e.g., recalibrate the tare weight control, align the bag magazine, reset and calibrate all the photo cells and reflectors, clean the lacquer buildup, reset the glue pot temperature and clean the nozzles, clean the cams, tighten the chains, and so on. Other efforts required a work order and spare parts, e.g., restoring the cutters and folders to like-new condition and replacing certain bearings. Still other things required additional engineering review, e.g., the hopper size and rates; PM requirements, including new procedures, training, and schedules for PM; and the development of specific settings on the line for packaging various products. In just two short days—one day of review and analysis and another day of cleaning, inspecting, and restoring—production output was doubled! While it was momentarily painful for the plant and production managers to take the packaging line out of produc-
tion, it was essential to identify and solve the problems that had been identified. However, this was just the beginning. They now had to develop and agree to:
1. A specific operator care and PM program, one that involved operators doing basic care activities—tighten, lubricate, clean, calibrate, and monitor—to avoid or detect developing problems. One point to be stressed was that the operators had to own the reliability of the equipment, much the same as you and I own the reliability of our cars. To do otherwise is like expecting the mechanics at the garage to own the reliability of our cars—they can’t and won’t. They can help with diagnosis, repairs and restoration, PM, and so on, but we have to own the reliability of our cars. Likewise, in production units, operators must own the reliability of the equipment. 2. A specific maintenance PM program, one that involved the maintenance technicians, restoring the assets to like-new condition, ensuring consistency and adequacy of spares, doing loop calibrations, planning for shutdowns and startups, doing the more difficult and/or intrusive PM, and generally working with the operators to ensure reliability and manufacturing excellence in the packaging line. Note that the two PM programs were actually developed and agreed on by operations and maintenance. 3. A specific set of instructions for the PM program, one that included taking digital pictures of each PM area, adding arrows/circles, instructions, key steps, cautions, and so on. This PM instruction set was to be kept near each machine for ease of use by operators and maintenance technicians. 4. A specific manual of setup instructions for each area of the line for each of the different products packaged in each type of bag, i.e., for each stock keeping unit (SKU). The model for this could be characterized as follows:
Line Settings by Area

SKU | Magazine | Bagger | Cutter | Sealer
SKU No. 1 | A, B, C, etc. | D, E, F, etc. | G, H, I, etc. | J, K, L, etc.
SKU No. 2 | A, C, etc. | E, F, etc. | G, H, etc. | J, M, etc.
Each area would have specific settings for each SKU, which would be used for all changeovers as the reference point. It would also be required that each shift follow the same settings and procedures, or to log and justify the reason for not following the settings. Settings would be upgraded with improvements and changes to SKUs. 5. A training program to train all the operators and maintenance technicians in the proper PM, operation, maintenance, and setups for the line. All in all a good result was achieved in just 2 days, and a good plan was developed for continuing with the improvement and ensuring manufacturing excellence.
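If such a setup manual were kept electronically, it could be as simple as a lookup table keyed by SKU and line area, as in the hypothetical sketch below; the setting codes are placeholders, just as in the table above.

```python
# A hypothetical changeover lookup keyed by SKU and line area (setting codes are placeholders).
LINE_SETTINGS = {
    "SKU No. 1": {"Magazine": "A, B, C", "Bagger": "D, E, F",
                  "Cutter": "G, H, I", "Sealer": "J, K, L"},
    "SKU No. 2": {"Magazine": "A, C", "Bagger": "E, F",
                  "Cutter": "G, H", "Sealer": "J, M"},
}

def changeover(sku: str) -> None:
    """Print the reference settings each shift must use (or log a justified deviation)."""
    for area, setting in LINE_SETTINGS[sku].items():
        print(f"{area}: set to {setting}")

changeover("SKU No. 2")
```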
Summary The first step in combining TPM and RCM principles is to perform a streamlined FMEA/RCM analysis of a given production line, that is, at a business system level. The FMEA/RCM portion of the analysis defines a functional failure as anything that causes loss of production capacity or results in extraordinary costs. It is focused on failure modes, frequencies, and effects, and is extended to identify those failure modes that would be readily detected and prevented by proper operator action. It also details those failure modes and effects that require more advanced methodology and techniques, such as predictive maintenance, better specifications, better repair and overhaul practices, better installation procedures, etc., so as to avoid the defects from being introduced in the first place. The next step is to apply TPM principles related to restoring equipment to like-new condition, having operators provide basic care (TLC) in tightening, lubricating and cleaning, and applying more effectively preventive and predictive techniques. Operators represent the best in basic care and condition monitoring, but very often they need the support of more sophisticated problem-detection and problem-solving tech-
niques. These are facilitated by integration of TPM and RCM methods. In closing, it must be said that methodologies such as TPM, RCM, TQM, RCFA, etc., all work when consistently applied. However, as a practical matter, each methodology appears to have its focus or strength. For example, TPM tends to focus on OEE/loss accounting, maintenance prevention, and operator care. RCM focuses on failure modes and effects and ensuring system function. Both are good methodologies. Both work. However, in this instance, Beta found that combining the two actually led to a better process and to improvements in teamwork and cooperation at the production level, leading to improved performance and output and lower operating costs.
References

1. Nakajima, S. Total Productive Maintenance, Productivity Press, Portland, OR, 1993.
2. Nowlan, F. S. "Reliability-Centered Maintenance," United Airlines, NTIS Document No. AD/A066 579, December 1978.
3. Smith, A. M. Reliability-Centered Maintenance, McGraw-Hill, New York, 1993.
4. Moubray, J. Reliability-Centered Maintenance, Butterworth-Heinemann Ltd, Oxford, England, 1991.
5. Moore, R., and Rath, R. "Combining TPM and RCM," Maintenance Technology, Vol. 11, No. 6, June 1998.
6. Moore, R. "Clean, Inspect, Restore," Maintenance Technology, September 2001.
Implementation of Reliability Processes
14
Enumerate a few basic principles and then permit great amounts of autonomy. People support what they create.
Margaret J. Wheatley

If we are to successfully implement best practices for ensuring a highly reliable manufacturing process, then we must achieve buy-in and ownership of the process from top to bottom within a given organization. At the senior level, executives must "enumerate a few basic principles and then permit great amounts of autonomy," and, as Wheatley1 also states, ". . . minimizing the description of tasks, and focusing on the facilitation of processes for following those basic principles." Through this new-found autonomy, people will support what they create, because their participation in the creation process ensures their commitment. It is also essential that we understand where we are and where we want to be (through benchmarking), creating a kind of cognitive dissonance. And, understanding the gaps, we must develop our improvement plan, execute it with a team of committed people, and measure our performance along the way. These are fairly simple, yet powerful concepts, which are far more difficult in practice. Yet we have little choice in reality.

Where to start? Beta executives have measured their plants' performance as compared with benchmark plants, and have identified areas for improvement. (I hope you've been doing something similar as you've read this book.) This effort must be recognized as
only the beginning, not the end, and must not be followed by decision making that involves an arbitrary ax. Any dummy can wield an ax. It takes a real leader to define the basic principles and then facilitate others’ success. Once these benchmarks are established, best practices must be understood that support achieving benchmark performance. This is key to the success of the implementation effort. Most people in most organizations understand what best practice is. However, common sense just isn’t common practice, and most are not “allowed” to actually do best practice. I hope this book will lend credibility to what is already generally known by those who are actually required to do the work in the plants. In particular, Beta manufacturing executives have asked: 1. What is my asset utilization rate? As compared with benchmark? 2. What are the causes of losses from ideal rate? 3. What are we doing to eliminate these causes? 4. What is our unit cost of production? (Less so than total costs) 5. What is my personal responsibility for facilitating improvement? Note that this requires a comprehensive measure of asset utilization and losses from ideal utilization. Figure 1-5 provides a model for this measurement. With a model such as this, Beta is now beginning to systematically measure its losses and use those measurements to take steps to mitigate or eliminate those losses. If we measure it, we’ll manage it. Indeed, these measurements should drive the creation of our reliability improvement plan, which outlines our losses and our steps for resolution. A typical Beta reliability improvement plan could be outlined as follows:
Executive summary
Value of the improvements
Key steps to ensure achievement
Investment required ($ and people)
Operation’s rating relative to benchmark measures
Analysis of strengths and weaknesses
The plan
Improvement opportunities
Next steps, prioritized by importance, including milestones
Technologies to be applied
Team members, roles
Resources required (people, cost, tools)
Specific performance measures
Expected results
Basis for periodic reviews and updates
The following text describes a sample reliability improvement plan, reflecting the Whamadyne experience. Beta has found that the implementation process typically requires 2 to 4 years to accomplish, and the first year is normally the most difficult. New processes are being implemented, new roles are being established, and chaos may be common. However, if the few key principles and goals have been established, this effort will actually tend to be self-organizing. Establishing common goals using a common strategy will result in organizational alignment toward those goals. People can, in times of great need, discern what is important and act quickly upon it, particularly if the leadership has made those needs clear. Figure 14-1 depicts what may happen to maintenance costs during this 1- to 2-year period.2 It is common that maintenance costs will actually rise during the first year or so. This is because of any number of needs. For example, it is likely that (1) some equipment needs to be restored to proper operational condition, (2) training in various tools and methods is being deployed, (3) new systems are being implemented, and/ or (4) contractors may be required in some cases to help with the implementation process. Considerable effort is required during this period, and a rise in maintenance costs of some 5% to 20% is not uncommon. Only a few companies are known to have actually gotten most of this accomplished within the 1-year business cycle and
Figure 14-1. Effect on maintenance costs.
not incur incrementally higher maintenance costs (the bow-wave effect). However, this is considered the exception, and most executives would be at risk in “hanging their hat on that nail.” However, some relatively good news is also available, and that is, as shown in Figure 14-2, as these practices are implemented, the total opportunity cost of plant operation (the sum of maintenance costs and production losses) is reduced. Ormandy3 experienced similar results, but total costs were defined as the sum of maintenance costs plus production losses that had been occurring. As the practices were implemented, maintenance costs increased initially by some 15%, but with time were reduced by 20%. More importantly, however, production losses were reduced dramatically, yielding a net increase in gross margin contribution that was five times the increase in maintenance costs in the first year. During the next 4 years, production losses were cut to 25% of their original level. Of course, as losses decreased and output increased, unit cost of production came down and customer deliveries improved, allowing for greater profits and return on net assets, as well as better market positioning. Ormandy3 also reports that a pure cost-cutting strategy without changing basic practices also results in increased production losses. See Figure 14-3. Kelley4 reports results achieved by 500 companies that had implemented reliability and defect elimination programs as follows:
Figure 14-2. Total operational costs.
Repair expenses increased by 30% during the first 12–18 months, then declined to 50% to 80% below original levels after 24 to 30 months.
Machinery breakdowns declined by 50% to 60%.
Machinery downtime declined by 60% to 80%.
Spare parts inventory declined by 20% to 30%.
Productivity increased by 20% to 30%.
Kelley also reports (verbally) that as these practices and results were being realized at his company, sales increased by 60%. One final point on this, and that is that the typical bow-wave may last 1 to 2 years before dropping below levels at the initiation of a reliability improvement effort. Several companies have worked with the shop floor to create mini-bow-waves, that is, small efforts that provide a quick return, often within less than 3 months. The leaders of all organizations are encouraged to seek these small efforts, localized on the shop floor, such that the bow-wave is a series of small ripples. Indeed, using both techniques is critical to maximize the prospective gain within a plant. Using both ensures strategic man-
Figure 14-3. Effect of cost-cutting on production losses.
agement support, as well as shop floor support, shifting the culture of the company. The uptime optimization model described in Chapter 2, and demonstrated again in Chapters 10 and 12, provides a process for getting the implementation quickly started and for beginning the development of your continuous improvement plan. Implementation of these practices requires extraordinary leadership to enumerate a few basic principles and then encourage people to follow those principles to ensure the company’s success. Let’s look at the details of Beta’s Whamadyne plant and the process used to bring the plant to world-class performance.
Whamadyne’s Journey to Manufacturing Excellence Beta’s Whamadyne plant, a continuous process operation, was essentially sold out, that is, it could sell all the product that it could make. It had relatively good operating performance at 86% uptime, and yet, because demand was quite high, was having to work very hard to deliver on all customer orders in a timely manner. The failure of a bearing in one of its mixing reactors (and the loss of 2 days production valued at nearly $2M) led to a reliability review of the entire plant’s operation. As a result, an intensive reliability improvement process was established. This improvement
process identified nearly $20M per year in improved gross margin contribution, most of which would convert into operating income. Minimal capital investment was required for ensuring this improvement. Small increases were required in the operating and maintenance budgets (a small bow-wave). After being presented with this opportunity, as well as a plan for its implementation, the president of the division said: 1. “Any questions?” (There were none.) 2. “Is there any reason why we can’t do this?” (None was offered.) 3. “Let’s get on with it.” And well they did. As the following indicates, the results were remarkable.
Whamadyne's Operational Characteristics

Whamadyne is a continuous process plant that manufactures a chemical for industrial use in other basic manufacturing requirements. It makes about 50 million pounds of product per year, generating some $322M per year in sales and contributing a very healthy $129M per year in gross margin to the company. Table 14-1 summarizes Whamadyne's sales, costs, and gross margin contribution. Most of its manufacturing costs, about 55%, as shown in Table 14-2, are made up of raw materials. Other costs are typical of those expected in a manufacturing plant, including labor and fringes, overhead, energy, etc. It is interesting to note that when each cost category's labor is actually broken out, only 15% to 20% of Whamadyne's costs are directly associated with labor and fringes. This is highlighted because most managers seem to be highly focused on head count (reduction). Certainly, no one wants to employ more people than necessary, but a Pareto analysis of these data indicates that head count is not the place to begin. Rather, it may be more important to determine what is best for the business as a whole, or, as discussed in Chapter 1, to look at issues related to market share, uptime, return on net assets, and to put in place those practices and
strategies that yield success. That done, head count will come down as a result and not be a principal driver.

Whamadyne's maintenance operation is characteristic of what might be expected in most process manufacturing plants, with costs running at 10% of total manufacturing costs, a high level of reactive maintenance, and overtime (Table 14-3). Preventive and predictive maintenance are about average, and very little proactive effort is being exerted. Costs are split about evenly between labor/fringes and the sum of contractors and parts. Inventory turns on spare parts is 1. As shown in Table 2-1, this is pretty typical.

Whamadyne is currently operating at 86% uptime, and, because it is sold out, there are no losses for lack of market demand (Table 14-4). Hence, its uptime is equal to its asset utilization rate, depicted in Figure 1-5. This uptime is actually better than the average value reported for most continuous manufacturing plants of about 80%. However, Whamadyne must achieve 92% to achieve its financial goals, and become the low-cost producer of its product, at least as low as is reasonably achievable. This 6% difference between actual and required represents 525 hours over a 1-year period, or 3.3M pounds of product lost, which otherwise could have been sold at a profit. Let's consider this 525 hours for a moment. That's about 1 to 2 hours per day over a year. It's also over 3 weeks during that same year. At Whamadyne, it had always been relatively easy to rationalize that it's no big deal to give up 20 minutes during one shift in delayed product changeover, a half hour during another, or an hour for downtime because of a missing part, a few percentage points lower than peak ideal rate, etc. These seemed to be daily occurrences. Being pretty good could be hindering Whamadyne from striving to be the best.

Table 14-1 Whamadyne Plant Business Profile.

Throughput | 50M lb/yr
Average price | $6.45/lb
Average cost of goods manufactured | $3.87/lb
Average gross margin | $2.58/lb
Annual sales | $322M/yr
Annual gross margin contribution | $129M/yr
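The arithmetic behind the 525 hours is straightforward; the brief sketch below assumes a full 8,760-hour calendar year, consistent with the figures quoted above.

```python
# Uptime gap at Whamadyne, assuming a full 8,760-hour calendar year.
hours_per_year = 8_760
actual_uptime, required_uptime = 0.86, 0.92

lost_hours = (required_uptime - actual_uptime) * hours_per_year
print(f"Lost production time: {lost_hours:.1f} hr/yr")        # 525.6 hr, the ~525 hours cited above
print(f"...about {lost_hours / 365:.1f} hr/day, "
      f"or {lost_hours / (24 * 7):.1f} weeks/yr")              # ~1.4 hr/day, or ~3.1 weeks/yr
```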
Few managers would “give up” 3 weeks of lost production. Then why are many plants so willing to give up this 3 weeks per year a half hour at a time? Let’s not. Whamadyne isn’t today. The potential gain from best practices was estimated to be: 1. Increase in production from 86% to 92%, or 3.3M pounds additional throughput. 2. A reduction in overtime from 15% to 5%. 3. A dramatic reduction in rework losses from 0.5M pounds per year to 0.1M pounds. 4. A minimum reduction in maintenance costs of some $1.9M—prevent the maintenance from having to be done through reliability, don’t just do preventive maintenance. Rolling this information into a pro forma calculation results in the gains shown in Table 14-5. It illustrates that some costs are likely to remain constant, i.e., depreciation, environmental, overhead, and other. While environmental treatment costs may rise slightly, they were assumed as constant for this pro forma, likely offset by a slight reduction in “other” costs. However, if they achieve greater throughput, then unit costs will be reduced. Material costs increase proportionally to increased production. Note that this does not consider other initiatives at Beta to work with suppliers to develop strategic alliances and help them increase their plant reliability and performance, sharing in those gains. This initiative had only just begun when the reliability improvement effort began. Utility costs also rise proportionally with production, and this increase has not included any cost savings expected from a program to eliminate compressed air and nitrogen leaks, as well as steam leaks. Maintenance costs are expected to be reduced a minimum of 10%. As noted in Chapters 2 and 9, at some plants maintenance costs have been reduced by 30% to 50% and more with improved practices over a 3- to 5-year period. Whamadyne’s improvement opportunities were expected to result in cost reductions of 10% to 20%, and possibly more, but credit was only taken for 10%. Finally, direct labor costs were expected to be reduced by some 9%, primarily through reduced overtime. No downsizing was anticipated as a result of this effort.
Table 14-4 Production Performance Profile*

Planned production uptime:          92%
Actual production uptime:           86%
Lost production:                    525 hr
Lost production:                    3.26M lb/yr
Other production (rework) losses:   0.5M lb/yr

*Since Whamadyne was sold out, uptime and asset utilization rate were synonymous.
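The arithmetic behind Table 14-4 is worth making explicit. The following is a minimal sketch, not the book's own worksheet; the 8,760-hour year and the assumption that the 50M lb/yr throughput in Table 14-1 corresponds to the planned 92% uptime are mine, but they reproduce the figures above.

```python
# Rough check of Table 14-4 (assumptions noted in the text above).
HOURS_PER_YEAR = 24 * 365            # 8,760 hr

planned_uptime = 0.92
actual_uptime = 0.86
lost_hours = (planned_uptime - actual_uptime) * HOURS_PER_YEAR
print(f"Lost production: {lost_hours:.1f} hr")              # ~525.6 hr, reported as 525 in Table 14-4

# Assume the 50M lb/yr in Table 14-1 is the planned (92% uptime) throughput,
# so the hourly rate is 50M lb spread over the planned production hours.
rate_lb_per_hr = 50e6 / (planned_uptime * HOURS_PER_YEAR)   # ~6,200 lb/hr
lost_pounds = lost_hours * rate_lb_per_hr
print(f"Lost production: {lost_pounds / 1e6:.2f}M lb/yr")   # ~3.26M lb/yr
```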
Finally, all the team members who participated in this effort concluded that these gains were readily achievable, perhaps even conservative. However, as we all know, things happen, and the team wanted to present a plan in which it had high confidence.
Table 14-5 Pro Forma of Business Gains from Best Practice ($M)

                          Percent     Before      After
Material                    55%       $106.4     $113.4
Equipment depreciation       7%       $ 13.5     $ 13.5
Maintenance                 10%       $ 19.4     $ 17.5
Utilities                    6%       $ 11.6     $ 12.0
Direct labor                 7%       $ 13.5     $ 12.3
Environment                  5%       $  9.7     $  9.7
Overhead                     8%       $ 15.5     $ 15.5
Other                        2%       $  3.9     $  3.9
Totals                     100%       $193.5     $197.8

Sales                                 $322.5     $346.4
Gross margin                          $129.0     $148.6

Prospective gain: $19.6M
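The bottom line of Table 14-5 can be reproduced with a few lines of arithmetic. This is a minimal sketch of that calculation; the assumption that the 0.4M lb/yr reduction in rework becomes saleable product is mine, inferred from the sales figures above.

```python
# Minimal sketch of the Table 14-5 pro forma (see assumptions in the text above).
price = 6.45              # $/lb, Table 14-1
base_throughput = 50.0    # M lb/yr
extra_uptime = 3.3        # M lb/yr from lifting uptime 86% -> 92%
extra_rework = 0.4        # M lb/yr recovered as rework falls 0.5M -> 0.1M lb (assumed saleable)

sales_before = base_throughput * price                                  # ~$322.5M
sales_after = (base_throughput + extra_uptime + extra_rework) * price   # ~$346.4M

costs_before = 193.5      # $M, "Before" column total
costs_after = 197.8       # $M, "After" column total

gain = (sales_after - costs_after) - (sales_before - costs_before)
print(f"Prospective gain: ${gain:.1f}M per year")                       # ~$19.6M
```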
Whamadyne's Reliability Improvement Plan

With this nearly $20M opportunity in hand, let's look at the improvement plan that Beta's staff developed and implemented to ensure that these gains would be realized.
Background

A benchmarking exercise produced the following conclusions about Whamadyne:

1. The plant was in the middle to upper quartile in cost performance, plant availability, plant efficiency, and product quality—a pretty good operation, but not world class.
2. Within the maintenance department, there was insufficient application of predictive, or condition-based, maintenance; of reliability improvement teams; and of operator involvement in basic care and PM. Preventive maintenance, while on a sound foundation, needed considerable improvement. As noted, some 45% of maintenance was reactive.
3. The plant was running at 50% greater than nameplate capacity, achieved through a series of debottlenecking efforts, and further increases were under way.
4. The plant had a strong commitment to continuous improvement, good teamwork at the plant management team level, and good leadership from the plant manager and his staff.

As noted previously, the plant was also sold out, and could sell everything it could make, when an unplanned downtime event (the bearing failure) led to a review of its reliability processes. Perhaps as importantly, the company had a new CEO in Bob Neurath, who insisted on operational excellence to support new thrusts into marketing and R&D excellence. He had made it clear that "the company is competing in world markets where capacity almost always exceeds demand, necessitating that Beta be the low-cost producer of its products." Beyond operational excellence, Bob Neurath also, of course, insisted that the company put together a good, sound corporate strategy, and, interestingly, he held a very strong
view that once the strategy was established, “a bias for action would be far better than continued rumination about the strategy.” His view was, “Let’s get on with it”—marketing, R&D, and operational excellence, fully integrated. His expectations were quite high, and his staff rose to the occasion. In Whamadyne’s (and Beta’s) view, operational excellence required that their plants be designed and operated using the reliability strategy depicted in Figure 1-4, that is:
Design equipment for reliability, uptime, and lowest lifecycle costs (not installed cost).
Buy equipment for reliability and uptime using good specs, histories, and alliances.
Store equipment to retain its reliability, and operate stores like a store business.
Install equipment reliably, with great care and precision and validating quality work.
Operate the plant with great care and precision, using SPC, operator care, etc.
Maintain the plant with great care and precision, integrating preventive, predictive, and proactive methods, with each other, as well as with design, procurement, and operating practices.
In particular, at existing operating plants, Beta must use the knowledge base in operations and maintenance to continuously improve practices in each area. Beta must also foster teamwork and communication, ensure training in the practices that will lead to operational excellence, and regularly measure and display its performance against ideal standards. As one of its executives noted, "Performance measures must also expose our weaknesses."
Improvement Opportunities

Notwithstanding its good performance, Whamadyne had any number of opportunities (otherwise disguised, as the saying goes, as problems), such as:

1. Performance measures focused on downgraded product vs. "lost" product. Whamadyne strongly subscribed to the product quality mantra and had a heavy focus on downgraded product as a measure. However, this ignored other quality and reliability issues related to the total process and losses from ideal. The value of product losses from ideal was 20 times greater than that of downgraded product. The quality of processes, not just of product, needed attention.
2. Condition monitoring, both of the process (precision process control) and of the equipment (predictive maintenance), was insufficient for world-class performance. Additional effort was needed in the application of best practices and in the associated training, staffing, and technology.
3. Critical equipment had been identified, but detailed spares and histories were required at the component level to support Pareto and root cause analysis.
4. Improvement was needed in the standards and procedures used to ensure reliability in:
- New and rebuilt equipment
- Equipment installation and commissioning
- Balancing and alignment of rotating machinery
- Root cause failure analysis (as a routine, not an exception)
5. Spare parts and capital spares inventory management needed a reliability-based focus. At the time there was a strong drive to make better use of working capital and to improve stock turns. This led to predetermined parts reduction goals, which often ignored the plant requirements. Spares categorization and analysis (Chapter 6) led to a better strategy that supported reliability and capacity objectives.
6. While the plant was already operating at 50% greater than nameplate, even higher production rates were being planned to bring it to 100% greater than nameplate. There was concern that this would affect equipment reliability. Mitigating efforts included:
- Reviewing the mechanical design in light of new operating conditions and loads
- Baselining equipment condition using predictive technology prior to the upgrade
- Recommissioning at the higher rates
- Monitoring and trending thereafter to verify adequacy of the upgrade
7. PM/overhaul practices were reasonably well planned, but needed documentation in an improved procedure, to include:
- PM/overhaul requirements driven more by equipment condition
- Before and after condition review
- Improved planning to minimize outage duration
8. Staffing levels and expertise were insufficient to support a world-class reliability program. And, of course, capacity and reliability improvements were valued at ~$20M per year.
Recommendations

The reliability improvement team put forth the following recommendations, which were subsequently approved, and an action plan was developed to include resources, responsibilities, and schedules:
Performance measures
- Expand performance measures to include asset utilization rate and losses from ideal
- Measure and display unit cost of production
- Focus on minimizing the value of lost production (vs. just downgraded product)
- Develop and display graphical trends of key monthly business performance measures
Process monitoring and control
- Develop and implement SPC and process conformance methods
- Explore, develop, and implement advanced process control methods
- Develop and implement greater operator basic care and PM
- Integrate process and equipment condition monitoring for improved reliability
Predictive measurements and techniques
- Provide additional staffing for condition monitoring, including a reliability engineer
- Improve predictive practices, training, instruments, software, and support
- Improve communications; follow up on reliability activities
Critical equipment prioritization

- Continue to develop a detailed listing of equipment to the component level; track machine failure histories, parts use, and lead times
Proactive approach to minimize outage time
- Develop and implement reliability-based procurement standards and specifications:
  – New and rebuilt equipment
  – Bearing specs, gearbox specs
  – Installation and commissioning
- Develop methods for reducing the duration of outages without affecting basic reliability, e.g., early insulation removal
Spare parts/capital requirements
- Complete categorization of MRO inventory
- Develop a strategy for MRO reduction in each area, factoring in:
  – Equipment and parts use histories
  – Reliability, capacity, and turns objectives
  – Supplier alliances and reliability
- Enhance procurement standards to include: physical data, PM intervals and basis, bearing data and B10 (L10) life, gearbox service factor, vibration disks, lubricants needed, etc.
- Implement stores/motor shop supplier performance measures, including a receipt inspection program; improve storage practices
PMs (routine)

- Review the basis for PMs and revise; optimize PMs and align them to equipment condition

Higher rate effects on reliability/future requirements

- Mitigate the effects of higher rates through design review of the equipment and application of predictive and proactive tools, e.g., baseline, trend, recommission
- Proactively use this information to revise operation and/or design to mitigate the effects
Overhaul
- Document the overhaul planning and execution process in a formal procedure
- Develop an improved machinery condition monitoring capability to establish the details of overhaul work required and the success of overhauls/PM
- Set up a separate review team to develop a CMMS action plan
Manpower level and expertise
- Provide reliability engineer and technician resources
- Transfer nonreliability responsibilities out of the reliability function, e.g., safety officer
- Provide intensive training in equipment reliability-based principles/technologies:
  – Predictive maintenance technologies
  – Proactive methods
Investment required

The incremental investment beyond existing capital and operating budgets was actually anticipated to be minimal, in light of the prospective gain. It included:
- Allocation of existing resources to develop standards and procedures and to lead the improvement effort
- Capital investment, instruments, and software: $100K (capital)
- Operating budget for training, startup, etc.: $150K (start-up)
- Reliability engineer and technician required: $150K per year
Amortizing the capital and startup costs over a 5-year period gives an estimated annual investment of about $200K/year. Further, the benefits of the improvement process were expected to accrue over 6 to 24 months. Finally, perhaps the largest investment for Whamadyne was not in the budgets indicated, but rather in the allocation of existing resources to change the processes in place. This "intellectual and leadership capital" is perhaps the most expensive and the most difficult to come by. As Andrew Fraser opined, "Changing the habits of a lifetime is a very difficult process." Leadership at the plant in setting high expectations and in creating superordinate goals will ensure a common focus and facilitate changing these habits.
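For readers who want that figure spelled out, here is a quick check using the simple 5-year straight-line amortization stated above; the comparison against the Table 14-5 gain is my own framing, not the book's.

```python
# Quick check of the estimated annual investment (straight-line over 5 years, per the text).
capital = 100_000          # instruments and software (capital)
startup = 150_000          # training, startup, etc. (operating budget)
staff_per_year = 150_000   # reliability engineer and technician

annual_investment = (capital + startup) / 5 + staff_per_year
print(f"Annual investment: ~${annual_investment / 1_000:.0f}K/yr")    # ~$200K/yr

prospective_gain = 19.6e6  # $/yr, Table 14-5
print(f"Gain-to-investment ratio: ~{prospective_gain / annual_investment:.0f}:1")  # roughly 100:1
```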
Unexpected Obstacle

An interesting issue developed shortly after the president of the division said, "Let's get on with it," and the management team proceeded accordingly. As indicated, the plan presented to the division president included hiring two additional staff: a reliability engineer and a senior technician. Prior to the presentation, there was considerable concern about even requesting these positions, because this same president had declared a freeze on new hiring. During the presentation, it was pointed out that there was no one within the plant who had the requisite skills to do the job of a reliability engineer, nor was there anyone adequately trained in vibration analysis. The training was estimated to take 1 to 2 years to accomplish, substantially delaying the improvement process. Unfortunately, however, the human resources department, as the saying goes, didn't get the word that the division president had said, "Let's get on with it." Resolution of this issue took several months and delayed the program implementation effort. In the interim, the maintenance manager served as the reliability engineer, and contractors supported the vibration analysis effort.
Results—World-Class Performance

The results of the effort were remarkable:

1. Uptime improved to 93%.
2. Whamadyne became the low-cost producer (as low as reasonably achievable).
3. Production losses due to maintenance downtime were reduced to <1%.
4. Maintenance costs were reduced by over 20%.
5. Maintenance costs as a percentage of plant replacement value were near world class.
Constraints to the Improvement Process

In the Whamadyne example, much of the change and improvement was being driven from the top, making the change much easier. What happens when it's not being driven from the top? Or, even if it is, what if you don't agree with a particular change? What should you do? In The Fifth Discipline, Senge5 suggests that there are three basic constraints to organizational change:

1. Lacking the power to act
2. Lacking the organizational support
3. Lacking the deep personal commitment

While there may be others, here are some thoughts on how to overcome these constraints in a manufacturing environment.

Lacking the power to act. If you feel that you lack the power to act, then go ask for it. A statement of the obvious, perhaps, but many are intimidated by doing the obvious things that may simply have not been done before. Asking should consist of developing an improvement idea or plan, determining the benefit of the improvement, defining the key next steps that support achievement of the improvement, and estimating the investment required: people, money, time, etc. Take this to your boss, and/or go with your boss to your superiors and ask for the power to act on it.

Lacking the organizational support. Once you have approval, or even tentative agreement that your plan may be of value, seek the opinions and support of others, and modify your plan to suit their input. Getting agreement on only some of the issues does not render the entire plan without merit. Don't let what you can't do be an excuse not to do what you can do. With that support, return to your boss or superiors and continue to take down the obstacles.

Lacking the deep personal commitment. This is perhaps the most difficult. Look in the mirror. Recognize there will be difficulties, especially if the plan is outside the norm. Keep your objective firm, even as you may be deterred in its achievement. Be a leader by the example you set every day.
Summary

Finally, and as it should be, continuing improvements in the processes at Beta's Whamadyne plant have created an environment that ensures organizational success through:

1. Knowledge—of manufacturing best practices
2. Training—in best practices to ensure knowledge and understanding
3. Teamwork—to ensure communication and understanding
4. Focus—on the right goals for business success
5. Planning—to create a road map for knowing where you are and where you want to be
6. Measurements—to provide feedback and control
7. Leadership—to guide, direct, and sustain a continuous improvement environment
References

1. Wheatley, M. Leadership and the New Science, Berrett-Koehler, San Francisco, CA, 1992.
2. Thompson, R., et al. "Taking the Forties Field to 2010," SPE International Offshore European Conference, Aberdeen, Scotland, September 1993.
3. Ormandy, D. "Achieving the Optimum Maintenance and Asset Strategy and Ensuring They Are Aligned with Business Goals," IIR Conference, London, England, September 1997.
4. Kelley, C. D. "Leadership for Change: Developing a Reliability Improvement Strategy," Society for Maintenance and Reliability Professionals Annual Conference, Chicago, IL, October 1995.
5. Senge, P. The Fifth Discipline, Currency Doubleday, New York, 1990.
15
Leadership and Organizational Behavior and Structure
Ubuntu—A Zulu word that describes a group characterized by dignity, mutual respect, and unity of purpose. From a South African Company’s Mission Statement
Leadership

Leadership, it is generally agreed, is an essential and even dominant element in a company's success. So, what is it, and how do we get it? Can we just go out and buy some, send our people to leadership school, set up mentors in leadership, read the latest books on leadership, or what? As we all know, it's not that simple. Leadership is a bit like the description of pornography offered by Justice Potter Stewart: "It's hard to describe, but I know it when I see it." Defining leadership and learning to be a leader can be a very difficult task. Yet, when we face difficult tasks, we should not shirk them. Even if we fail, we will have learned something in failing. John Widner said, "I write to teach myself." So, at this point in the book, I'd like to take a bit of a diversion from its format and personalize it more. I'll be writing about leadership through an amalgam of my personal experiences: as a cadet at West Point; as a manager; as president of a company; as a parent of six children (one of my greatest teachers); as an observer of other leaders, both good and bad; and, finally, as one who advises others on leadership and management. In articulating this experience, I'll be teaching myself along the way, and hopefully helping us both learn in the process.

My definition of leadership is as follows: Leadership is the art of getting ordinary people to consistently perform at an extraordinary level. This is not to suggest anything pejorative in the use of the word ordinary, but rather the opposite. Ordinary in this context is more in line with the principles illustrated in the book Citizen Soldier, where ordinary citizens became soldiers during World War II and defeated professional armies. These citizen soldiers won because they were inspired by a higher sense of purpose related to freedom and our American way of life. Many sacrificed everything, much the same as our founding fathers.

Within organizations, these same principles and ideals apply. People want to feel that they are part of something greater than themselves. This enables a certain pride that goes beyond their personal performance, and reinforces and sustains that performance. They must also enjoy their work, perhaps not as in seeing a good movie, but as in taking satisfaction in a job well done, particularly the difficult ones. Finally, and most importantly, they must trust that their leaders are acting in the best interest of every employee and are always dealing openly and honestly with them, particularly during the difficult times. The opposite of this is their believing that management is only interested in management and the shareholders. So, pride, enjoyment, and, most of all, trust are paramount in a good leadership model.

How do we help employees feel a part of something greater than themselves? A friend of mine noted that we need superordinate goals: lofty goals that are greater than the individual, but which the individual feels a part of. This is best illustrated by a story I once heard: Three men were working in a rock quarry, pounding rocks. It was very hard, sweaty, arduous work. A fourth man happened through the quarry, and, coming upon the first man, asked, "What are you doing?" To which the first man replied
in an irritated tone, "What, you blind? Poundin' rocks, that's what they pay me to do." Sensing that the conversation had ended, the fourth man walked on, came across a second man, and politely asked the same question, "What are you doing?" "Oh, I'm making a living for my family," was the reply. "One day my son will go to college, and hopefully he will have a better life than mine." The fourth man went on, finally coming to the third man working in the rock quarry, and, of course, asked, "What are you doing?" To which he replied proudly, "Oh, I'm helping build a temple for the greater glory of God. It's really an exciting effort, this is."

Which man had superordinate goals and felt a part of something much greater than himself? They're all pounding rocks, but each has a different sense of purpose about his efforts. Clearly, the third man has superordinate goals, and even the second saw the fruits of his labor building a future for his family. So, what are your company's "superordinate" goals? Are they clear? Does each employee feel part of a greater purpose? How can you as a leader create and build the sense of purpose that goes beyond the day-to-day humdrum that most jobs involve? How can you sustain it? How can you remove the obstacles to their success?

While I was president of a technology company, we developed a mission statement that we "lived" (more on mission statements later), and that mission was, "Changing the way the world performs maintenance." We were a growing company, we focused on ensuring the success of our customers, and we were changing the way the world performed maintenance. I think most of our employees felt as if they were part of something greater than themselves. Certainly I did. I also think that most felt a sense of pride, enjoyment, and trust in their leadership. And we were a very successful company, not so much because of any individual or element, but because we were aligned in a common purpose of being the best maintenance technology company in the world, changing the way the world performed maintenance.

A few other personal points regarding leadership follow, not necessarily in any order of importance. First, while president of that company, I assumed a personal responsibility for the well-being of every employee, and particularly the newer ones. They were depending on my good judgment, along with that of the other members of the management team, to make sure the company succeeded, so that they could too. My bad judgment would lead to bad things happening in their lives, for which I felt responsible. Most had family responsibilities, husbands and wives, children, mortgages, and generally responsibilities beyond their employment, but which would be severely impacted by their employment, or unemployment. I had also personally been laid off, had felt the pain involved in that experience, and wanted to avoid that for our employees. I think it would be a good thing if all CEOs had personally been laid off at one point in their lives, at a point where their income was vital to their family's well-being. It would provide them with a different perspective.

Second, from the CEO to the shop floor, the most telling form of leadership is the example we set. When raising my children, I did on at least one occasion wash my children's mouths out with soap as punishment for saying "dirty words." One day while I was working on my car, I busted my knuckles, and the children were in earshot as I started using every "dirty word" in my rather full repertoire. In the first instance, I was managing their language. In the second instance, I was "leading" them to use certain language. Which do you think they were most likely to follow? Visible examples are clearly the most powerful tool for exercising leadership.

Finally, in closing this section, I'd like to return to the beginning and repeat my definition of leadership, that is, "inspiring ordinary people to consistently perform at an extraordinary level." Suppose we surveyed all the people in a given company and asked them a fundamental question: "In your peer group, would you rate yourself as below average, average, or above average?" Most, perhaps as many as 80%, would reply that they are "above average" in their peer group. Clearly, this is a statistical impossibility. But what it does suggest is that people have confidence. What we as leaders must supply is direction, training, tools, expectations, measurements, and that greater sense of purpose that lets them live up to their belief that they are above average, and they will meet those expectations.

Let's move on to other models for leadership. There are any number of books written on leadership,1-3 including one by Warren Bennis.4 In his book, Bennis provides the following comparative model for leadership and management, which demonstrates considerable insight into how they differ:
Leaders:                        Managers:
Challenge the status quo        Accept the status quo
Trust                           Control
Innovate and develop            Administer and maintain
Ask what and why                Ask when and how
Do the right things             Do things right
Watch the horizon               Watch the bottom line

and we might add:

Set the example                 Make public examples
Which is more important, leadership or management? Could we do without one or the other? It should be fairly clear that leadership is usually more important than management, since management is more easily taught, and most companies typically have more managers than leaders. However, both are necessary to achieve excellence. The ultimate question might be: What can you do to lead your company, department, or position to operational excellence, while still managing day-to-day operations? Other models for leadership are discussed in the following text, including several suggestions on how leaders can align the organization, and a summary of what these models have in common.
Vision, Reality, Courage, and Ethics

Peter Koestenbaum, in Leadership—The Inner Side of Greatness,5 describes leadership as the four points of a "diamond," the points being vision, reality, courage, and ethics. He suggests that leaders must have vision to know where they want to "lead" the company, but that this vision must be compared with reality in order to assess the gaps between the vision and the reality and to develop a plan for closing those gaps. The leader must have the courage to make the changes needed to close those gaps; risk is inherent in any change process, and there will always be those who oppose the change, since they have prospered in the old order of things. And all of this must be based on the ethical treatment of people, which is less about noncriminal behavior than it is about being honest with people, developing mutual trust, and treating others with appreciation, dignity, and respect.
Personal Humility and Professional Resolve

Jim Collins, in Good to Great,6 offers the leadership model outlined in the following text, developed from a review of several hundred companies' performance over a period of 30 years; it characterizes the leadership of the most successful of these companies. He states that the Level 5 Executive Leader (the very best) builds enduring greatness through a paradoxical combination of personal humility and fierce professional resolve. The very best leaders attend to the people first, and the strategy second. They recognize that good-to-great transformations take time. They understand three intersecting circles: (1) the area in which the company can be the best in the world; (2) the business economics around what works best in the business; and (3) what best ignites the passion of its people. Finally, he also indicates that they have very disciplined cultures: disciplined people, which requires less hierarchy; disciplined thought, which requires less bureaucracy; and disciplined action, which requires fewer controls.
Building Character through Principles

Larry Donnithorne, in The West Point Way of Leadership,2 identifies the key components of leadership as courage, determination, integrity, and self-discipline. Courage, for example, would involve choosing the harder right instead of the easier wrong, and putting the needs of the organization ahead of personal desires. He characterizes integrity as living by one's word, and building character through principled leadership, that is, basic principles and values take precedence over personal desires. Determination, of course, helps one to work hard and stay focused; and, finally, self-discipline is a hallmark of the West Point model. Self-discipline facilitates focus and resolve toward the goals at hand. Having been through the West Point experience, I would like to add something that has had a key influence on my career. One key concept that was stressed there was that the unit commander is responsible for
everything his or her unit does, or fails to do. My first reaction to this was that it was unfair. How could I control the behavior of 10, 100, 1,000, or 10,000 individuals? What if some “nut case” went berserk and destroyed my unit’s performance? Was I responsible for this? With time, however, I came to realize that in spite of these misgivings about this principle, it forced me to be proactive in anticipating problems, looking ahead for potential failures in performance, and creating an environment that engaged the people in doing the things that needed to be done, and, perhaps more importantly, looking for ways to anticipate problems and generally improve the organization. No, I couldn’t control everything in the absolute sense, but I could leverage the knowledge of the people to minimize the possibility of failure and maximize the opportunity for success.
Aligning the Organization

Peter Wickens, in The Ascendant Organisation,7 states that the greatest risk for an effective leader is to be surrounded by mediocrity, and the greatest responsibility is to appoint good people; that leaders must have deep personal ambition, an ability to think ahead, and a drive to ensure that their ambitions, both personal and business, are realized; and that it is paramount for leaders to align the organization so that all are working together to achieve the same objectives. I couldn't agree more with Wickens.

How do we align the organization? My suggestions are provided here. While you may not use all of these, and you may have additional ones that are even better suited to your organization, these will at least get the thinking started and perhaps validate some current practices:

1. Create a greater purpose or vision for the organization—we're going to be the best at . . .
2. Align this vision to a common strategy with common goals and respective roles; communicate this often.
3. View manufacturing excellence, safety, and reliability as an integrated process—they're indivisible.
4. Ensure that operating units "own" the reliability of the plant; maintenance supports reliability.
5. Foster a partnership for manufacturing excellence between production, maintenance, and others, including a partnership agreement.
6. Establish several common measures for production and maintenance, e.g., OEE, downtime, maintenance costs, PM compliance, etc., for creating a greater sense of teamwork and common purpose.
7. Have at least one "away day" for the management team per quarter—review strategy, progress, successes, failures, etc.; update your plan.
8. Foster empowerment: set up routine structured improvement time for cross-functional teams.
9. Establish a Manufacturing Excellence Forum: a diagonal slice of the organization to communicate best practices and identify any obstacles to their implementation.
10. "Make it easy to do the right thing; and hard to do the wrong thing" (Winston Ledet).
11. Ensure rewards—focus on area/plant success and achievement more so than the individual, while still recognizing individual contribution as appropriate.
12. Have measurement systems that identify your weaknesses; they're your improvement opportunities.
13. Do NOT focus on head count reduction. Only 5% to 25% of costs are typically labor; ask, "How do we use our people more effectively to reduce the other 75% to 95% of our costs?"
14. Understand that organizational culture is more important than structure; avoid the "silo" mentality.
15. Develop a set of questions for executives to ask when visiting plants to reinforce all these principles.
Common Leadership Characteristics

These models, as well as others, have several common characteristics:
Leadership requires vision, or a greater sense of purpose, albeit grounded in reality.
Leaders put people first and treat them with trust, dignity, respect, and appreciation.
Leaders are trustworthy, true to their word and principles. Leaders have a passion for excellence; set high work and ethical standards; and create a caring, disciplined, proud environment.
Leaders set the example and have the courage to support their basic values and principles.
Leaders align the organization with these values.
Leadership may be in the eyes of the beholder, and hard to describe, but you know it when you see it. I hope these common traits help you see your way to becoming a better leader, while still being a good manager.
Organizational Issues

Experience has shown that almost any organizational structure within reason will work, if direction and expectations are clearly articulated by the leadership to create an environment for teamwork, one that encompasses a common strategy and common goals among all within the organization. Most of us recall reading about the Roman soldier who lamented the reorganizing of the army every time it ran into a serious problem:

We trained hard, but it seemed that every time we were beginning to form up into teams we would be reorganized. I was to learn later in life that we tend to meet any new situation by reorganizing; and a wonderful method it can be for creating the illusion of progress while producing confusion, inefficiency, and demoralization.
Petronius Arbiter, Satyricon, First Century A.D.
In many companies that seems to be the case today. Reorganizing seems to be the thing to do, but often without adequate consideration of the results desired or, perhaps more importantly, without the patience to achieve those results.
Next, we’ll review some experiences and models used by Beta to ensure good organizational behavior. Granted, there are times when organizational structure requires modification to more clearly define roles and responsibilities. But most companies are in a reasonably good position to make those judgments, hopefully not just to give the illusion of progress. The way the boxes are arranged in an organizational chart is far less important to an organization’s success than is organizational behavior, which is driven by the leadership, who must develop and communicate a common strategy, create a common sense of purpose, and ensure focus on common superordinate goals.
Empowerment

In the last several years and even decades, the autocratic, top-down management style has given way to a more team-based approach to management. Most are probably thankful for this. We all have likely heard and used concepts such as self-directed work teams, empowerment, teamwork, high-performance work systems, etc. All these have the common attribute of teamwork, or working collectively toward a common goal. However, there have been several circumstances at Beta where the concept of teamwork has not been effective, and several where it has been. Let's look at their experience.

Empowerment appears to have come into popularity, and then waned, not because the concept isn't valid, but because it has been improperly applied. At one of Beta's small divisions the president routinely advised all the employees that they were closer to the problems than he or his management team, and as such they should solve them. This worked very well, save for a notable exception. One of the employees felt that the health insurance policy for the company had several flaws. So he took it upon himself to rewrite the policy and distribute it to the entire company, without consulting the president, the human resources department, or anyone else, except perhaps his immediate associates. The employee was admonished, but not disciplined. Why not, you may ask? The employee had taken the president literally and acted. Hence, he exposed a common problem with empowerment. As someone once remarked about empowerment, "The way we do empowerment, we get dumb decisions a lot faster." Based on the Beta experience, to be effective, empowerment requires the following:
1. Clear purpose and direction
2. Relatively clear boundaries within which this power can be exercised
3. Confirmation that those empowered have the skills to effectively exercise this power
4. Performance measures in the context of the empowerment that measure its success
5. A feedback loop to routinely review the successes, problems, and changes needed
6. Flexibility to allow the boundaries to grow and contract as necessary
7. Leadership to sustain clear direction for those empowered

This view is entirely consistent with recent studies, which indicate that workers are driven by work satisfaction, not pay. In a 2001 study by Randstad North America and Roper Starch Worldwide,10 the top 10 definitions for success in the workplace, according to the workers surveyed, were as follows:

Definitions for Success in the Workplace

Being trusted to get a job done                            91%
Getting the opportunity to do the work you like            84%
Having power to make decisions that affect your work       81%
Finding a company where you want to work a long time       76%
Getting raises                                             74%
Having flexibility                                         67%
Knowing you have many job opportunities                    67%
Getting promotions                                         66%
Getting praise and recognition                             65%
Being asked to lead/manage other people                    60%
Empowerment as a Disabler

Beta's Hueysville plant went through a team-building and delayering exercise of the kind espoused by many. Unfortunately, this exercise did not have the desired effect. Uptime and unit costs did not improve, and a strong case could be made that performance actually deteriorated. As a result of this delayering and team-building exercise, one manager had 26 people reporting to him; another had over 30. After all, the logic said, self-directed work teams could figure out the problems and solve them, so there was no need for a lot of management structure. That would be true, except for some serious obstacles at Hueysville:

1. Work processes were not standardized in adequate detail to ensure consistency of process among the staff.
2. Worker skills were at different levels, and, as you might expect, understanding of work processes varied, resulting in different performance and outputs.
3. Many team leaders had little or no supervisory or leadership training or experience.
4. The plant management team changed completely about every 2 to 3 years, meaning that every 6 months someone new (with new ideas and direction) would arrive.
5. A clear strategy and set of goals had not been communicated, other than something to the effect of "Do more better."

Until these issues could be addressed, supervisors were reinstated, a common strategy was created, clearer goals were established, and the corporation made the strategic decision to keep its manufacturing staff in their positions for longer periods to create more stability in the organization. Staying in a management position for too short a time can result in confusion about expectations and direction. Staying in a position too long can result in stagnation and a lack of new ideas. What's optimal? It depends. Generally, however, it is suggested that it's more than 2 to 3 years but less than 8 to 10 years.

At Beta, it had long been held that it should hire the brightest minds from the best universities for its engineering and management ranks. Nothing is fundamentally wrong with this. Over time, Beta
had also developed the practice of moving its bright new people into plant management positions fairly quickly. Many of these young managers viewed this 2- to 3-year stint in manufacturing management as part of a career path toward an executive position. Few expressed the desire to stay in manufacturing as a career. Many even expressed the opinion that if they stayed in a direct manufacturing position more than 4 years their career was at a dead end. How can manufacturing excellence be established within a company where most of the best managers don’t view manufacturing as a career, but rather as a stepping stone? Beta is working hard to change this practice, which has evolved over the years, including bringing the pay scale for manufacturing to a par with other career positions, and reemphasizing the requirement for manufacturing excellence to ensure business excellence. This institutionalized view, however, will take some time to change.
Teams

Most organizations profess to believe in teams and to recognize that teamwork and cooperation will more likely facilitate the organization's success than will operating with a "silo" mentality. As Margaret Wheatley observed, "Information must be freely generated and freely exchanged if an organization is to prosper." One method for creating this free flow of information is creating a culture of teamwork. As someone once said, "If you want teamwork, then you have to give the team work to do, work that benefits the company, work that is challenging and rewarding (or perhaps rewarded)."

Following are several models for creating a culture of teamwork, based on an aggregate picture of Beta International. Beta has yet to achieve the full sense of teamwork described here, but it is working hard to do so. Teams need common goals, strategies, rules, practices, and a leader or coach who can guide the team, foster a teamwork culture, and ultimately ensure the team's success. The reliability process depicted graphically in Figure 1-4, and described in previous chapters, is fact based, goal oriented, and requires extensive cooperation among maintenance, engineering, production, purchasing, and human resources. It represents a strategy for creating teams and integrating the various functions into a comprehensive manufacturing process. As we've seen, the strategy itself consists of optimizing
uptime and minimizing losses through the comprehensive integration of the processes in the way we design, buy, store, install, operate, and maintain our plants. Achieving excellence in this continuum requires that our staff work as a team, with the goal being to maximize production capacity and minimize production costs, allowing the company to be the low-cost producer of its products.
An Analogy

Basketball is a team sport that makes a good analogy for developing a manufacturing team. If you don't like or understand basketball, feel free to pick your own sport and develop an analogy. At the broadest level, in basketball the primary goal is to score more points (measures) than the opposition in a given period of time. Of course, there are other supporting measures, such as shooting percentage, rebounding, turnovers, etc. Achieving this goal requires an offensive and defensive strategy, such as a "run and gun" offense, a man-to-man defense, etc. The rules (boundaries) that provide a process for playing the game are generally well known, such as fouls, lane violations, 3-point lines, etc. The practices (roles and responsibilities) relate to execution of the strategy, such as high post, 3-2 offense, fast break, pick and roll, etc.

But in the end the team must work effectively together to maximize the opportunity for success. Team members recognize that they must do both: execute their positions well and support their team members' success through assists, rebounds, screens, guarding after a pick, etc. If each member doesn't play effectively at his or her position and as part of a team, the team is not effective. For example, if each individual operates as if, once the ball is passed, his or her responsibility is over, or as if the entire game rests on a single individual, then the team is more likely to be mediocre. Each team member recognizes a responsibility to emphasize his or her strengths, mitigate his or her teammates' weaknesses, and focus on the objective of winning. Finally, all teams have a coach or leader to whom they must report and be accountable, even those that are self-directed, such as a basketball team. Likewise, in the best plants, the staff who make up self-directed work teams recognize the need for goals, measures, boundaries and rules, strategies, practices, and a reporting structure for accountability. And, even in the best companies, few argue with the coach, even when they are self-directed.
Current Teamwork Practices

Teams in a manufacturing environment should have much in common with basketball teams, but in most plants the teamwork needs considerable improvement. Let's take an extreme, although not uncommon, example. The goal is to produce as much product as possible, at the lowest possible cost (measures). For that, production is the star; the principal burden of the company's success rests on production. The strategy is that each department has a specific function, and hand-offs occur between functions. Feedback and communication are frequently limited to crises. Production produces. Engineering engineers (generally in new capital projects). Purchasing purchases (getting orders placed and parts and material delivered). Maintenance maintains (as quickly as possible to support production; some companies have even set up maintenance "swat" teams to quickly get equipment back on line). If everyone does his or her job right the first time, production could make all the product necessary. The rules are generally known, according to the culture of the organization, that is, certain things are done, others are not. The processes and practices tend to be historical and institutionalized: "That's the way we've always done it." The coach is the plant manager, who generally (and perhaps rightfully so) is closely aligned with the production manager, because his or her biggest interest is production and production costs, including maintenance.

This approach leaves several opportunities for improvement. Most people, most teams, are imperfect, and the problems associated with imperfection tend to be cumulative, having a compounding effect. Having very discrete definitions of job function, which do not include the interface and backup responsibilities, can lead to unfulfilled expectations. Simply doing my job, or assuming that once I've completed my task and passed the ball I'm finished, ignores all the things that can happen in a complex environment. The best plants (1) follow up on the hand-off; (2) catch errors on the rebound; (3) signal a specific plan, etc.; and (4) constantly focus on working as a team to achieve the objective of winning. When job descriptions are too definitive, however, it also creates a box mentality: "I can't stray very far from this box, or trouble can develop just the same as when job descriptions are too lax." Consider, for example, the maintenance box, whose job is to repair
things; the purchasing box, whose job is to buy things; and the engineering box, whose job is to engineer things. How often have we heard, "If only maintenance would fix things right in the first place, we could make more product"; "If production wouldn't run things into the ground, and would give us enough time to do the job right in the first place, we wouldn't have these problems"; "If purchasing would just buy the parts we need, we wouldn't have these problems"; "If engineering had included maintenance in the design, we wouldn't have these problems"; and so on. This box mentality can lead to poor teamwork and mediocre performance.

A word of caution regarding teams and teamwork is also in order before going on to the next section. Empowerment and teamwork are not a panacea. As noted above, empowerment requires certain elements to be effective. Lencioni also offers five cautions regarding how teams can become dysfunctional.8 These dysfunctions begin with an absence of trust at the base of his pyramid representation, continuing upward to the next levels: fear of conflict, then lack of commitment and avoidance of accountability, and finally inattention to results.

Let's review his points, beginning with the last one. As with empowerment, teams require clear purpose and direction. Getting results is typically the sole purpose for the team's being, and a lack of attention here should be unacceptable. If we do have a clear purpose (getting specified results), we're more likely to be held accountable. However, being held accountable requires that each team member hold each other member accountable, and question any perceived lack of accountability, or commitment, as the team makes progress (or not) in getting its results. The ability to raise these questions regarding accountability and commitment requires that each team member be able to deal with conflict with minimal fear, in a courteous, well-reasoned manner (some fear of conflict is inherent but should not be paralyzing). To achieve all of this requires trust among all the team members, trust that each member is dealing with the others honestly. The team leader must establish this as the basis for all that the team does, addressing each of the potential dysfunctions as an inherent part of leading the team.
The Reliability Process—A Better Way

The approach described would be better if defined in a slightly different, team-based relationship. The goal is to be the low-cost producer of the company's products (the cost of quality is included in this concept). This might require an uptime level of 95%, inventory turns on spare parts of greater than 2, and an OSHA recordable injury rate of less than 1. Alternatively, you may ask:

1. What unit cost of production do I need to achieve to ensure superior performance?
2. What uptime will ensure that? What are my losses from ideal uptime?
3. What other fixed and variable cost reductions are necessary to ensure this?
4. Where are my next steps for eliminating the losses?
5. What is my personal responsibility for achieving this?
6. What do I need to provide to others, and from others, to ensure meeting my responsibility?

Note that the last question is new and bridges the gap for ensuring teamwork among individuals. The strategy is to maximize equipment and process reliability, and therefore to maximize manufacturing capacity, and, as a consequence, to achieve minimum manufacturing cost. It is important to emphasize that when the primary focus is on maximizing equipment and process reliability, minimizing costs will generally result as a consequence, especially when those expectations are created and supported. The rules (boundaries and processes) are simple: Focus on the goal of being the low-cost producer through maximum uptime, and do everything possible to work with the other team members to achieve that objective, even at times deferring to the greater good of the company what might otherwise be good for you or your department.
The practices (roles and responsibilities discussed in the following text) involve doing those things that foster teamwork to achieve the objective of becoming the low-cost producer, and applying best practices so that this goal is achieved. Precision repair becomes inculcated with reliability practices. Fundamental to this is that everyone is focused on this objective. Under the constraints of any given plant, then:

1. Maintenance's job becomes that of supplying maximum reliable production capacity. Repair becomes inculcated with reliability practices. With help from production, engineering, purchasing, and human resources, maintenance becomes focused on preventing maintenance, as opposed to preventive maintenance, putting reliability into the plant.
2. Purchasing's job becomes that of providing the most reliable, cost-effective equipment and supplies possible. Machine histories, for identifying suppliers with low reliability, are used as part of the purchasing process, and standards that include reliability requirements are defined. Low bid becomes secondary to value, as defined with the help of production, engineering, maintenance, and human resources.
3. Engineering's job becomes that of designing the most reliable (and maintainable) cost-effective system possible. Low cost becomes secondary to life-cycle value, as defined with the help of production, purchasing, maintenance, and human resources.
4. Production facilitates the production process, with the plant manager coaching. Operators become involved in the reliability of their equipment through a sense of ownership (like owning a car and wanting it to run 300,000 miles); by doing simple routine maintenance; and by working with maintenance, engineering, and purchasing to get to the root cause of problems.

Production is still king, but everyone is an integral part of the king's team, and the king is fairly benevolent. Production no longer carries the entire burden of being "the star." Every member of the team supports the success of the organization and focuses on the objective of being the low-cost producer, which is achieved by high capacity,
which is achieved by high reliability. Rather than having attitudes related to “If only they would just do their jobs,” the team members are focused on supporting each other and providing constructive feedback for improvement. If they make an error, then we work with them to develop appropriate communications and solutions for these opportunities (often disguised as problems).
Reliability Improvement Teams

We've already seen reliability best practices and understand that the best plants have integrated the various functions and departments to achieve superior performance. Let's review the details of the reliability improvement teams and practices that lead to this. Note that not all of Beta's plants had all the practices, so the following discussion is a composite: "the best of the best."

First, we might view the concept of teamwork and organizational structure as the three circles in Figure 15-1. The organization depicted has three functions (it could have more): maintenance, production, and engineering. Each function has a zone in which it operates autonomously, without help from others. Each function also has zones wherein there are overlapping responsibilities and communication is necessary, the teamwork zone. Finally, there is a common goal that must be met by all the functions: making money, even if an individual function has to defer for the greater good of that organizational goal. Further, there are times when the production function plays a stronger role or has stronger influence, as in Figure 15-1b. It still needs support and communication from engineering and maintenance. There are also times when maintenance may play a larger role, as in a shutdown or turnaround, and there are times when engineering may play a larger role, as in a major plant expansion or other capital project. The size of the circles and the overlap between them will pulsate, growing and shrinking depending on the circumstances. The organization is dynamic, but is always focused on the goal of making the most money over the long term.
Figure 15-1. Managing change.
Some Rules

It is important in any team-based environment that the team members assigned to a given task understand the following:

1. Their individual and team roles and responsibilities
2. Who the team leader is, to facilitate the focus and effort
3. The timetable for their efforts
4. The reporting mechanism for their progress
5. The measures for determining their success (more on this in the next chapter) 6. Their common goals and common rewards These “rules” will help create a teamwork culture for ensuring world-class performance. It is emphasized that teams facilitate teamwork, but teamwork is more important than teams per se. We could go into substantial detail for particular kinds of teams—rapid response teams, repair teams, area teams, the teams previously described, etc. Most organizational structures will work, and most teams will work, so long as teamwork permeates the mentality of the organization; so long as adequate, fact-based information is available to make informed decisions; and so long as clear goals are understood throughout the organization. The coach(es) as leader(s) must ensure that this culture is established, set those goals, develop a team spirit, and foster teamwork throughout.
Organizational Structure

As we said, organizational behavior and culture are generally more important than organizational structure. Before the purists debate the issue, however, there are certainly times when structure matters, some of which follow.
Centralized Manufacturing Support Specialists

At the corporate level, Beta has been served well by a group of manufacturing specialists who provide support in the key manufacturing methods, e.g., statistical process control; advanced process control; all the predictive and proactive maintenance methods; TPM, RCM, and RCFA methods; as well as softer issues such as training methodologies, team building, project management, concurrent design, benchmarking, etc. This department of specialists has one or more experts in each of the methods or technologies, who are recognized as leaders in their specialties and who facilitate the implementation of the technologies throughout Beta. Their specific functions include, for example, surveying a given plant to ensure that the technology applies, assessing the prospective benefit, and working with the plant to set up a program—equipment, database, commissioning, trending, integration with other technologies and with process monitoring, continuing quality control and upgrade, problem solving and diagnostics, and, last but not least, training in the technology to ensure excellence in its use. Further, they seek and demonstrate new or improved technologies, identify common problems and opportunities, communicate successes and improved methods, facilitate the virtual centers of excellence discussed in the following section, and overall work as facilitators for improved communication and implementation of a common strategy. These individuals would not necessarily do the technology, except perhaps in a startup mode, but rather would facilitate getting it done, and may from time to time identify and use contractors for certain support functions. In other words, these specialists facilitate the review and implementation of appropriate methods and technologies that are considered to provide benefit to a given manufacturing plant.

Finally, this corporate-level function also conducts routine assessments of plants against a best-practice standard. The plant manager and staff are also participants in the assessment process and commit to an improvement plan after each assessment. Reports and improvement plans, signed off by the plant manager, go to the vice-president of manufacturing, and both follow up to ensure implementation of the improvements.
Virtual Centers of Excellence

At one of Beta's divisions, so-called virtual centers of excellence have been created to facilitate process improvement. This method involves defining four or five key areas that must be addressed corporatewide for that division to achieve manufacturing excellence. (Note that this process can also be used for marketing and sales or other functional requirements.) In manufacturing, these might be process control, capital projects, maintenance practices, and stores practices. One representative from each plant is selected to serve on the center of excellence for that issue. Meetings are conducted on a periodic, as-needed basis to define what the major issues are, what best practices are, what technologies will help ensure best practice, what the obstacles are, etc. Each individual is expected to contribute to answering these questions and formulating best practices and, at the plant, to ensuring that best practice is implemented. Best practice and a summary of the division's position relative to best practice are provided to the plant manager and the vice-president of manufacturing. The plant manager has the responsibility to follow up on the implementation of best practice through the designated representative and, working with his or her staff, to integrate these issues with those from the periodic assessment and improvement plan for his or her plant. The vice-president oversees the entire improvement process.
Reliability Managers/Engineers

At one of its divisions, Beta is currently training a minimum of one reliability engineer for each of its plants to meet the job description summarized here and detailed in Appendix B. This individual is being trained in manufacturing reliability best practice, including use of uptime/OEE measurement and loss accounting; RCFA, RCM, and TPM methods; maintenance planning and scheduling; predictive technologies; stores management; proactive methods (described in Chapter 9); and operational best practice. This individual's job is not necessarily to do the work identified, but rather to understand best-practice applications and act as a facilitator and communicator for their implementation at the plant level, and, in particular, between production, design, and maintenance for eliminating the losses that result from poor practices. As time passes, it is expected that this position may become unnecessary, or certainly will change in form, as best practices become inculcated in the organization.
Predictive Maintenance

Finally, at Beta's best plants, all the predictive maintenance technologies were under the leadership of one manager, often the reliability engineer. In large plants this was typically a department of some five to six people: two to three vibration analysts, a tribologist (lube specialist), a thermographer, and an ultrasonics specialist. In some cases one individual had multiple skills, e.g., thermographer/ultrasonics. In smaller plants this might be one individual who managed contractors. His or her job was to ensure that the appropriate technologies were being applied to determine equipment condition and to use that knowledge to ensure maximum reliability. Specific tasks included equipment commissioning, routine monitoring and trending, prevention of catastrophic failures, and diagnostics for root cause failure analysis. This information was routinely shared with the maintenance planning and scheduling people for better planning. Further, the group routinely reviewed equipment histories and worked with the maintenance planner to better allocate resources for improved reliability and higher uptime. In plants that had the technologies scattered throughout the plant, that is, the vibration analyst in mechanical, the thermographer in electrical, the oil analysis in the laboratory, etc., the resulting data were not used nearly as effectively as when they were in one group whose job it was to know equipment condition and to use that knowledge to improve plant reliability.
Centralized vs. Decentralized Organizations

Focused factory concepts, as well as agile manufacturing and its typically attendant decentralization, have demonstrated their value in manufacturing organizations, including Beta's batch and discrete manufacturing plants, particularly when the marketing and product mix have been well integrated with the focused factory and when maintenance has been developed as a hybrid, with some functions centralized and some decentralized (see Chapters 3 and 9). However, the concepts do not appear to be as effective at Beta's large, continuous process plants. For example, it's difficult to have focused factories at a process chemical plant, a refinery, an electric power plant, or a paper plant. What you can have at these facilities is teamwork, as previously described. Taking a given task or activity to the lowest competent level in the organization, the level at which the task is performed, is generally a good idea. Whether to centralize or decentralize depends on the circumstances, the competency at the level where decentralization is desired, and a host of other issues. Centralizing the manufacturing support specialist function at Beta is generally a good idea. Centralizing the virtual centers of excellence is generally a good idea. Decentralizing, and aligning along business product lines and markets, is generally a good idea and provides better focus. And so on. It depends on your objectives, business and personnel competency, business processes, etc. What Beta has found, in general, is that some hybrid form, which fluctuates with time and circumstances, appears to work best.
Key in any organization, however, is the leadership's ability to create a common sense of purpose and goals and, contrary to some of the harsher philosophies of today, to "pet" employees a bit, that is, to expect and instill pride and craftsmanship, to encourage people to enjoy their work, and to develop a sense of trust between management and the shop floor that everyone has the same goal—to prosper over the long term—at the personal level, the functional level, the plant level, and the corporate level. If any one level fails, the risk that all fail is greater.
Mission Statements

Properly done, mission and vision statements create a common sense of purpose and a common set of values within an organization. They also force senior management to articulate its "vision" of the company in 5 to 10 years. However, the reality may be more akin to the Dilbert cartoon character's concept of a mission statement—management obfuscation that demonstrates its inability to think clearly. While it's not that bad, and it is important to articulate company goals and issues, only roughly 1% of Beta employees in each of its divisions know their division's mission statement. When asked if Beta has a mission statement, almost all reply yes. When asked what it is, about 99% cannot state it. They know it's on the wall, or they know it has something to do with . . . (fill in the blank), but they're just not sure what it is. How do we create a common sense of purpose in an organization where only 1% of the people know what that purpose is?

The mission of the United States Army Infantry is "To close with and destroy the enemy." There's nothing ambiguous in this statement. The sense of purpose is understood. Notice that it doesn't talk about weapons, tactics, strategy, teamwork, etc., because that's covered elsewhere; nor, as businesses might, does it mention the environment, shareholders, safety, profits, quality, community, etc. Of course you want to make money. Of course you want to be a good corporate citizen. Of course you want to do any number of things associated with running a world-class business. Articulate those important issues, and state them elsewhere, perhaps in your vision statement. Those may also change from year to year, in spite of your best vision. The mission should not. Certainly it shouldn't change very often (as measured in decades or generations). It should also be short, to the point, and create that sense of purpose. Everyone in the organization should know the mission statement.

At one of Beta's smaller divisions, the new president had written the division's new mission statement. Each new president apparently has to make his or her mark. He was quite proud of it. It had all the right words, had been compared with several other mission statements, and was, at least in his view, better than all the rest. Good stuff!

This president (a people person) was walking through the manufacturing area and stopped to talk with one of the shop floor employees about a new process and its product. The process was working well, lower costs and higher quality were being achieved, etc. Toward the end of the discussion, which of course was very pleasing, the president asked the employee what he thought of the new mission statement. The employee replied, with surprising enthusiasm, that it was good. Next, the president asked pointedly, "What is it?" To which the employee replied, "Uh, to uh, make the uh, best, uh, product, uh . . ." It was clear that the employee, a long-standing, loyal, highly competent individual, didn't know. Rather than embarrass the employee further, the president brushed it aside and said, "Don't worry about it, it's not that important. Keep up the good work here."

Walking away, the president reflected that the mission statement wasn't important, at least not to some of the best employees in the company. After some thought, he concluded that the mission statement had to be changed to a short, concise statement that captured the essence of the company's mission. After much discussion with the board, many of whom liked the existing statement, it was finally resolved that the mission would be, "To be the preferred supplier." After that, most people in the company knew it, because it was easy to remember, and they knew how they could help achieve it through what they did in their daily jobs. They lived the company's mission statement every day.

Finally, it also seems that this same president's father was a lifelong member of the International Brotherhood of Electrical Workers and that his mother was a clerical worker all her life. His philosophy in management was to treat everyone, especially those on the shop floor, with the same respect, care, and dignity with which he would treat his mom and dad. That said, however, he also expected them to work as hard as his mom and dad did all their lives for the success of the company. It made a good philosophy for motivating and instilling that sense of pride, enjoyment, and trust throughout the organization.
Communication of Performance Expectations

Perhaps it goes without saying, but we're going to say it anyway: performance expectations must be communicated. Some additional methods for doing this, beyond those stated or implied previously, include:
Prominent graphical display of current plant and area key performance indicators, including trends and targets.
Daily and shiftly plant performance reviews, with immediate corrective action as needed. Note that this should not encourage reactive behavior but rather facilitate teamwork.
Job profiles defined and reviewed regularly. These profiles should have adequate flexibility to allow for teamwork and application of cross-training skills.
Routine personal development and review process. This should be a positive experience.
Routine communications from the leadership team regarding company performance, expectations, and major developments.
No doubt there are others. These are offered to make sure that they are at least covered and to get the thinking started.
Implementing Reliability and Manufacturing Excellence in a Union or Nonunion Plant

Beta has a slightly higher proportion of union plants than nonunion, but all things considered, there does not appear to be a substantial difference between the two in getting best practices in place. The leadership and the expectation of a focus on best practices are more crucial than whether a plant is one or the other. The history and culture of the plant are also important. There are both union and nonunion plants that have a long-standing history and culture of "that's the way it's always been" and a reluctance to change.

That said, however, the improvement process may require slightly more effort in a union environment, primarily because existing work rules and contract issues must be addressed. In this light it is critical that the union leadership be brought in at the beginning of the improvement process. The business issues and objectives must be explained, and the union's advice and counsel must be sought. You won't agree on everything, but you will earn their respect in the process. The more clearly common objectives can be articulated, the more likely the buy-in from the union. It is also important, if possible, that you pledge to the union that the improvement process will not be an exercise in head count reduction. It should be made clear, however, that you expect to need fewer people per unit of production as a result of the process and that you will manage that need through natural attrition, reductions in contractors, reallocation of resources to a modified plant functionality, improved business position with more volume, etc. Take the high road and work with the unions to ensure their security through business success.

Finally, additional publications on organizational behavior and team building may be of benefit. Senge9 takes a systems approach to organizational behavior. Frangos10 goes through a team-building exercise and case history at a business unit of a Fortune 500 company. Galbraith and Lawler11 review anticipated changes in concepts related to learning organizations, value-adding efforts, and team building. Donnithorne2 stresses character, honor, teamwork, and commitment to group and institutional goals. All provide good food for thought in developing leadership and teamwork within any organization.
Compensation

Compensation is a difficult issue in most companies. Most employees feel overworked and underpaid. Most senior managers believe the opposite: that employees are overpaid and underworked. The truth is elusive, and anecdotal evidence abounds to support either position. A solid compensation policy can be captured in the following simple statement: Compensation must be internally equitable and externally competitive.

Being internally equitable does not mean being internally equal. Differences can exist to provide for differences in performance, experience, skill levels, and so on. What it does mean is that there must be general agreement that compensation is equitable, or fair, in light of the factors that affect pay. To do otherwise invites grousing, low morale, and even vindictiveness, and will ultimately result in some employees either leaving or underperforming, neither of which is likely to be good for the company. Compensation also has to be externally competitive. If we underpay our employees relative to market rates, then they're more likely to quit, leaving us without a valuable resource. If we overpay, then the company will be less competitive, ultimately putting everyone at risk—the employees as well as the shareholders.

Perhaps a better suggestion is that compensation should be tied to productivity. But even this has its drawbacks. Productivity of the individual? Of the department? Of the company? The short answer is yes, all of these. However, greater weight must be given to the productivity and competitive position of the company, integrating measures for overall success, while recognizing the contributions of individuals and departments.

Pay, as Herzberg pointed out a long time ago, is generally a so-called "hygiene" factor in employment. That is, you're not likely to produce any more for higher pay, but you will leave because of poor or internally inequitable pay. (The exception, of course, is people who are paid "by the piece.") Just to make the point, suppose tomorrow everyone in the company received a 20% pay raise. Would output go up 20%? Not likely, and indeed the euphoria of the pay raise would likely be gone in 30 days or less. On the other hand, suppose everyone in the company was given a 20% pay cut. Would output go down 20%? Much more likely it would, while employees are demoralized and looking for a new employer.

Finally, don't "reward," tolerate, or subsidize bad behavior—you'll get more of it! Examples include overtime pay for routinely
solving problems that should have been eliminated, or a bonus for resolving a major crisis. Make no mistake, you should take care of your people and show your appreciation when they do extraordinary things, so you may occasionally take exception to this rule. But reward the things that you want more of, and you'll get more of them. Other examples of rewards that reinforce good behavior include genuine verbal praise, notes of appreciation from the boss or higher, an article in the company newsletter, compliments from others passed along, a plaque, or some other memorable honor. Be careful, though, that these rewards are given with a genuine sense of appreciation, dignity, and respect, and are not viewed as patronizing. Otherwise, they backfire and demotivate! And, last but not least, team-based and team-rewarded efforts generally provide the greatest return to the company. As noted, the key success factors in the minds of employees are, in order of highest response: (1) being trusted, (2) doing work you like, (3) having power to make decisions affecting your work, (4) finding a company you plan to stay with, and, finally, (5) getting raises.
Managing Change

Improving organizational performance will require change. The improvement methods and suggestions outlined thus far in this book have not really delved into the process of managing these major changes. The following is a model offered for successfully managing change and implementing the improvement processes discussed.

In The Prince, Machiavelli observed that there is nothing more difficult to take in hand, more perilous to conduct, or more uncertain of success than to take the lead in the introduction of a new order of things. He went on to observe that the innovator has as enemies those who have done well under the old order, and only lukewarm defenders in those who will do well under the new order. Even those who will do well in the new order recognize the risk involved in change and tend to wait to see how things play out before they commit. Put another way, significant change is risky and requires a compelling reason to support the risk being taken. Hence, major change can be extremely difficult. Suppose, for example, you were asked to secure a lasting peace in the Middle East. Managing that change would be an extraordinary accomplishment, since it seems no one has been able to do it thus far. Managing change can also be almost trivial. Suppose, on the other hand, you were asked to get your granddaughter, age 3, off the bottle and on to drinking from a cup. Well, maybe it wouldn't be trivial, but if you could suffer through a couple of days of whining, you could get it done in short order. The kind of change management covered in this book is more in the middle, that is, difficult but not insurmountable.

How do we make significant change to an organization, particularly in a culture that has been steeped in its own history for decades? With some difficulty. Outlined in the following text is a model for managing such a change. It's not a perfect model, and it probably won't fit all circumstances, but it does address most of the major issues in change management. I hope it provides ideas for helping you manage the changes needed in your organization. That said, feel free to modify it to suit your particular situation. My suggestions for the key steps in managing change are:

1. Articulate a compelling reason for change.
2. Apply leadership and management principles.
3. Communicate clearly your strategy for achieving the change and the goals desired.
4. Facilitate employee support in developing and implementing the change process.
5. Measure the results—reinforce good behavior; punish bad behavior.
6. Stabilize the organization in the new order.
7. Go to step one—change is a continuing process.

These steps are shown in Figure 15-2.
Figure 15-2. Managing change.

Articulate a Compelling Reason for Change

This is perhaps the most important step in the change process. If there is no compelling reason for change, then change is not very likely. The lack of a compelling reason will allow those who prospered in the old order to hold sway over those who are only lukewarm in their support. Change is risky, and getting more than lukewarm support requires a compelling reason.

What are some of those compelling reasons? One of the more obvious ones is that if we don't change, we'll be shut down. That's pretty compelling. Another might be a recent, serious accident, or a death that was the result of poor practices. Another might be that the CEO has demanded change, or heads will roll. Though this one will have some immediate impact, it's less likely to be sustainable, particularly at the shop floor level. Shop floor people have been around for 20 to 30 years and have seen it all. The new CEO is just another pretty face in a long line of pretty faces. While they may be irritated in the short term, many will simply outlast the new CEO.

It may be better if we have very positive reasons. For example, becoming the best in the world at what we do; earning the right to new capital (and a brighter future); or having a greater purpose, e.g., we save lives at our pharmaceutical company! It may also be that you start with a negative, compelling reason—shut down—and, as you improve, you shift to a positive, compelling reason: We're going to become the best in the world. Recognizing the time to make this shift is important in changing the culture of the organization.

The less compelling the reason for change, the longer the change will take. The bigger the organization, the longer the change will
take. That may be just fine. We live in a world where most seek instant gratification—fast food at the restaurant, quarterly profits, and so on. Lasting change takes time, and creating an organization that expects change and continuous improvement takes time. As Jim Collins has noted in Good to Great,6 transformations take time, and, in the truly great companies, can take as much as 10 to 15 years. So, if our cash flow will support it, let’s take the time to make the changes properly, and make them lasting.
Apply Leadership and Management Principles

Let's consider how Warren Bennis's model for leadership and management may be applied in managing change.4 As you may recall, his model suggests that leaders and managers behave differently:

Leaders:                  Managers:
Challenge status quo      Accept the status quo
Trust                     Control
Innovate and develop      Administer and maintain
Ask what and why          Ask when and how
Do the right things       Do things right
Watch the horizon         Watch the bottom line
In Bennis's model, leaders foster change and create an environment where change is the norm, whereas managers stabilize the changes, ensuring that they are well implemented and sustained. Both types of behavior are necessary to achieve excellence in managing change. However, people who are more prone to Bennis's leadership characteristics are better at fostering change, whereas people prone to manager characteristics are better at stabilizing and sustaining the changes once they're made. Does this characterization suggest that individuals will behave exclusively in one mode or the other? Hardly. My experience has been quite the contrary. Our behavior is context dependent, and good "managers" can move from one mode to another. What we must do, though, is recognize that different approaches are needed, depending on where you are in the change process.

Change management demands tolerance for risk and uncertainty, and even chaos, and it requires leaders to create the environment for change. It demands strength and determination to sway those who have prospered under the old order, and coaching and encouragement of those who are lukewarm in their support of the new order. Once the changes have been reasonably established, they need to be standardized and sustained. Once that's done, someone needs to be looking for the next change, the next improvement, and the cycle begins again. As you make these kinds of decisions, try to make sure that the right people are involved in the appropriate leadership or management roles and/or that you or others assume the appropriate role for the stage of the change process.
Communicate Your Strategy and Goals

All companies have areas where they are not satisfied with their current state. That creates a compelling reason for change to some future state. Once you've identified the reason for change, a strategy and a set of goals must be developed and communicated so that people have a clear view of the changes needed, the compelling reasons for the change, the strategy for making the changes, and the ultimate goal or future state. If appropriate, you may want to make it clear that not achieving the desired goals is not an option; exactly how and when they are achieved is an option, and something you'll seek everyone's support in developing. Of course, we'll need to be able to measure against those goals to help ensure that we've met them; some suggestions on measurement are provided later in this chapter.

A word of caution may be appropriate. The strategy needs to be reasonably well defined so that it can be understood by the employees and can ensure alignment of the organization to the strategy and goals. However, it must not be so detailed that it leaves little room for change as we proceed with its implementation, and each step of the change process should have a feedback loop that allows for improving the process itself. Von Clausewitz observed that developing a strategy without understanding the tactics and capabilities of the troops was folly. So it is with our strategy and goals. We must fully understand the tactics supporting our strategy and the capability of our people, and we must be fluid enough to allow changes to our plan as new information arises. Our people will implement the strategy. They must have a strong hand in developing the day-to-day tactics for its implementation, ensuring their ownership. And, as noted, there must be a feedback loop that allows us to accommodate changes to the strategy based on what we learn as we implement those changes.

In setting goals, one of the more common techniques is to use benchmark data to determine gaps in current performance and then seek to close the gap by achieving "benchmark" performance. Be careful. Using benchmarks is an excellent way to determine the gaps in your performance as it relates to certain measurements. What it often fails to do is provide an understanding of how that performance was achieved, or a process for closing the gap. It may be that the gap you're worried about is not one that's appropriate to close at this point in your business. An example might make this clearer. One of the more common "big gaps" in benchmark performance in manufacturing plants is maintenance overtime. A common maintenance overtime rate is 10% to 20%, and sometimes more. So-called benchmark performance is in the range of 3% to 5%. So, it's easy enough to achieve benchmark performance on overtime: we simply order people to minimize or eliminate overtime. However, the consequence may be that work doesn't get done and product doesn't get delivered. It's better to ask how we stop the defects that are creating the failures that are creating the need for the maintenance overtime. We need to be systems thinkers and understand the consequences of our decisions as we implement the change process. One of the best sources of information on the consequences of our decisions is the shop floor. Shop floor personnel deal with these consequences every day.
Facilitate Employee Implementation of the Change Process

We've heard many times that people don't want to change. That may or may not be true. What I've found is that people will change if given a compelling reason to do so, and if they participate in creating the change process. Or, as Margaret Wheatley said, "People own what they create." Once a compelling reason has been established, facilitating participation in defining the specifics of the changes provides people with a sense of ownership and control in the change process. Having a bit of ownership and control is a lot less stressful, too!
Sustaining that sense of ownership takes time. Winston Ledet observed that to ensure lasting cultural change, every employee must participate in at least one improvement team at least once per year for at least 3 years. Lasting change, and creating an environment where positive change is the norm, takes time. Hence, you need a process or processes by which you facilitate change and create ownership for the change process at the shop floor level. This process must include sufficient funding; time; training; and, in some cases, a facilitating organization, such as an internal or external consultant, or both. You may want to consider using an external organization initially. As Deming said, "Profound knowledge comes from the outside and by invitation. A system cannot know itself."

There are several tools and methods for facilitating improvement and ownership and for implementing your strategy. These include lean manufacturing, six sigma, high-performance work teams, total productive maintenance, reliability-centered maintenance, and root cause analysis, among others. All are good. All work. And, properly understood, they can work well together under one strategy. Make sure you have a good understanding of how these methods and tools work and how they support your strategy before introducing your strategy and goals. Some will argue that these methods are strategies themselves, and they can be, depending on the level in the organization at which they are applied. It's important to first articulate your business strategy and then incorporate the methods that best support your strategy. Otherwise, you may let the method drive the strategy instead of the strategy driving the method.

Be careful about inducing the "program of the month" aura in your improvement efforts. Lots of companies "launch" an improvement effort with banners and slogans and intensive training, and when it doesn't deliver the changes expected in a year or so, the enthusiasm dies off. The shop floor acts much like a rock at the shore—they see the "next wave" coming and just sit tight. This too will pass. The change process should have a constancy of purpose associated with it, and introducing new methods needs to be done with discussion of how each fits into the overall strategy.

Celebrate the early successes, and learn from the early failures, since, as we know, the best learning often comes from failure; continue to foster and reward a continuous improvement environment. But recognize that the concept of "low-hanging fruit" is more often
than not largely a myth, since the improvement targets are constantly moving. The low-hanging fruit you might pick today may end up being replaced by other fruit tomorrow, so you must have a process for sustaining the improvements you achieve, and then looking for the next opportunity. As in athletics, you have to work really hard just to keep what you have. You have to work even harder to improve.
Measure the Results: Reward Good Behavior; "Punish" Bad Behavior

As Joseph Juran said, "If you don't measure it, you can't manage it." On the other hand, don't measure it if you don't intend to make it better; the measure loses credibility. Once the strategy and its commensurate goals are established, we must set up a set of measures that ensures we're moving toward the appropriate goals. Start with high-level business measures that reflect accomplishment of the goals related to the strategy. Some examples include unit cost of production, return on net assets, gross profit per product line, and so on. These will be lagging indicators, a bit like looking in the rearview mirror to see where you've been. These lagging indicators should be "cascaded" down through middle management to the shop floor, where you'll need some leading indicators—the things that you must do to improve the lagging indicators. Some examples here include process conformance, PM compliance, average component life, and so on. For example, if I want my unit costs to go down to get my gross profit and return on net assets up, I have to spend less. To spend less, I have to eliminate the defects that are resulting in the failures that induce the extra costs. To do that, I have to improve the consistency of my processes—less variability, higher quality, less waste. To do that, I have to measure and improve my process conformance. Leading indicators measure your ability to do the right things day to day. Lagging indicators measure the results of doing the right things. Finally, whether a measure is leading or lagging can depend on where you are in the organization; leading for one might be lagging for another.

Reward good behavior. As we've discussed, whatever you reward, tolerate, or subsidize, you'll get more of. The federal government has demonstrated this principle for decades, and on a personal level, as parents, we've known for years that our children will find the limits of our tolerance and then exercise them. For our purpose, however, I'd rather focus on rewarding good behavior. Here we're not really talking about pay for performance, since the data generally say that pay for performance does not work very well and its effects are not lasting.7 Perhaps reinforce would be a better word than reward, since rewards are usually associated with money. Money is not what is suggested, so long as your people are compensated at or near market rates in an internally equitable manner. What does need to be reinforced are the behaviors that have resulted in improvements in our leading or lagging indicators. This includes doing simple things like thanking people for a job well done; having them make a presentation to a group or to your boss; announcing the results of their efforts in a newsletter; asking their opinion on a problem area; facilitating their participation on improvement teams; and so on. Showing your respect and appreciation for a job well done motivates more effectively than money or banners.

Punish bad behavior. Any time you go through a change process, you can expect some "casualties." Jim Collins6 says we need to get the right people on the bus, and the wrong people off the bus. Getting the wrong people off the bus will result in "casualties." If, in spite of your best efforts to engage people in the change process, you find that some are simply unwilling to make the changes needed, then you must take swift action to make sure people understand how serious you are about the need for change. Tolerating foot-dragging, recalcitrant behavior, and passive-aggressive behavior (agreeing, but not acting) will scuttle your efforts to change the organization. This is particularly true if these behaviors are being exhibited by people in leadership positions. When taking these actions, you must still show dignity and respect to the people involved.
Stabilize the Changes/Organization in the New Order

Once substantial changes have been achieved, they need to be stabilized in the new order of things. Some suggestions for doing this include:

1. Updating, or creating, appropriate procedures.
2. Incorporating certain practices into policy statements, some of which will become part of our ISO 9000 or other certification.
3. Making change a part of the performance appraisal process.
4. Making the change process part of succession planning.
5. Fostering an environment where change and continuous improvement are the norm.

We will very likely need to put our improved practices into updated procedures and then train the appropriate people in those procedures. These procedures must also be reviewed for compliance in order to more fully sustain them. Once established, some of our improved procedures and practices probably need to be incorporated into policies related to our ISO 9000 or other certification. For example, if we find that process conformance measurement or condition monitoring is a key element in our improved performance, then we should make a policy statement to the effect that, "We will ensure process conformance on the following processes . . . , using the following measures . . . , according to Company Procedure No. 111." This statement needs to be specific enough to be compelling and lasting, but not so specific that it leaves you little latitude for routine change and improvement.

Making the change process and related changes a part of management's performance appraisal, as well as any succession planning, will ensure greater emphasis and attention at that level and will more likely sustain the change process.

Management must foster an environment where change and continuous improvement are the norm. This simply takes tenacity and time. As Winston Ledet said, "Cultural change requires that everyone participate in at least one improvement team, at least once per year, for at least 3 years." Another observer has noted that we need to do something 21 times before it becomes a habit and part of our new culture and behavior. Once stabilized and habitual, we need to continue to look for additional things to improve and to refine our practices. Hence, we need a process for continuously seeking ways to improve, accepting that some proposed changes will fail and, in doing so, will also provide a learning experience. As we do so, we will be instilling a culture that expects and welcomes managed change as an ordinary part of the culture. This is perhaps the hardest part of all, that is, creating a culture where change is the norm. In this Utopian world,
the organization has the inherent capacity to change, quickly and smoothly, in response to changing organizational goals, customer requirements, and competitor challenges.

What does it take to be a world-class athlete? A little talent and a lot of very hard work. What does it take to stay in world-class condition? A lot more hard work. Managing change is much the same. It begins with a compelling reason for change that is clearly communicated to employees, along with a strategy and set of goals. This must be followed by a process for facilitating the changes and for creating a sense of ownership in the people making the changes. The results must be measured, and good behavior must be reinforced. Bad behavior must be punished. Once the change has been reasonably accomplished, it must be standardized and inculcated into the organization. Once this is successfully done, the process must be repeated, over and over, until you've created a culture where change and continuous improvement are the norm and are mostly owned by the shop floor. Following this process will improve the odds of being a world-class performer.
Summary

As Margaret Wheatley has so eloquently stated:

Organizations . . . do best when they focus on direction and vision, letting transient forms emerge and disappear . . . organizational change, even in large systems, can be created by a small group of committed individuals or champions . . . information—freely generated and freely exchanged—is our only hope for organization.
Beta has learned that the best leaders provide vision and direction and establish a common strategy with common, superordinate goals for ensuring organizational alignment, thereby ensuring pride, enjoyment, and trust throughout. The execution of the strategy to achieve these goals is accomplished through genuine empowerment and teamwork, facilitated by the principles outlined. Measurements ensure that the goals are constantly reviewed and improvement plans revised as needed. Organizational structure is allowed to change form as needed to meet the changing business environment, but an organizational culture of teamwork focused on a greater sense of purpose is considered more important. Beta must forevermore foster open lines of communication and teamwork to truly benefit from free information exchange and achieve manufacturing excellence.
References

1. Wheatley, M. Leadership and the New Science, Berrett-Koehler, San Francisco, CA, 1992.
2. Donnithorne, L. The West Point Way of Leadership, Doubleday & Company, New York, 1993.
3. Maxwell, J. C. The 21 Irrefutable Laws of Leadership, Thomas Nelson Publishers, Nashville, TN, 1998.
4. Bennis, W. Managing People Is Like Herding Cats, Executive Excellence Publishing, Provo, UT, 1997.
5. Koestenbaum, P. Leadership—The Inner Side of Greatness, 2nd ed., Jossey-Bass, San Francisco, CA, 2002.
6. Collins, J. Good to Great, Harper Business, New York, 2001.
7. Wickens, P. The Ascendant Organisation, MacMillan Business Press, London, 1995.
8. Lencioni, P. The Five Dysfunctions of a Team, Jossey-Bass, San Francisco, CA, 2002.
9. Senge, P. The Fifth Discipline, Currency Doubleday, New York, 1990.
10. Frangos, S., with Bennett, S. Team Zebra, Oliver Wight Publishing, Essex Junction, VT, 1993.
11. Galbraith, J., and Lawler, E. III. Organizing for the Future, Jossey-Bass, San Francisco, CA, 1993.
16

Training
. . . and the absolutely decisive "factor of production" is now neither capital nor land nor labor. It is knowledge.
—Peter Drucker

At Beta International, far more often than not, when the subject of training is brought up, a wave of cynicism sweeps the audience. The vast majority do not feel that their training is adequate, in quantity, in quality, or both. Some point out that "when push comes to shove" in the budgeting process, training is often one of the first things to be cut. Others point to the boring or misguided nature of the training they receive. Only a few have admitted that they don't personally put a lot of effort into the training process, expecting instead to be "spoon fed." Learning from training, as we all know, requires hard work from both the student and the instructor.

Such cynicism and pessimism may in fact be well founded. According to a survey published in 1994 by the National Center for Manufacturing Sciences,1 training does not appear to be a very high priority, at least for US companies. In this survey, only some 10% to 20% of the employees of US companies were reported to routinely receive formal training. This stood in sharp contrast to Japanese companies, where 60% to 90% of employees routinely received formal training. The quantity of training varied with company size, with the larger companies providing more training. Of those
employees of US companies who did receive training, the types of people receiving training were characterized as follows:

Personnel                Percent Receiving Training
Managers/Supervisors     64% to 74%
Customer Service         52%
Salespeople              41%
Production Workers       37%
These are remarkable statistics, in two ways. First, they demonstrate a huge lack of perceived, or perhaps actual, value for training within this group of US companies. Second, they demonstrate a lack of perceived value in the need to train production workers more fully. Note that 64% to 74% of the managers and supervisors received formal training, while only 37% of the production workers did; training for managers was about twice that for production workers. The question to be posed with these data is: Who needs training most? After all, if production workers don't perform well because management hasn't provided adequate training, how can we possibly hope to succeed in our efforts to become world-class companies and ensure our business success? Nothing is wrong with having highly skilled managers and supervisors. In fact, as we'll see later, one company insists that its managers be the principal trainers. However, this needs to be formalized in practice to be effective.

This lack of perceived value in training and learning brings to mind an old cliché—"If you think education is expensive, try ignorance." One hopes that much of this lack of appreciation for training and learning has changed in recent years with the advent of several books about learning.2,3

One model to follow in any situation when trying to improve a given process, in this case training, is to return to first principles. We've returned to first principles many times in this book, and focusing on getting the basics right is almost always the first order of business. Let's use that approach here for training. What is training? Learning? Educating? According to Webster, the following definitions apply:
Educate—to provide schooling; to train or instruct; to develop mentally and morally.
Train—to teach so as to be qualified or proficient; to make prepared for a test of skill; to undergo instruction, discipline, or drill.
Learn—to gain knowledge, or understanding of, or skill by study, instruction, or experience. (In Chinese, to learn means to study and practice constantly.2 This constancy of purpose about learning and improving would serve most of us well.)
For our purpose, we'll narrow these a little further. Educating is more broadly cast as mental and moral development. Training is targeted at developing a specific proficiency or qualification. Learning is gaining a specific skill or proficiency by study, instruction, or experience.

At Beta, there are several things that could be improved in its training processes, much the same as with the US educational system today. Perhaps the first is to create clear expectations in the trainers and in the students. For example, it should be stated that, "I expect that when you leave here at the end of this training week, you will be able to (fill in the skill). There will be a test of your proficiency at the end of the week to determine whether this learning has occurred." This does two things: (1) it sets the expectation, and (2) it makes sure that the students understand that there will be a test of having met the expectation. If students want to be deemed proficient, they must pay attention, participate, and study. If trainers want to continue as instructors, they must provide the proper input to ensure proficiency in those students who are willing to work. Granted, there are always criticisms of the trainer, the students, the time allotted, etc. This is just more opportunity for improvement. Beta's experience has been that when expectations are clearly expressed, they are more often met. Let's consider some strategic and tactical approaches used by Beta for improving the training process.
The Strategic Training Plan

At Beta's Langley plant, a large discrete parts manufacturer, a strategic training plan directed at meeting its corporate objectives was established. The process proceeded as follows:
1. Establish corporate objectives, e.g., RoNA of 20%, average plant uptime of 85%, unit cost of production, etc.
2. Analyze the skills required to achieve these objectives, given your current position compared with benchmarks, and in particular relative to your current losses from ideal.
3. Identify your current skills—types and quantities.
4. Review anticipated attrition and workforce demographics, and their effects.
5. Review planned changes to processes and equipment, and the training needs related thereto.
6. Review "soft" skill needs—supervisory, team building, communication, conflict resolution.
7. Review regulatory and legal training requirements.
8. Perform a gap analysis of the shortfall between skills/quantities, as well as specific regulatory-type requirements, and define training needs to minimize losses from ideal.
9. Develop a strategic training plan, including budgets, timing, numbers, processes, priorities, etc.
10. Establish clear expectations and outcomes for the training effort, and measure them.
11. Recognize that training involves exposing people to principles, techniques, methods, and so on, and that real learning doesn't occur until they actually practice the training. Make sure they practice what they've been trained to do.
12. Repeat steps 1–11 annually.

This process provides a framework for establishing strategic training goals and for ensuring training that supports manufacturing excellence. Beta is currently applying this process and is satisfied with the improved skill level of its employees.
Boss as Trainer

As a practical matter, Andrew Grove4 makes a very strong case for the boss as trainer, as opposed to hiring training specialists. His case is strongly supported by Intel's success, where 2% to 4% of every employee's time is spent in training every year, and where training is required to "maintain a reliable and consistent presence," not to be something used only for solving immediate problems. As a result, training is viewed as a process, not an event. In his view, a manager's success is measured by the output of the organization he or she manages. Thus, a manager's productivity depends upon increasing the output of the team under his or her direction. The two ways to accomplish this are: (1) increase the motivation of the team, and (2) increase the capability of the team. He makes the case that a manager's job has always been to ensure strong motivation of the team, so training should be no different, especially in light of the leverage of training. The example he provides to illustrate this leverage of using the boss as trainer is quite telling:

Say that you have 10 students in your class. Next year they will work a total of about 20,000 hours for your organization. If your training efforts result in a 1% improvement in your subordinates' performance, your company will gain the equivalent of 200 hours of work as the result of the expenditure of your 12 hours.

The other, unspoken point in this example is that when training is provided, expectations should be articulated about the consequences of the training for improved productivity, output, etc. Training for its own sake, while philosophically satisfying, may not be in the best interest of the business. An example of this is discussed in the following text.

Grove goes on to distinguish the difference in training required for new employees versus veteran employees, offering the example of a department with a 10% turnover rate and a 10% growth rate. This necessitates training 20% of the department each year in the specific skills needed for their jobs. On the other hand, training everyone in the department in a new methodology or skill requires training 100% of the department. When the two are combined, the task becomes even more daunting. But remember the leverage in training. His suggestions for developing the training requirements include defining what the training needs are, assessing the capability to provide the training, and identifying and filling the gaps, both in training required and in instruction available.
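A quick check of the arithmetic behind Grove's leverage example, assuming roughly 2,000 working hours per person per year (a common planning figure that the example itself does not state explicitly):

    10 people x 2,000 hours/year              = 20,000 hours of work next year
    1% improvement x 20,000 hours             =    200 hours of work gained
    200 hours gained / 12 hours of teaching   =  roughly a 17-to-1 return

Even if the improvement were only a fraction of 1%, the return on the manager's teaching time would still be substantial, which is the essence of Grove's argument for the boss as trainer.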
Training vs. Learning

Roger Schank3 makes the point that:

The way managers attempt to help their people acquire knowledge and skills has absolutely nothing to do with the way people actually learn. Trainers rely on lectures and tests, memorization and manuals. They train people just like schools teach students: Both rely on "telling," and no one remembers much that's taught . . . we learn by doing, failing, and practicing.

His view of most educating and training done today appears to be fairly cynical, holding that it does not foster learning. While this has considerable validity, a strong case could be made that studying and memorizing even the most mundane facts in itself develops learning skills and good habits, such as discipline and hard work, which are useful in themselves. Studying and listening to instruction, properly done, also provides a framework from which to practice and gain the experience we need to develop a given skill. The Japanese have demonstrated that the discipline and hard work needed for the rote style of learning that is part of their culture can lead to a very successful economy.5 Whether you adopt the "Webster" version of learning (study, instruction, and experience) or the "Chinese" version (study and practice), the very definition requires study, presumably under some instruction, and requires practice and experience. Given this, it is recommended that training be accomplished in the context of mutually supportive methods:
First, classroom instruction should have specific objectives in mind and should ensure that those objectives are met through some sort of validation process, e.g., a test. This will encourage study, attention, and good instruction, and will create the framework for learning.
Second, workshops or hands-on sessions should accompany most training, wherein the instruction or principles are actually practiced in a forum that is not threatening. Practicing in
this type of environment should at least begin the skill development.
Third, recognize that the skill is not fully developed until it has been practiced in an operating environment for a period of time. Allowing for this transition period, so that complete learning and proficiency can develop, is essential.
Finally, expectations should be clearly stated when training is provided, both for a given course of instruction and for the training program as a whole. To understand these expectations, try answering the questions: “What business benefit will result if we do this training? What are we going to get in return? What do I expect my people to be able to do as a result?”
Training for Pay

Beta's Bosco and Maytown plants at one time adopted a policy of paying for skill development. While paying for skills is generally a part of any market-driven economy, in this particular circumstance it did not lead to the anticipated result. At these plants, it became standard practice that for every demonstrated skill generally consistent with a worker's job, an increase in pay was awarded. However, as with many things, the policy became abused: at times the sole purpose of some employees was to demonstrate the skill, and receive the pay increase, without ever applying the skill to their jobs. Several employees were receiving pay for skills so old, and so long out of practice, that the skill existed on paper only, all the while costing the company additional money. This not only created additional costs without additional benefit, but could also create a safety risk, particularly if a person perceived to be qualified for a job had not performed it in an extended period and was then called upon to do it. The increased safety risk results from skills that are assumed to be present but are not. Pay for skill should have a business purpose in mind, a process for verifying the need for the skill, and a means of verifying that the skill itself has been acquired and is being applied.
Training in Appropriate Methodologies

If you are seeking to be a world-class company, many of the technologies and methods described in this book will be necessary. However, a key reason these technologies fail to provide the expected benefit is the failure to recognize the need for extensive training and startup effort for their implementation. As a rule of thumb, you can expect to spend an additional two to five times the initial cost of the instrumentation and software for the startup and training effort needed to fully implement these technologies. Plan on it and you'll be happy with the result. If you don't plan on it, then don't waste your money.

Finally, all training programs must recognize the need for what might be termed "soft" skills—teamwork, communication, conflict resolution, supervisory methods, leadership, regulatory and legal issues, etc. Make sure these are incorporated into your training program. However, as one Beta division found, these soft skills aren't very productive in and of themselves, and must be combined with determining the "things you're going to do" to improve performance, and how that performance will be measured.
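As a rough budgeting sketch of the two-to-five-times rule of thumb above (the purchase price below is purely hypothetical):

    # Rule-of-thumb budget for implementing a new reliability technology
    purchase_cost = 150_000                    # hypothetical instrumentation and software cost
    low_factor, high_factor = 2, 5             # additional startup/training multiplier, per the rule of thumb

    startup_low = purchase_cost * low_factor   # $300,000
    startup_high = purchase_cost * high_factor # $750,000
    print(f"Budget ${startup_low:,} to ${startup_high:,} for startup and training,")
    print(f"for a total program cost of ${purchase_cost + startup_low:,} to ${purchase_cost + startup_high:,}")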
How Much Training Is Enough?

As we all know, training and learning are a continuing effort. You're never done, whether you're advancing existing skills, training on new technology or equipment, refreshing existing skills that have become rusty, or training for a new role or position. However, deciding how much training is enough is problematic. If you're currently well below world-class performance, or even below average, it's likely that you're going to require much more training per year than those who are above average or world class. As an engineer might say, "You're likely behind the power curve and require an extraordinary amount of training to get over the skills requirement hump." At one Beta plant, the maintenance manager trained everyone in maintenance basics first, and then in best practices, for an average of 160 hours per year for 2 years to "catch up" in their skill levels. Over the following 2 to 3 years, it is expected that this will decline to 40 to 80 hours per year (a rough sketch of this arithmetic follows the guidelines below). Following are some guidelines for deciding how much training is enough. Train sufficiently:
1. To eliminate or minimize the gaps in the losses from ideal.
2. To ensure that processes are in statistical control;6 this is especially true of operators.
3. To inculcate best practices.
4. To eliminate worst practices.
5. Or train about 40 to 80 hours per year4—but only after you're at a high level of performance.
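To get a feel for what the catch-up effort described above implies, here is a minimal sketch; the hours per year come from the Beta example, while the crew size and loaded labor rate are hypothetical.

    # Catch-up vs. steady-state training load for a maintenance crew
    crew_size = 30                   # hypothetical crew
    catchup_hours_per_person = 160   # per year, for the first 2 years (Beta example)
    catchup_years = 2
    steady_hours_per_person = 60     # midpoint of the 40-80 hours per year guideline
    loaded_rate = 50                 # hypothetical loaded labor cost, $/hr

    catchup_hours = crew_size * catchup_hours_per_person * catchup_years   # 9,600 hours
    steady_hours_per_year = crew_size * steady_hours_per_person            # 1,800 hours per year
    print(f"Catch-up: {catchup_hours:,} hours (about ${catchup_hours * loaded_rate:,} in labor)")
    print(f"Steady state: {steady_hours_per_year:,} hours/yr (about ${steady_hours_per_year * loaded_rate:,}/yr)")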
Multiskilling and Cross-functional Training

Much has been said about multiskilling and cross-functional training. Some of Beta's plants have reported very good results with this concept. Others have reported less than satisfactory results. Some even sneer when either word is mentioned. Based on anecdotal evidence, the plants that have had the greatest success appear to be those that did not try to take multiskilling too far. That is, they didn't expect a senior mechanic to become a journeyman electrician, or vice versa. Rather, for example, they expected the mechanic to be able to lock out and tag out a motor, disconnect it, replace the bearings, check the stator and rotor for any indication of problems, etc., and put the motor back on line in 90% of the cases without having to call in an electrician. Likewise, they expected the electrician to be able to change bearings, too, normally a mechanic's job. Multiskilling had to do with making sure all the skilled trades could perform basic tasks of the other trades, while retaining the depth in their own trade needed to ensure precision and craftsmanship in the work being done. These basic skills also varied from site to site, depending on any number of factors. These plants also worked with the unions to make sure that common objectives for productivity improvements were established, that safety considerations were observed, and that all were appreciative of the success of the business.
Intellectual Capital

Much has been discussed in recent years concerning the concept of intellectual capital, a phrase coined by Thomas Stewart in his book,
Intellectual Capital: The New Wealth of Organizations.7 It would be remiss not to at least introduce this concept at this point. Imagine being a board member of a large company when your CEO announces at a board meeting that a major plant is being disposed of, resulting in a huge charge to the company. How would you feel about the CEO’s decision, disposing of this hard-earned capital? Now imagine how you would feel if these were “human assets” being disposed of through layoffs? Should there be a difference in your feelings? Both are valuable company assets, and yet you may be accepting of, or even encouraged by, the “disposal” of people, because of the high costs eliminated with people. But shouldn’t we also be concerned about the value of our people, our human capital, at least as much as their costs when making these decisions? And how do we assess that value, since it isn’t currently part of our accounting system? The answer to these questions suggests a fundamental change in the way we think about our human assets and their “disposal.” In the long term this change could require major revisions to generally accepted accounting principles. In the short term this thinking must be used to ensure a balanced analysis, particularly in an economic downturn. Most companies like to tout the value of their employees—“Our people are our most important asset,” is a popular refrain. But, we seldom hesitate to dispose of those assets in a downturn. Nor do we typically think of the acquisition (or disposal) of our human assets with the same strategic thinking as we do our fixed capital assets. With a downturn looming, the good business manager ought to be thinking not just about the cost of people, but also about the value of people, and how to keep them—they’re the company’s lifeblood. Think about what it costs to hire, train, and make a single employee fully productive (estimates range from $30K to $60K or more). Think about what we’re throwing away when we “dispose” of that asset, and what it might cost to replace it. Think about the business risk to our future prosperity. When we acquire fixed assets, we generally follow a specific process. Similarly, we follow a specific, and parallel, process for acquiring human assets. However, one of the key differences in the accounting treatment of these assets is that the costs for adding fixed assets are capitalized, and then carried on the balance sheet as assets that depreciate. On the other hand, the costs for the acquisition of human assets are expensed, but are not carried as assets, even
though they appreciate over time, especially in so-called learning organizations. The accounting treatment of these separate, but similar, efforts has a very different effect on the balance sheet, though both represent the investment of capital. More importantly, this accounting has a substantial impact on our thinking—fixed assets are viewed as just that, assets. People are viewed as expenses, which can be easily acquired and disposed of. This thinking is fundamentally flawed. The value of our employees’ knowledge, ability, and general intellectual capability, must be recognized, quite literally. Stewart7 and others suggest that intellectual capital can have at least as much value as fixed asset capital, and he offers several models for understanding and valuing intellectual capital, such as Tobin’s q, market capitalization to book value ratio, calculated intangible value, etc. Recent trends in information and communications technology also highlight a common situation wherein companies with a high degree of intellectual capital often have greater value than the traditional capital-intensive companies. Compare Microsoft and DuPont, for example. Stewart’s suggestions tend to be a bit complicated, so just to get the thinking started, let’s consider a simple model for valuing people, one with which we’re all familiar—the common capital project model. When acquiring a fixed asset, we typically go through a process that identifies a need, selects a supplier, installs the equipment, tolerates early low-efficiency production, and finally achieves full capacity of the asset. A parallel can be drawn for people, and it can be easily demonstrated, for example, that some $50K is invested in acquiring a typical middle manager. It can also be demonstrated that his or her value increases with time, or appreciates, through training, learning, and general skill development, and that after 10 years, we’re likely to have another $50K in additional value added to the employee, for a total investment of $100K. Stewart’s models may be better, but the point is that if we were carrying this asset on our books, or at least recognized this value in our decision making, we’d be much more circumspect in disposing of the asset, particularly if we expected to grow our business (and what business doesn’t expect to grow?). This is especially true if we understand the typical consequence of a layoff or cost-cutting strategy. As we discussed previously, Hamel offered us the concept of “corporate liposuction,” describing it as a condition in which earnings growth is more than five times sales growth, generally achieved through intense cost-cutting. The result
was that it doesn't just suck the fat out of a company, it sucks the muscle and vital organs out as well, with predictable results! In his review of 50 major companies engaged in this approach, he found that 43 suffered a significant downturn in earnings after 3 years of cost-cutting: really lousy odds. He pointed out that growing profits through cost-cutting is much less likely to be sustainable, and must be balanced with sales growth through innovation, new product development, and process improvement. And who does this innovation and improvement? Clearly it's our people, "our most important asset," who must be trained and encouraged to learn. "Our people are our most important asset" is a phrase that must have real meaning and not be an empty statement made at politically opportune times. Unfortunately, nowhere does the balance sheet state the value of a company's human assets or intellectual capital. It should, or at the very least it should estimate and report that asset value. This would help us make better decisions, and help board members more accurately judge performance. It would also help us properly balance cost-cutting and asset disposal.
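To make the simple capital-project style valuation model described above concrete, here is a minimal sketch. All figures are illustrative, and the straight-line appreciation is an assumption; Stewart's models are more sophisticated.

    # A deliberately simple "book value" model for a human asset
    acquisition_cost = 50_000           # hiring, onboarding, and initial training (illustrative)
    appreciation_over_horizon = 50_000  # value added through training and experience (illustrative)
    horizon_years = 10

    def human_asset_value(years_of_service: float) -> float:
        """Straight-line appreciation on top of the acquisition investment."""
        earned = min(years_of_service, horizon_years) / horizon_years
        return acquisition_cost + appreciation_over_horizon * earned

    for years in (0, 5, 10):
        print(years, f"${human_asset_value(years):,.0f}")   # $50,000 -> $75,000 -> $100,000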
Summary

If you're an average manufacturing plant, or even above average, putting together a training plan that supports your strategic objectives, that puts in place the proper resources and time for your employees to learn, and that ensures minimizing the gaps from ideal performance will serve you well. Your training must "maintain a reliable and consistent presence," not just be done as a reaction to a problem. As Senge2 so aptly put it, "Generative learning cannot be sustained in an organization where event thinking predominates." Your managers must view themselves as trainers, leveraging their skills across their organization. Clear expectations must be established for the training to be done, and "craftsmanship" must become a part of the culture of the organization. Learning must become a part of the continuous improvement process at your organization, providing your employees with both the direction, and simultaneously the freedom, to learn, generating the intellectual capital needed for your success.
References

1. National Center for Manufacturing Science. Newsletter, Ann Arbor, MI, July 1994.
2. Senge, P. The Fifth Discipline, Currency Doubleday, New York, 1990.
3. Schank, R. Virtual Learning, McGraw-Hill, New York, 1997.
4. Grove, A. High Output Management, Vintage Books, New York, 1983.
5. Reid, T. "The Japanese Character," SKY Magazine, November 1997.
6. Walton, M. The Deming Management Method, Dodd, Mead and Company, New York, 1986.
7. Stewart, T. A. Intellectual Capital: The New Wealth of Organizations, Bantam Doubleday Dell Publishing Group, New York, 1999.
Performance Measurement
17
If you don't measure it, you don't manage it.
Joseph Juran

If you do measure it, you will manage it, and it will improve. With this in mind, it is critical that the measures selected be the correct ones, and that they be effectively communicated. Thus far we've covered the basics of practices for ensuring manufacturing excellence. This done, however, we must make sure we measure those things that are supportive of our corporate objectives, including those measurements that establish our objectives.

The following is a case history of one of Beta's divisions that achieved exceptional results by implementing a process for measuring and continuously improving its manufacturing methods. It provides guidance for an active, participative management style for most organizations in ensuring continuous improvement.

The new vice-president of manufacturing for Beta's small startup instrumentation division questioned the plant manager about the quality of the plant's new products. The plant manager assured the vice-president that quality was good, even very good. When asked for the measures of quality, however, the plant manager admitted that the focus was on overall finished product quality, reflecting a strong commitment to customer satisfaction; new products tended to get "buried" in the total statistics, because the more mature products usually represented the greatest volume. Soon thereafter
measurements were intensified on all products, and the vice-president focused on two key measures: (1) for internal purposes, first-pass yield, and (2) for external purposes, DOAs, or equipment that was "dead on arrival" or failed to meet any critical function within the first 30 days of shipment. Supporting measures were also implemented, but these two were considered key measures of success.

After collecting the information for several months, it turned out that quality was not good in either of the key measures for new product success. DOAs were >3%; that is, 3 of 100 customers could not fully use the product because of defects that rendered a key function useless. This was particularly painful, because it was Beta's belief that dissatisfied customers will tell 10 to 20 people of their bad experience, and therefore the potential existed for some 30% to 60% of potential customers not to view the company favorably. It also turned out that first-pass yield was only about 66%, meaning that effective costs for manufacturing were roughly 50% (1.0/0.66 ≈ 1.5) above ideal. The good news was that critical data were now being measured and could be managed toward improvement.

Clearly, the next step was to examine the causes for DOAs and low first-pass yields. Upon investigation, causes were determined to be mechanical failures, component failures, assembly failures, software failures, etc. A Pareto analysis was performed, which identified the principal causes of the failures and provided a basis for establishing priorities for solving them. A team was formed to evaluate, on a prioritized basis, the cause of the failures, and then to quickly implement change for eliminating failures, all failures. For example:

1. Each instrument was fully powered for a period of time sufficient to minimize startup and early life failures, and a series of tests were developed and documented in a procedure (check-off) to ensure mechanical and electrical performance. Nonconformance judgments were placed in the hands of each technician, who had the authority to reject any equipment under test. Technicians were also encouraged to work directly with manufacturing and engineering personnel to improve processes to eliminate failures.

2. Review of the Pareto analysis and a root cause failure analysis indicated that vendor quality problems were at the source of many problems. Components incurring a higher rate of failure were reviewed directly with the vendor to develop corrective action. A receipt inspection program was also implemented to ensure that quality products were received.

3. Mechanical defects were particularly troublesome. In reviewing the data, specific patterns emerged as to failure modes. Design modifications were implemented. Procedures were rewritten (engineering and manufacturing) to eliminate failures. Each instrument was put through a mechanical shake test to simulate several severe cycles, validating the mechanical reliability of the equipment.

4. Software was rewritten and test procedures were made more stringent to ensure system functionality.

5. A customer service line was established to provide details of failures and failure modes and effects, and these were entered into a database for use in future designs and design modifications.

The measurement and analysis process continued forward (continuous improvement). Over the course of 2 years, first-pass yield increased to over 85% (a 21% cost reduction), and the DOA rate dropped to 0.15%, a 20-fold improvement. While neither of these was considered world class, the trend was clearly in the right direction, and the division expected that in time, world-class performance could be achieved. When new products were introduced, these measures would sometimes show a deterioration in performance, but the impact was immediately recognized, analyzed, and corrective action taken to improve the quality of the product and the processes. Customers received higher-quality products, and first-pass yield increased (costs were reduced).

Using this success as a model, performance measures were implemented throughout the plant with comparable results. For example, in customer support, measures of customer satisfaction were implemented and routinely surveyed. Customer support staff were strongly encouraged to use their own judgment and take the steps they deemed necessary to satisfy customer needs. Likewise they were strongly encouraged to seek help from and give input to engineering, manufacturing, and sales departments. If, in their judgment, they felt
the problem needed senior management attention, they were encouraged to immediately contact the president or other officer of the company to seek resolution. This was rarely necessary, but demonstrated corporate commitment to customer satisfaction. Over time, the staff became outstanding and handled essentially all situations to the customer’s satisfaction, achieving a 93% overall approval rating. Other departments, likewise, showed substantial improvement in their performance—accounting improved cash flow from 53 days aged receivables to 45 days aged receivables, cut billing errors in half, provided month-end reports within 5 days of the end of the month, etc.; engineering had fewer design defects, improved design cycles, etc. Some additional lessons learned include the following:
Simply measuring data is not sufficient to ensure success. Performance measures must be mutually supportive at every level within the company and across company departmental lines. Measures that encourage an attitude of "I win if I do this" only alienate other departments and individuals. Measures that focus on the customer winning, and therefore the company winning, will ensure a greater degree of success. They must not, as the management-by-objectives (MBO) strategy tended to do in the 1970s, create narrowly focused interests and the loss of teamwork. The least desirable measure is one that results in a reduction or loss of teamwork. As such, measurements must be integrated into a hierarchy supportive of overall corporate objectives and not conflict with other departmental measures. Measurements must be mutually supportive between departments.
Measures must be displayed clearly, openly, and unambiguously. This emphasizes their importance to the company, to the customer, and therefore to the individual. It also gives employees feedback on individual, department, and company performance, and allows them to adjust their activities to improve their performance according to the measures. Processes and methods can then be revised to ensure continuing improvement. At Beta's Watergap plant, key performance measures related to output, quality, and unit cost were boldly and clearly displayed for all to see. However, on closer review, the data were nearly 3 months old. When questioned about this, the production manager replied, "We'd had a couple of
bad months and we didn't want to concern people." The philosophy should be that when you have bad news, get it out right away. If you don't, the rumor mill will capture it anyway, and amplify it. More importantly, the employees could take the view that management doesn't trust them enough to be honest with them. Of course, we all want to hear good news, but the real test of a company is often how it manages bad news, and particularly the plan of action to assure employees that the problems will be resolved.
For a given functional unit, limit the number of measures displayed to five. More than this gets very cluttered and obscures the message being conveyed. People can respond effectively to only a few measurements. Any particular manager may want to track other measures that support the key three to five, but displaying them will likely detract from the goal of using measurements to reinforce or modify behavior.
At Beta’s principal divisions, the performance measurement cascade model depicted graphically in Figure 17-1, and detailed in Table 17-1, was used to establish key measures, which cascaded from corporate to shop floor level.
Figure 17-1. Performance measurement cascade model.
Table 17-1. Performance Measurement Cascade

Corporate Measurements:
  Sales Growth
  Earnings Growth
  Return on Net Assets
  Market Share
  Safety and Environmental Performance

Business Unit Performance:
  Market Share/Sales Growth (Each Segment)
  Variable Margin Contribution
  "Innovation" New Product Revenue
  Customer Satisfaction (On-Time Deliveries; Returns, Claims, Complaints)
  Return on Invested Capital or Net Assets
  Safety and Environmental Conformance

Manufacturing Performance:
  OEE/AU and Losses from Ideal
  Unit Cost of Production
  Inventory Turns
  WIP
  Customer Satisfaction (On-Time Deliveries; Returns, Claims, Complaints)
  On-Time Deliveries
  First-Pass, First-Quality Yield
  Safety and Environmental Conformance

Area Production Units:
  OEE/AU and Losses from Ideal:
    Transition/Changeover Losses
    Availability Losses
    Performance Losses
    Quality Losses
  Process Conformance Reporting:
    Equipment Reliability
    Instrumentation Performance
    Standard Operating Procedures, Conditions
    Raw Material Quality
    Process and Product Cpk
    Operator Care Performance
  Manufacturing Performance:
    Process Conformance/Nonconformance
    Shift Handover Performance
    Housekeeping
    Operator Care Conformance
    Equipment Downtime/Life
    Other Measures in Their Direct Influence

Maintenance Performance:
  Maintenance Cost (% of PRV)
  Maintenance Cost (% of COGM)
  Planned Maintenance (PM/PDM) %
  Reactive Maintenance %
  Equipment MTBR for Critical Equipment
  % Equipment Precision Aligned/Balanced
  Lubrication Conformance

Skilled Trades:
  Average Equipment Life, Mean Time between Repairs
  Seal Life; No. of Seals per Month
  Bearing Life; No. of Bearings per Month
  No. of Leaks per Month
  % Compliance to Plan and Schedule
  Other Measures under Their Direct Influence

Stores Performance:
  Stores Value as % of PRV
  Stockout Rate
  Disbursements per Stores Employee
  Parts Inventory Turns
  Housekeeping
  Use of Integrated Logistics

Utilities Performance:
  $/lb of Steam Generated; $/N2 Consumed
  $/lb of Steam Consumed by Production
  $/kWh of Electricity Used
  Production Losses Due to Utility Faults

Accounting Performance:
  Aged Receivables
  Aged Payables
  Foreign Conversion Costs
  Invoice Accuracy
  Cash Conversion Cycle

Marketing and Sales:
  Market Penetration/Share
  Innovative Product Market Share
  Margin Growth
  Product Volume by Market
  Innovative Product Volume
  Customer Complaints
  On-Time Deliveries

Human Resources:
  % Absenteeism
  % Employee Turnover
  Injury Rate (OSHA Recordable, LTA)
  Training/Learning (Hrs/$ per Employee)
  Certified Skills
The following are additional possible measurements for establishing key performance indicators (also see reference 1). Again, select the three to five that are most important to the functional group and display those. Note that some replicate those shown in Table 17-1.
Sample Production Measures
Total production throughput (pounds, units, $, etc., total and by product line)
Asset utilization (% of theoretical capacity)
Overall equipment effectiveness (Availability × Efficiency × Quality; a worked example follows this list)
Conformance to plan (%)
Total costs ($)
Cost per unit of product ($/unit)
Maintenance cost per unit of product ($/unit)
Equipment availability by line/unit/component (%)
MTBF of production equipment (days)
Scrap rate ($ or units)
Rework rate ($, units, number)
Quality rate ($ or units)
Finished goods inventory turns
Inventory “backlog” (weeks of inventory available)
Work in process (nonfinished goods inventory, $ or units)
Overtime rate (% of $ or hours)
Personnel attrition rate (staff turnover in %/yr)
Product mix ratios (x% of product lines make up y% of sales)
Energy consumed ($ or units, e.g., kwh, Btu, etc.)
Utilities consumed (e.g., water, wastewater, nitrogen gas, distilled water, etc.)
On-time deliveries (%)
Returns ($, units)
Setup times (by product line)
Cycle times (plant, machine, product)
First-pass yield (product, plant)
Process efficiencies
OSHA injury rates (recordables, lost time per 200K hours)
Process capability (to hold specification, quality)
Training (time, certifications, etc.)
Productivity ($ per person, $ per asset, etc.)
Scheduled production and/or downtime
PM work by operators (%)
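As a quick illustration of the overall equipment effectiveness (OEE) measure listed above, here is a minimal sketch with hypothetical values:

    # OEE = Availability x Efficiency (performance rate) x Quality (first-pass yield)
    availability = 0.90    # fraction of scheduled time the line actually ran
    efficiency = 0.85      # actual production rate vs. ideal rate while running
    quality = 0.95         # first-pass, first-quality yield

    oee = availability * efficiency * quality
    print(f"OEE = {oee:.1%}")   # about 72.7%, even though each factor looks respectable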
Sample Maintenance Measures
Equipment availability (% of time)
Overall equipment effectiveness (Availability × Efficiency × Quality)
Equipment reliability (MTBF, life, etc.)
Planned downtime (%, days)
Unplanned downtime (%, days)
Reactive work order rate—emergency, run to fail, breakdown, etc. (%, %hrs, %$)
Product quality (especially in equipment reliability areas— rolling mills, machine tool, etc.)
Maintenance cost (total and $ per unit of product)
Overtime rate (% of $ or hours)
Personnel attrition rate (staff turnover in %/yr)
Rework rate ($, units, number)
Spare parts/MRO inventory turns
OSHA injury rates (recordables, lost time per 200K hours)
Process capability (to hold specification, quality)
Training (time, certifications, etc.)
Productivity ($ per person, $ per asset, etc.)
Scheduled production and/or downtime
Bearing life—actual vs. L10 life
Mean time between failure, average life
Mean time to repair, including commissioning
Planned and scheduled work/total work (%)
PM/work order schedule compliance (% on schedule)
Hrs covered by work orders (%)
PM work by operators (%)
PMs per month
Cost of PMs per month
“Wrench” time (%)
Average vibration levels (overall, balance, align, etc.)
Average lube contamination levels
Schedule compliance for condition monitoring
PDM effectiveness (accuracy of predictive maintenance by technology)
Mechanics, electricians, etc., per:
  Support person
  First line supervisor
  Planner
  Maintenance engineer
  Total site staff
Total number of crafts
Training hours per craft
Maintenance cost/plant replacement value
Plant replacement value $ per mechanic
Stores $/plant replacement value $
Stores service level—stock-out %:
  Critical spares
  Normal spares
Contractor $/total maintenance $
Maintenance $/total sales $
Maintenance $/value added $ (excludes raw material costs)
Other Corporate Measures
Return on assets
Return on equity
Return on invested capital
P:E ratio
Percent profit
Industry-Specific Measures
Refining—dollars per equivalent distillate capacity
Automotive—hours per automobile
Electric power—equivalent forced or unplanned outage rate
Leading and Lagging Indicators

Finally, it's important that we review again the concept of leading and lagging indicators. As we've discussed previously, lagging indicators tend to be high-level business measures that reflect accomplishment of the goals related to the business strategy. Some examples include unit cost of production, return on net assets, gross profit per product line, market share, and so on. They're a bit like looking in the rearview mirror to see where you've been. They also tend to cascade down to the shop floor, where you'll need some leading indicators. These are the things that you must do to improve the lagging indicators. Some examples here for operations might include operator care/PM compliance, process conformance/nonconformance, equipment downtime or delay times, housekeeping conformance, production schedule compliance, first-pass, first-quality yield, and so on. Examples for maintenance might include percent PM compliance, compliance to plan and schedule, mean time between repairs for critical equipment, seal or bearing life, number of seals/bearings used per month, lubrication compliance, number of leaks present per month, and so on. Leading indicators measure your ability to do the right things day to day, and to reduce the defects that are resulting in failures in your business system. Lagging indicators measure the results of doing the right things. And, as previously noted, whether a measure is leading or lagging can depend on where you are in the organization. Leading for one might be lagging for another.

Finally, one last comment on performance measures—they don't need to be complex, as may be suggested by these listings. For example, at one of Beta's chemical plants, a large rotary kiln was experiencing frequent unexpected stops, resulting in lower production and higher costs. The plant manager started measuring and displaying prominently the number of kiln stops per week, and stressing to people that this very serious problem had to be solved. Within 3 months kiln stops had been reduced 10-fold. This was a simple, but very effective, measure that helped improve performance.
Return on Net Assets or Return on Replacement Value

Many corporations use return on net assets (RoNA), or some comparable measure, as part of determining operational and financial performance. While this is an excellent measure, it may be more appropriate to measure specific plants on the basis of return on replacement value. In those cases where a plant has been depreciated to where it has a low book value, return on net assets may not truly reflect the plant's performance, as compared with a new plant. It is probably more appropriate to put all plants on a normalized basis and truly judge the plant's operational performance.
Plant Age and Reliability

For those who think allowances should be made for older, less reliable plants, data from continuous process plants indicate that older plants are no less reliable than newer plants. Ricketts2 found no correlation between plant reliability and plant age, location, capacity, or process complexity. Beta's experience has been that there is some reduced reliability, as measured by uptime, during the first 2 years or so of a new plant operation, and there is some reduced reliability after about 30 years into operation, if the plant has not maintained its infrastructure, e.g., its piping bridges, tanks, roofs, etc. If it has, then there is no correlation between age and plant reliability.
Measure for Weaknesses

One of Beta's divisions also subscribed to the philosophy that measurements should expose weaknesses, not just strengths. There is a natural tendency in almost everyone to want to look good. Hence, if measurements are made, we want to score high, looking good. However, this tendency can at times lead to a false sense of security. For example, at one Beta plant, the plant manager reported an uptime of 98%. However, on closer review, this was being measured against an old production rate that was established before a debottlenecking effort. After factoring in the new capacity and all the other issues, the plant's uptime was more like 70%, indicating a 30% opportunity rather than a 2% opportunity. Benchmarking against the best companies will help minimize the tendency to use measures that ensure looking good. Making sure that you're setting very high standards for your measurements will also help. As stated, measurements should expose your weaknesses, not just your strengths. Only then can these weaknesses be mitigated, or converted into strengths. If you measure it, you will manage it, and it will improve.
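A minimal sketch of the baseline problem in that example; the production figures are hypothetical and chosen only to reproduce the 98% vs. roughly 70% comparison.

    # Reported uptime depends entirely on the capacity you measure against
    actual_output = 980               # tons/week actually produced (hypothetical)
    old_nameplate_capacity = 1_000    # capacity basis established before debottlenecking
    true_max_capacity = 1_400         # demonstrated capacity after debottlenecking

    reported_uptime = actual_output / old_nameplate_capacity   # 98% -> "only 2% left to gain"
    real_uptime = actual_output / true_max_capacity            # 70% -> a 30% opportunity
    print(f"reported {reported_uptime:.0%} vs. real {real_uptime:.0%}")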
References

1. Haskell, B. Performance Measurement for World-Class Manufacturing, Productivity Press, Portland, OR, 1991.
2. Ricketts, R. "Organizational Strategies for Managing Reliability," National Petroleum Refiners Association, Annual Conference, New Orleans, LA, May 1994.
Epilogue
18
There is a tide in the affairs of men which, taken at the flood, leads on to fortune.
Brutus, from Shakespeare's Julius Caesar

Beta International is at a major juncture in its corporate history. It can, through the proper leadership and direction, reestablish itself as a world-class performer. For those who achieve superior performance, global competition whets the appetite, pumps the adrenaline, and spurs them to new levels of performance. For those who do not rise to the occasion, this ever-intensifying competition represents a threat from which to retreat. Bob Neurath is convinced that the globalization of economic forces is an opportunity waiting for the best companies. He is also determined that Beta will rise to this challenge by creating clear expectations for excellence in all business activities.

Thus far, several divisions are in fact rising to this challenge and meeting these expectations. Manufacturing performance has improved substantially—at one division, uptime is up 15% and unit costs are down 26%; at another division, OEE is up 35% and unit costs are down 27%; at another division, maintenance costs have been reduced by over $40M. At these particular divisions, this has all been done without any downsizing. When combined with other improvements related to restructuring its market focus and new product development, stock price has risen substantially, more than doubling for one division.
Prior to Bob Neurath’s arrival, Beta International had suffered from overconfidence and a lack of investment, focusing too much on making the short term look good. This approach had not served Beta well, ultimately resulting in a downturn in corporate performance, and creating a sense of retreat in the organization. Reestablishing a well-founded sense of confidence, pride, and trust in senior management’s leadership has been a daunting task. With these successes in hand, Beta’s plan for this follows the processes outlined in this book, but it also includes continued improvements in marketing and advances in research and development. Some markets or market segments have been abandoned, either in the form of sell-offs of businesses or by discontinuing certain products. This has been done where there was a lack of confidence in Beta’s technology, performance, or leadership to achieve excellence in those markets. Other changes are also likely. Beta will also be looking for businesses in which to invest, particularly in those markets where it feels strong. Research and development will be enhanced and tied more closely to those markets that represent Beta’s strengths. Manufacturing excellence is now a requirement, not an option, and the marketing and manufacturing strategies will continue to become much more closely aligned and mutually supportive.
Beta's 10-Point Plan

Beta has had success in several of its divisions. However, these individual success stories must be transformed into the corporation's overall behavior. Like many corporate leaders today, Bob Neurath expects that Beta should be first or second in all its markets, or have a clear, measurable path for achieving that position. Trying to be all things in all markets only dilutes management focus, leading to mediocrity. The plan for accomplishing this is described in the following section and begins with reviewing all of Beta's markets and current market position worldwide. Strengths and weaknesses will be reassessed in those markets, and a plan for achieving market leadership will be put forth, including the integration of R&D requirements, as well as manufacturing performance and strategy. Beta's 10-point plan for manufacturing excellence is:

1. Align the marketing and manufacturing strategies, establishing the manufacturing performance necessary for market
leadership—unit cost, quality, uptime, on-time deliveries, etc.

2. Establish performance benchmarks as the beginning of the process for achieving and sustaining manufacturing excellence in all markets. Compare all manufacturing plants with this standard of excellence.

3. Measure uptime/OEE and losses from ideal performance. Establish the root cause of these losses from ideal. Use these losses as the basis for establishing improvement priorities.

4. Develop a plan of action for minimizing these losses from ideal and ensuring that best practices are being applied to that end. The culture of Beta must be changed from one of having to justify applying best practice to one of having to justify not doing best practice. Best practice and manufacturing excellence are requirements, not options. Getting the basics right, all the time, is fundamental to making best practice an inherent behavior.

5. Establish and apply best practices, using the models developed at successful plants, for the way Beta designs, buys, stores, installs, operates, and maintains its plants, that is:

a. Capital projects must be designed with lowest life-cycle cost in mind, not lowest installed cost, incorporating the knowledge base from operations and maintenance to minimize costs through better up-front design.

b. Buying decisions must incorporate life-cycle cost considerations. Design, operations, and maintenance must use their knowledge base to provide better specifications and feedback. Supplier strategic alliances must be developed that focus on operational and business success, not just low price.

c. Stores must be run like a store—clean, efficient, low stock-outs and minimal stock, high turns, low overhead, accurate cycle counts, and driven to support manufacturing excellence.

d. Installation must be done with great care and precision, and to an exacting set of standards. Commissioning to validate the quality of the installation must likewise be done to an exacting set of standards, both for the process and for the equipment. These exacting standards apply to both skilled trades and contractors.

e. The plants must be operated with great care and precision. SPC principles must be applied by operators who are trained in best practices related to process and operational requirements. Good shift handover practices, operator basic care and PM, common set points, ownership, and teamwork become standard.

f. Maintenance must become a reliability function, not a repair function. Working with design and operations to minimize equipment failures is a requirement. Balancing preventive, predictive, and proactive maintenance methods to ensure maximum equipment life and minimum reactive maintenance is also a requirement. TPM, RCM, PM Optimization, RCFA and CMMS models, as well as other tools, will be used to facilitate the effort.

6. Contractors will be held to the same high standards as the balance of the organization, and a policy will be established for the use of contractors. In particular, safety performance expectations will be strengthened, as well as commissioning and housekeeping standards.

7. Each division, and each plant within each division, will be expected to put forth its current performance relative to these benchmarks and best practices, and its plan for improvement, along with the anticipated value of these improvements and the investment required to achieve them.

8. Organizational behavior and structure will be modified to facilitate teamwork for applying manufacturing excellence principles. The clear goal of the teams is to ensure manufacturing excellence. These efforts will be facilitated by small structural changes to the organization that will include a corporate manufacturing support function for benchmarking and best practices in operations/maintenance, virtual centers of excellence, and reliability engineers at each plant. Beta's mission statement will be reviewed, and made clear and easily remembered, creating a common sense of purpose.

9. A corporate strategic training plan will be established in cooperation with the divisions and plants that supports continuous learning and manufacturing excellence. Skill requirements will be established in each of the major elements for manufacturing excellence.

10. Performance measurements that ensure manufacturing and business excellence will be reviewed, and modified as necessary, to ensure that Beta measures, manages, and improves every step of the way. These measures must cascade from the executive office to the shop floor in a mutually supportive manner.

Bob Neurath is determined to assert his leadership and make Beta International a premier company worldwide. He understands that manufacturing excellence is a necessity for Beta's success and that there is no "magic bullet." Costs must come down, not through cost-cutting, but rather as a consequence of best practice that includes clear expectations about cost reductions. This view has only been reinforced by an analysis of Beta's plants, described in Appendix A, which demonstrates how management support and plant culture, organization and teamwork, and applying best practice day to day in operations, maintenance, training, performance measures, etc., have a positive influence on performance. Conversely, not doing best practice had a negative impact. Business success requires a solid strategy that integrates marketing, manufacturing, and R&D, and ensures excellence in each. Beta International is well on its way to superior business performance.

Manufacturing excellence is about doing all the basics right, all the time. Business excellence requires this, but it also requires excellence in marketing, sales, customer support, and distribution systems, as well as research and development, and a host of things. If marketing and sales are the beacon of light for your products, then manufacturing is the cornerstone of the lighthouse, and R&D is the power supply. Generating the capital required to put all those systems in place demands adequate gross margins from your manufacturing systems.

I know of no single plant that is doing all the basics right, all the time, though there may be some. Always there is room for improvement, with Utopia being only a concept, not a reality. So, please don't be discouraged about not doing everything perfectly. As McCarthy said in Blood Meridian, "No man can put all the world in a book." Likewise, all the issues related to manufacturing excellence cannot be captured in this book; clearly there are many, many additional models and issues related to manufacturing, to
marketing, to research and development, to corporate training, to safety performance, etc., that are necessary for business excellence. I hope this has helped get you started in improving your manufacturing performance, giving you a map for doing the basics well, so that you “will not lose your way.” Manufacturing excellence, indeed excellence in any area, requires leadership to create the environment for excellence, a good strategy, a good plan, and good execution. All are necessary; none are sufficient; and leadership, the art of getting ordinary people to consistently perform at an extraordinary level, is the most important. In closing, I’d like to offer a few personal observations. I have a concern relative to leadership as it is practiced in most organizations. With all the downsizing and threat of downsizing, albeit some of it necessary, it’s not likely that people will consistently perform, even at an ordinary level, when threatened with their jobs, cost-cutting, downsizing, etc., nor are they likely to be very loyal. Extraordinary performance requires that people be instilled with a greater sense of purpose, or, as David Burns has said, “superordinate goals” that transcend the day-to-day grind most of us endure. People also need to believe that their employer has their best interests at heart and be treated with dignity. At the same time employers must insist on high standards and set high expectations. If this happens, it is more likely they will rise to the occasion and achieve excellence. Saving money is good, but it’s just not likely that you can save your way to prosperity. Prosperity over the long term takes a long-term vision, regular positive reinforcement, doing value adding activities, ensuring growth, etc. This vision and sense of purpose is captured in the story of three men working in a rock quarry, described in Chapter 15. Recall that the first was asked what he was doing, to which he replied, “I’m just breaking up rocks. That’s what they pay me to do.” The second on being asked, replied, “I’m making a living for my family, and I hope some day they’ll have a better living.” The third replied, “I’m helping build a temple for the glory of God.” Notwithstanding any religious inclinations you may or may not have, who has the greater sense of purpose? Who is more motivated? Who, according to Maslow, is “self-actualized”? Clearly, those leaders who create in their employees a sense of purpose in their daily lives, and help them to feel that they are contributing to the success of the organization, are more likely to be successful. Some executives don’t have enough understanding of this—they are managers, not leaders, and what people need in most organizations is leadership. I encourage you to
work harder at being a leader, creating the environment for success, giving people a sense of purpose, and something to be proud of. Yet, we still have to live with the short term. I often do small workshops for shop floor people. One of the questions I almost always ask—“Do you have a retirement plan, like a 401K, or some other plan?” Most all do. “What’s in your plan’s portfolio?” Most have a large chunk of mutual funds of publicly traded stock. “What do you expect of your mutual fund?” Marked growth in value at a rate substantially higher than inflation. “So, when you open your quarterly statement, you expect it to be up, and, if it’s not, you start fretting, right?” Yes. “What happens if your company is not one of those companies considered worthy of your fund manager investing your money?” Its stock price goes down. “Then what happens?” The company starts cutting costs. “Who’s creating this expectation of ever increasing performance?” As one man said, “You mean we’re doing it to ourselves?” You tell me. Capitalism is a harsh taskmaster and very Darwinian. We have to succeed in the short term, while planning for success in the long term, so that we can all meet our own demanding expectations. When you begin your improvement effort, as with most systems, things are likely to get worse before they get better, and all new systems have a lag time before the effects are realized. Putting new systems in place just seems to be that way. With this in mind, patience may need to be exercised as new processes are put in place; this is difficult for most executives. Oh, and please go easy on using the phrase “world class.” It becomes so common as to be trivialized and meaningless in many companies, particularly on the shop floor. If you do use the phrase, mean it, and give it specific meaning—for example, 50% market share; 87% uptime; $2.00 per pound; 1.5% of plant replacement value; etc., might be best in class benchmarks, or real world-class goals. Change is a difficult thing to manage for most organizations. Having little or no change is not healthy. Alternatively, too much change is not healthy. As humans we like stability, and we like change. Too much of either and we become bored, or stressed out, neither of which is effective. So it is with organizations. Try to use the principles in this book to promote change, but at a rate and under circumstances in which people feel in reasonable control and that provide a sense of self-satisfaction. After good practice in one area becomes
habit, then it’s time to add something new and to continue to learn and improve. As noted earlier in this book, physics teaches that for every action there is an equal and opposite reaction. Seemingly for every business strategy, there is probably an equal and opposite strategy—economies of scale for mass production vs. cell, flexible, and agile manufacturing for mass “customization”; outsourcing vs. loyal employees for improved productivity; niche markets vs. all markets and customers; centralized vs. decentralized management; and so on. These strategies are often contradictory and add credibility to Gilbert Jones’s comment—“There are no solutions, only consequences.” I trust this book will help you select those models, practices, and systems that optimize consequences in light of your objectives. Thank you for considering my suggestions. I wish you every success.
A
Appendix
World-Class Manufacturing—A Review of Several Key Success Factors*
There are no magic bullets.
R. Moore
Introduction

What are the critical success factors for operating a world-class manufacturing organization? How much influence does each factor have? What should be done "first"? How are these factors related? And so on. The following analysis confirms what we've known intuitively for some time. That is, if you put in place the right leadership, the right practices, and the right performance measurements, superior performance will be achieved. It also demonstrates that there is no "magic bullet" for superior performance. All the factors had some influence, and many factors were interrelated. While it's almost impossible to review all the factors for success, the following characterizes the relative influence of several key success factors for manufacturing excellence. For several years now, data have been collected regarding critical success factors as they relate to reliability practices and manufacturing excellence. These factors are defined in the following sections.
*I am grateful to Mike Taylor and Chris Crocker of Imperial Chemical Industries for their support in this analysis.
Definitions of Key Success Factors
Uptime. Critical to manufacturing excellence, uptime is the focal point of the following factors. Compared with a theoretical maximum annual production rate, at what rate are you operating? For purposes of this discussion, uptime has been defined in this manner. However, some companies refer to this as utilization rate. It is understood that uptime typically excludes the effects of market demand. For example, if your utilization rate as a percent of theoretical maximum were 80%, of which 10% was due to lack of market demand, then uptime in some companies would be adjusted to 90%. As noted, the choice has been made here to define uptime more stringently as a percent of maximum theoretical utilization rate.
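A minimal sketch of the two reporting conventions just described, using the figures from the text; the only assumption is normalizing theoretical maximum production to 100 units.

    # Two ways to report "uptime" for the same plant
    theoretical_max = 100.0     # theoretical maximum annual production (normalized)
    actual_production = 80.0    # utilization rate = 80% of theoretical maximum
    lost_to_no_demand = 10.0    # of the 20 points lost, 10 were due to lack of market demand

    strict_uptime = actual_production / theoretical_max                          # 80%, the definition used here
    demand_adjusted = (actual_production + lost_to_no_demand) / theoretical_max  # 90%, some companies' "uptime"
    print(f"strict {strict_uptime:.0%} vs. demand-adjusted {demand_adjusted:.0%}")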
Leadership and Management Practices
Management Support and Plant Culture—Measures how much plant personnel believe that management has created a reliability-driven, proactive culture in the plant, and the degree to which management is perceived to be supportive of good operating and maintenance practices.
Organization and Communication—Measures how much plant personnel believe that a sense of teamwork, common purpose, and good communication has been created within the plant.
Performance—Measures how much plant personnel believe that key reliability and manufacturing excellence measures are routinely collected and displayed and used to influence improved performance.
Training—Measures how much plant personnel believe that the plant has developed a strategic training plan that supports business objectives and is implementing the plan in a comprehensive manner to ensure skills for good operating and maintenance practices.
Operating and Maintenance Practices
Operational Practices—Measures how much plant personnel believe that the plant is operated with a high degree of consistency, and applies good process control methods, e.g., common set points, SPC, control charts, good pump and valve operation, etc.
Reactive Maintenance—Measures how much your organization’s maintenance is reactive, i.e., what percentage of your maintenance effort is reactive, e.g., run to failure, breakdown, emergency work orders, etc.
Preventive Maintenance—Measures how much plant personnel believe that preventive maintenance (interval-based) practices are being implemented, e.g., use of a maintenance management system (typically computerized), routine inspections and minor PM, comprehensive equipment histories, comprehensive basis for major maintenance activities, etc.
Predictive Maintenance—Measures how much plant personnel believe that condition monitoring practices are routinely being employed to avoid catastrophic failures, optimize planned maintenance, commission the quality of newly installed equipment, etc.
Proactive Maintenance—Measures how much plant personnel believe that the maintenance organization works diligently to eliminate the root cause of equipment failures through improved design, operating, and maintenance practices.
Stores Practices—Measures how much plant personnel believe that stores is well managed, i.e., it is run like a store—clean, efficient, few stock-outs (but not too much stock), good inventory turns, etc.
Overhaul Practices—Measures how much plant personnel believe that overhauls, shutdowns, turnarounds, etc., are well performed, e.g., advance planning, teamwork, condition monitoring to verify the need for maintenance, commissioning afterward to verify the quality of the work, etc.
The data were collected from a survey of attendees at reliability-based manufacturing workshops, in which the attendees were asked to assess their uptime (or, more accurately, their plant utilization rate) and to judge, on a scale from “0 to N,” the quality of their leadership and management practices, as well as their operating and maintenance practices. The study comprises data from nearly 300 plants, and these practices were then analyzed as to their potential influence. It should be noted that the workshops tended to consist mainly of maintenance staff, which could bias the data in some instances. Summary data for each factor are provided in Tables A-1 and A-2, and graphically in Figure A-1.
General Findings

Perhaps the most striking thing about Table A-1, and its graphical depiction in Figure A-1, is the marked difference in scores between continuous manufacturers and batch/discrete manufacturers. These differences are not limited to management practices, but are apparent throughout the entire spectrum of management, maintenance, and operating practices, and are in some cases quite substantial. This is more than just the nature of the manufacturing process; it represents a fundamental cultural difference between the two types of manufacturers. For example, continuous manufacturers fully understand that if the plant is down, production losses and their attendant costs are huge, and they appear to work much harder than batch manufacturers to put practices in place to avoid those losses. Batch manufacturers tend to believe that if a batch or line is disrupted, they can always “make it up.” It should be emphasized, however, that time lost is time lost forever. You don’t ever really make it up, and inefficiencies in manufacturing show up as additional costs, poor delivery performance, waste, or increases in work in process and finished goods, which in the long run are intolerable. Additional analysis using principal components analysis (not detailed here) reinforces the conclusion that practices between the two types of plants are fundamentally different. The data strongly indicate that they operate in two different, almost exclusive, domains of performance and behavior. Further, this difference also exhibits itself in other ways. These practices need not be fundamentally different. However, they are, and the consequence is that batch and discrete manufacturers typically have much further to go in establishing world-class performance in the categories shown than do continuous manufacturers.
Table A-1
Summary of Reliability-based Manufacturing Seminar Scores

    Measurement                              Average Score of     Average Score of          Typical Score of
                                             Continuous Plants    Batch/Discrete Plants     World-Class Plants (Note 1)
    Management support and plant culture           57%                   51%                      80%+
    Organization and communication                  59%                   50%                      80%+
    Performance                                     60%                   44%                      80%+
    Training                                        50%                   48%                       —
    Operations practices                            57%                   50%                      80%+
    Reactive maintenance                            46%                   53%                      10% (Note 2)
    Preventive maintenance                          57%                   47%                      80%+
    Predictive maintenance                          54%                   27%                      80%+
    Proactive maintenance                           50%                   31%                      80%+
    Stores practices                                47%                   36%                      80%+
    Overhaul practices                              68%                   50%                      80%+

Notes:
1. These scores are typical of plants with superior levels of performance, typically reporting 80% to 85%+ OEE for batch plants and 90% to 95%+ uptime for continuous plants. In fact, the best plants typically reported scores in the upper quartile in essentially all categories. The fact that a given individual or group reports a high score in any of these areas does not necessarily mean that the plant is world class.
2. Generally, the lower the score in reactive maintenance, the better the plant is operated. The best plants operate at 10% or less reactive maintenance.
3. In the winter 2000 edition of SMRP Solutions, a newsletter by the Society for Maintenance and Reliability Professionals, Dr. Ken Kirby of the University of Tennessee published data on maintenance similar to that shown in Table A-1: from 43 plants surveyed, an average of 55% reactive maintenance, 31% preventive maintenance, 12% predictive maintenance, and 2% other was reported.
Table A-2
Uptime for Typical and Best Manufacturing Plants

    Type of Manufacturing Plant                      Uptime    Reactive Maintenance
    1. Typical continuous process manufacturer         80%             46%
    2. Best continuous manufacturers                   95%+            10%
    3. Typical batch or discrete manufacturer          60%             53%
    4. Best batch or discrete manufacturers            85%+            10%
As shown in Table A-2, it was also found that a typical continuous manufacturer, e.g., paper, chemicals, refining, etc., would report uptimes averaging 80%, while the best (highest) would report uptimes in the range of 95%+. A typical batch or discrete manufacturer, e.g., automotive, food, etc., had an average uptime of 60%, whereas the best would report 85%+. It was also found that, generally, the manufacturers with the highest uptimes also reported the lowest levels of reactive maintenance, e.g., 10%, while the typical continuous manufacturer averaged 46% reactive maintenance and the typical batch or discrete manufacturer averaged 53%. Finally, for the group of plants studied, every 10% increase in reported reactive maintenance corresponded to an expected 2% to 3% reduction in uptime—the higher the reactive maintenance, the lower the uptime.
Figure A-1. Comparison of operational practices—batch and continuous plants.
While there was considerable scatter in the data, the scatter was often explained by a few factors: in many cases the data may have lacked precision; installed spares helped to mitigate downtime, though the installed spares were sometimes unreliable themselves; many plants were reported to be very good (fast) at doing reactive maintenance; and cyclical demand increased the scatter in some plants. However, for most plants, two factors added even further significance: (1) the impact of reactive maintenance on lost uptime was often worth millions of dollars in lost income, not to mention out-of-pocket losses or the need for increased capital (to add equipment for making up production losses); and (2) reactive maintenance was reported to typically cost two or more times as much as planned maintenance. An additional consideration, perhaps as important but more difficult to measure, is the defocusing effect that reactive maintenance has on management, drawing attention away from strategic management and forcing far too much time to be spent on “fire-fighting.”

For the plants studied, uptime was positively correlated with all leadership/management practices and with all operating/maintenance practices except one—reactive maintenance. That is to say, every factor except reactive maintenance had a positive influence on uptime: the higher the score in good practices, the higher the uptime. Further, all of those same factors were negatively correlated with reactive maintenance; that is, when reactive maintenance levels were high, all leadership and management practices, as well as all operating and maintenance practices, were poorer. Finally, all factors except reactive maintenance were positively correlated with each other; that is, they influenced each other in a positive way in daily activities. Indeed, management and operating practices were more strongly correlated with each other than they were with uptime, indicating the mutually reinforcing influence these factors have in creating a positive culture within a given organization. See Figures A-2 through A-7 for details.

As shown in Figure A-2, uptime was positively correlated with all the factors under consideration (except, of course, reactive maintenance), with correlations typically in the range of 0.3 to 0.4 and no single factor being statistically dominant, suggesting the need to do many things well at once.
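For readers who want to explore this kind of relationship in their own survey data, a minimal sketch follows. The plant scores, column names, and any numbers it prints are purely hypothetical; it illustrates the general idea of correlating practice scores with uptime, not the study’s actual method.

    import numpy as np
    import pandas as pd

    # Hypothetical survey data: one row per plant, practice scores (0-100) and uptime (%).
    plants = pd.DataFrame({
        "uptime":                 [78, 92, 61, 85, 70, 95, 66, 88],
        "reactive_maintenance":   [48, 12, 60, 25, 50, 10, 55, 20],
        "management_support":     [55, 82, 40, 75, 50, 88, 45, 78],
        "preventive_maintenance": [50, 80, 38, 72, 52, 85, 41, 76],
        "predictive_maintenance": [45, 78, 25, 70, 40, 90, 30, 74],
    })

    # Pearson correlations of every factor with every other factor.
    corr = plants.corr()
    print(corr["uptime"].sort_values(ascending=False))    # practices vs. uptime
    print(corr["reactive_maintenance"])                    # practices vs. reactive maintenance

    # Simple linear fit of uptime against reactive maintenance. In the study,
    # each 10-point rise in reactive maintenance corresponded to roughly a 2- to
    # 3-point drop in uptime; this toy data set will produce its own slope.
    slope, intercept = np.polyfit(plants["reactive_maintenance"], plants["uptime"], 1)
    print(f"uptime change per 10-point rise in reactive maintenance: {10 * slope:+.1f} points")

The sign pattern of such a correlation matrix (everything positive with uptime except reactive maintenance, which is negative with everything) is the shape of result described in the text.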
Figure A-2. Uptime—correlation to key success factors.
This also raises the question of whether management support for good maintenance and operating practice creates a culture in which those practices are more successful, or whether operating and maintenance practices alone are sufficient. In my experience, plants with high uptime always have strong management support for good practice, in both operations and maintenance.
Figure A-3. Reactive maintenance—correlation to key success factors.
Figure A-4. Management support and plant culture—correlation to key success factors.
The reverse is not always true; that is, plants that have reasonably good (though not the best) maintenance practices will not necessarily have high levels of management support or good uptime.
Figure A-5. Organization and communication—correlation to other success factors.
Figure A-6. Performance—correlation to key success factors.
In these cases, the people at the lower levels of the organization have worked hard to put some good practices in place without the benefit of strong senior management support, and they have achieved some improvement. Unfortunately, they are generally limited to a level of performance that is less than world class.

Apparently, numerous factors affect uptime and help lower operating costs. None alone appears sufficient; all appear necessary in some degree. What can be said with high confidence is that the best plants create an organizational culture in which the leadership of the organization is highly supportive of best practices, both from a managerial perspective of promoting teamwork, performance measurement, training, craftsmanship, etc., and from an operating perspective of fostering best operating and maintenance practices. Further, the “less than best” plants appear to focus more on arbitrary cost-cutting, e.g., head count reduction, budget cuts, etc., without fully understanding and changing the basic processes that lead to “less than best” performance. In these plants, arbitrary cost-cutting is believed likely to have a positive effect on manufacturing performance. However, this perception is not necessarily supported by the data presented in Chapter 1, which concluded that a cost-cutting strategy is not a high-probability approach for long-term success.
Figure A-7. Training—correlation to key success factors.
The best plants focus on putting best practices in place and empowering their people in those practices; as a result, unnecessary costs are not incurred, and costs come down as a consequence of good practice. This approach is also considered much more readily sustainable over the long term. It must be emphasized that these data represent typical values and ranges and should not necessarily be considered conclusive for any particular business. For example, a best-practices paper plant manufacturing coated paper might have an uptime of 87% and be considered world class, even though it is not at a 90% to 95% rate. This can be because of any number of factors, including the nature of the process, the product mix and number of products being made, management policy regarding finished goods stock, on-time/in-full requirements, etc. These other business factors must also be considered in determining what uptime is optimal for a given manufacturing plant. The same could be said for all other types of plants.
Summary

These data lend strong support to what we’ve known intuitively for some time: put in place the right leadership, the right practices, and the right performance measurements, create an environment for teamwork and a common sense of purpose about strategy and performance, and superior performance will be achieved.

Not all factors that could influence world-class performance were reviewed. For example, marketing and/or research and development efforts that lead to superior products and market demand, or to ease of manufacture, were not considered in this study. Nor were operational issues, such as the level and application of redundant equipment and in-line spares, just-in-time policies, on-time delivery policies, specific design policies, etc. These and other factors will clearly affect uptime, other operating and maintenance practices, and ultimately plant performance. However, the factors that were analyzed do indicate that a major portion of the variability can be accounted for by the parameters measured, and they provide compelling reasons to ensure that these practices are in place.

Finally, this effort is ongoing and continuously being improved. It is anticipated that additional analysis and information will be available at a later time. In the interim, it is hoped that these results will be beneficial to you and your organization.
Appendix B
Reliability Manager/Engineer Job Description
Insanity is doing the same old things in the same old way and expecting different results.
Rita Mae Brown

The following is offered as a model job description for a reliability manager, engineer, or specialist who facilitates the change process. Many of Beta’s plants are implementing this position, or a variant of it, to ensure focus and facilitation of the reliability improvement process for manufacturing excellence. While likely useful, it is not essential that the individual be an engineer, but it is strongly recommended that he or she have 10+ years of experience in an operating plant and a strong work ethic, and be capable of acting as both team leader and team member. It is likely that the individual will not initially be able to do everything identified in the following text, and that the role will evolve as the improvement process takes effect. Ultimately, the role may not even be necessary, if and when the practices become second nature and habit. In any event, this individual might be expected to perform the following kinds of tasks.

Loss Accounting. One of the fundamental roles of a good reliability engineer is to focus on uptime/output losses and the causes of those losses via Pareto analysis. This does not mean that this person would necessarily create the database for identifying the losses, but rather that he or she would use existing databases to the extent possible for the analysis. For example, a review of a given plant might indicate that most of the unplanned losses are a result of stationary equipment; the second biggest loss may be related to instrumentation; the third to rotating machinery; and so on. A good reliability engineer would focus on those losses on a priority basis, do root cause failure analysis, develop a plan for eliminating those losses, ensure the plan’s approval, and then facilitate its implementation. In doing this, he or she would work with the production, engineering, and maintenance departments to ensure buy-in, proper analysis, support teams, etc., for the planning and implementation process. Some issues may not be within his or her control. For example, the need for a full-time shutdown manager to reduce losses related to extended shutdowns (e.g., issues related to organization, management, long-range markets, etc.) would generally be passed up to management for resolution.
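As a concrete illustration of the kind of Pareto tabulation described above, here is a minimal sketch; the loss categories, hours, and field names are hypothetical, not data from any Beta plant.

    import pandas as pd

    # Hypothetical downtime log: one record per event, with its cause category and hours lost.
    events = pd.DataFrame({
        "cause": ["stationary equipment", "instrumentation", "rotating machinery",
                  "instrumentation", "stationary equipment", "operations",
                  "stationary equipment", "rotating machinery"],
        "hours_lost": [40, 12, 18, 9, 55, 6, 30, 11],
    })

    # Sum hours by cause, rank the causes largest first, and add a cumulative
    # percentage -- the heart of a Pareto analysis.
    pareto = (events.groupby("cause")["hours_lost"]
                    .sum()
                    .sort_values(ascending=False)
                    .to_frame())
    pareto["cumulative_pct"] = 100 * pareto["hours_lost"].cumsum() / pareto["hours_lost"].sum()
    print(pareto)   # the top one or two causes usually account for most of the loss

The engineer would then subject the top one or two categories to root cause failure analysis and track whether the hours lost in those categories actually fall.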
Root Cause Failure Analysis (RCFA). Once losses have been categorized and a tentative understanding has been developed, the reliability engineer will begin to perform, and/or facilitate, root cause failure analysis and prospective solutions for the major opportunities for improvement. Note that root cause failure analysis is a way of thinking, not something that requires the individual to have all the answers or be an expert in everything. Some of these RCFA issues may be equipment specific, e.g., the redesign of a pump, while others may be more programmatic, e.g., the need for a procedure for precision alignment. The reliability engineer would be responsible for analyzing losses and then for putting forth a solution based on root cause analysis.

Managing the Results of Condition Monitoring Functions. In some of the best companies, the so-called predictive maintenance technologies (condition monitoring) have all been organized under the direction of a single department manager. That is not to say that this individual or his or her staff actually collects all the data or has all the technology. In some cases, this individual may contract the analysis to external or internal suppliers, and in some cases all the data collection may be done by contractors. Rather, it is to say that he or she is responsible for the quality of the data provided by the suppliers, for the integration of the data, and, more importantly, for the decision-making process that results from the data. Generally, this person would be responsible for ensuring that the following types of technologies are applied and used in an integrated way to improve reliability:
Vibration
Oil
Infrared
Ultrasonic (thickness, leak detection)
Motor Current
Corrosion Detection
Other NDT Methods
Overhaul/Shutdown Support. Further, this individual would work to correlate process data (working with process and production people) with condition monitoring technology to “know equipment condition,” and to use that knowledge to help:
Plan overhauls and shutdowns—do what is necessary, but only what is necessary; link equipment condition and maintenance requirements to stores, tools, resources, etc., in cooperation with maintenance planning.
Commission equipment after shutdown—in cooperation with production, commissioning not only the process but also the quality of the installed equipment, using applicable technologies, e.g., vibration, motor current, oil, etc., and specific standards for acceptance.
Proactive Support. Finally, and perhaps related to the previous efforts, the reliability manager would also provide proactive support in the following areas:
Work with the design staff to eliminate the root causes of failures through better designs.
Work with purchasing to improve specifications for reliability improvement and to improve the quality and reliability of suppliers’ scope of supply.
Work with maintenance to ensure precision practices in installation efforts—precision installation of rotating machinery and, in particular, alignment and balancing; precision
installation of piping and infrastructure; precision handling and installation of electrical equipment; precision calibrations; precision power for instruments and equipment; etc.
Work with stores to ensure precision and quality storage practices for retaining equipment reliability, e.g., proper humidity, temperature, and electrostatic discharge control for key, applicable components; proper storage of equipment to preclude corrosion; turning of shafts on motors periodically; proper bearing and critical equipment storage and handling.
Finally, but perhaps most importantly, work with operations to identify losses related to lack of process consistency, e.g., poor process chemistry control; poor process parameter control (temperature, pressure, flow, etc.); poor control of transition modes for both process chemistry and physical parameters; poor valve and pump operational practices; and the application of basic care by operators to the production equipment.
Facilitator/Communicator. Perhaps the most important role of the reliability engineer/manager is to facilitate communication among production, design, and maintenance for eliminating the losses that result from poor practices. Often these practices are not attributable to a single person or department, but somehow are “shared” by two or more groups. The role of the reliability engineer is to facilitate communication and common solutions, without seeking to “find fault” anywhere. His or her goal is to help create common “superordinate” goals related to uptime and reliability improvement, without seeking to blame anyone for the current losses. It is understood that the reliability engineer can’t do all these things at once, and that’s why the loss accounting is so important for providing a sense of priority and purpose for the limited time and resources available.
Index
10-point plan, 434–40 best practices, 435–36 contractors, 436 losses minimization action plan, 435 marketing/manufacturing alignment, 434–35 organizational behavior/structure, 436 performance benchmarks, 435 performance improvement plans, 436 performance measurements, 437 strategic training plan, 436–37 uptime/OEE measurement, 435 Accounting performance, 425 Alignment machinery, 144 organization, 369–70 precision, 244–46 rotating equipment, 73 All-contractor projects, 111
Area production units, 424 Assets condition of, 55, 95 deterioration, 67 fixed, 414 new capital, 127 people, 414, 415, 416 startup and operability, 111 valuation, 57 Asset utilization rate, 7 defined, 19 support, 25 use of, 58 Audits, benchmarking, 97–99 Availability actual, 22 defined, 20 equipment, 261 sample calculation, 26 Bags cutters/trimmers for, 335
specifications, 336 variety, 336–37 See also Packaging; Packaging line Balancing precision, 246–47 rotating equipment, 73 specifications, 143 Baselines, 138 Batch manufacturing, 38, 59 Bearings failure component histogram, 230, 231 installation specification, 144 packaging line, 336 Behavior, 399–400 bad, punishing, 400 good, rewarding, 399–400 organizational, 436 Benchmarking audit, 97–99 action plan, 98–99 demonstration, 97 results, 98 Benchmarking Code of Conduct, 54 Benchmarks in 10-point plan, 435 against best companies, 431 arbitrary decisions based on, 52 “big gaps,” 397 as continuous process, 49 data change, 54 data use, 53 defined, 49 industry-specific metrics, 52 level of scatter, 53
maintenance practices, 223 metrics, 50–51 as part of continuous change, 52 single, using, 57 as tool, 54 use caution, 54–55 Best practices in 10-point plan, 435–36 business gains from, 351 comparative, 220 defined, 49 maintenance, 226 practices comparison with, 17– 18 predictive maintenance, 238–39 preventative maintenance, 234, 236 pump reliability, 136–37 See also Practices Bill of material, 160–61 Boss as trainer, 408–9 Bottlenecks, 62–66 design, 63 determination, 68 dynamic nature, 49–50 existence, 62 Breakdown losses, 52 Business focused, 106 future of, 12 key components integration, 5 unit performance, 424 Capacity excess, 34–35 increased, 13–14
market share and, 16 rated, 64 Capital intellectual, 413–16 maximum return on, 25 Centralized maintenance, 251–54 hybrid, 254 responsibilities, 252–53 senior skilled trades, 253 as value-adding contributor, 253 Centralized manufacturing, 383– 84 Centralized organizations, 386–87 Change compelling reasons for, 393–95 difficulty, 392, 393, 439 employee implementation of, 397–99 environment, fostering, 401 management steps, 393 managing, 382, 392–93 as part of performance appraisal, 401 process facilitation, 398 stabilizing, 400–402 Changeovers losses, 23 practices, 192–93 quick, principles, 192–93 rapid, implementing, 102 time calculation, 26, 28 Character building, 368–69 Clear/clean/material changes, 26 Commissioning as part of partnership, 193 process, 194
rotating equipment, 184–86 test periods, 191–92 Communications goals, 396–97 of performance expectations, 389 strategy, 396–97 Compensation, 390–92 bad behavior and, 391–92 differences, 391 difficulty, 390 Computerized maintenance management systems (CMMS), 251 case history, 280–88 committees, 285 comprehensive use of, 274, 275 cost, 282 cost savings, 288 database, 266–67 effective use of, 279 equipment database, 286 equipment reliability problems and, 279 impediments/solutions, 283–84 implementing, 266, 277–88 incomplete/inappropriate use of, 277–78 informational meetings, 285 installation, 284 issues, 286–87 for issuing work orders, 278 pilot plant implementation, 284–85 preliminary authorization, 281– 82 primary decision criteria, 283
procedures, 285–87 results, 287 selection criteria, 282–83 selection process, 281–82 as software shell, 278 specification of, 281 as tool, 280 use decision, 281 use of, 286 value, 282 Condition monitoring, 238, 239 confidence in, 240 example, 239–40 as improvement opportunity, 354 need for, 273 systematic use of, 275 See also Predictive maintenance Conflict resolution, 119 Consistency, process control, 199– 200 Continuous manufacturing, 38 Continuous plants operational characteristics, 347– 51 uptime measurement, 26 uptime performance, 32 Contractor consolidation, 291–300 contractor experience, 299 displaced contractors, 298 in downsizing, 303 lessons learned, 302–4 loss of key management staff, 300 loss of skilled staff, 300 maintenance costs, 294, 296 maintenance levels, 294–95, 296
major equipment failures, 300 materials expenditures, 295, 297 need for, 298 results, 298–300 safety performance, 297, 298 scheduled maintenance, 298 shutdown impact, 299 as substantial change, 303 transition process, 300 transitory effects, 303 uptime, 294, 295 Contractors in 10-point plan, 436 accreditation, 308 administrative systems compatibility, 307–8 attitudes, 308 best use of, 304–6 certification requirement, 301 consolidation experience, 299 corporate goal support, 304 displaced, 299, 303 effect delivery, 301 effective use of, 289–310 for emergencies, 306 employee competition with, 309 experience/expertise, 306 financial issues, 307 good, 305 for high-skill jobs, 305–6 for installation, 118 integration process, 302–3 for low-skill jobs, 305 loyalty, 309 measures of effect, 301
for overhaul/turnaround support, 306 ownership of intellectual property, 308 performance expectations, 302 preaward, 307 safety, health, environment issues, 307 scope of work, 306, 307 selecting, 306–10 standards, 308 terms and conditions, 307 union agreements compatibility, 307 use risk, 305 working relationships, 307 Control loops, 200–203 LCL, 201 product variability and, 200 requirements, 202 UCL, 201 Corporate measurements, 424, 429 Cost cutting, 3–4 in asset deterioration, 67 consequences, 415 effect on production losses, 346 focus, 11 Costs all, 110 annual manufacturing, 349 CMMS, 282 excess capacity and, 34 extraordinary, 322 focus on, 106 initial, 139 life-cycle, 109, 110, 122–23
maintenance, 33, 51, 55, 57, 260 material, 149, 350 opportunity, 87 out-of-pocket, 196 raw material, 149 reactive maintenance, 225, 252 stores, 159 total operational, 345 unit, 33 utility, 350 variable, 33 Criticality ranking, 325 Cross-functional teams, 68–69, 331 Cross-functional training, 413 Customers as king, 96 negotiating with, 102 rationalizing, 103–6 revenue/profit analysis, 105 Debottlenecking reviews, 93 Decentralized maintenance, 251– 54 Decentralized organizations, 386– 87 Decision making model, 328 Defect elimination, 210 Delay times, 39 Delivery, 106 Design capital project process flow model and, 121 capital projects process and, 120 case histories, 123–25 efforts, 110
key questions, 115–19 maintainability, 116–19 minimum adequate (MAD), 109 objectives, 114–15 operations/maintenance input, 119–22 parameters, 117 plant, 109–29 poor practices, impact, 123 presentation to operations/maintenance staff, 118 process, 112–14 reliability, 115–16 review plan, 121 Stamper Branch plant case history, 125 Turkey Cree plant case history, 123–24 Warco plant case history, 124– 25 Design of experiment (DOE), 214 Digital control system (DCS), 201 Discounted cash-flow (DCF) technique, 128 Displacing contractors, 299, 303 Downtime future, preventing, 150 reduced, value of, 101 sample calculation, 26 scheduled, 21–22, 64 unplanned, eliminating, 65 unscheduled, 21–22 Dynamic models, 213–14
Educating, 407 Effectiveness, equipment, 63
Empowerment, 372–73 as disabler, 374–75 popularity, 372 requirements, 372–73 Equipment availability, 261 calibration, 217 capital project, 173 commissioning standards, 118 condition monitoring techniques, 172 critical, review, 174 database, 288 dead on arrival (DOA), 420 downtime blame, 195 effectiveness, 63 histories, 116, 141 histories, creating, 267–68 life, extending, 173 lubrication practices, 71 maintainability, 118 management, 318 obsolete, 173 Pareto analysis, 267 plant, typical, 115–16 reliability, 100 restoring, 317 rotating, 73 sensitivities, 117 verification, 189–90 Equivalent units (EUs), 13 Ergonomic issues, 117 Excellence business, 1–46 implementing, 389–90
as maintenance requirement, 302 manufacturing, 134, 389–90, 437 in process conformance, 203 reliability, 389–90 virtual centers of, 384–85 Excess capacity, 34–35 market share and, 37 as opportunity, 35 as “reserve,” 34 Extrusion machines, 273 Failure modes and effects analysis (FMEA), 67, 135 adaptation of, 331 analysis, 137 analysis comparison to plant problems, 322 applying, 322–28 forward thinking, 321 operational performance analysis with, 332 system-level, 331–33 Failures age-related curves, 229 age-related patterns, 233 avoiding/minimizing consequences of, 232 “bathtub” pattern, 232 bearing histogram, 230, 231 cause of, 420 constant rate of, 231 constant rate patterns, 234 equipment (contractor consolidation), 300 functional, 67–68, 69, 233
learning from, 398–99 mean time between (MTBF), 228 potential, 233 profiles, 227–28 root case analysis, 249–50 Feedback, 121–22 Flange installation, 186 Flex couplings, 246 Flexible manufacturing, 99 Focused factories, 38, 44–45 backups, 45 concept, 44 maintenance practices and, 251– 54 results, 44–45 Focus groups, 120 Front-end loading (FEL), 111 Functional failures defined, 67–68, 69, 233, 322–23 identifying, 69 See also Failures Global competition, 3 Goals communicating, 396–97 key, establishing, 107 low-cost producer, 379 manufacturing capability for supporting, 84–85 minimum, 83 reliability plan implementation, 343 setting, 397 strategic training, 408 stretch, 106 superordinate, 365
team, 375 vendor comment on, 135 High-cost producer, 9 Historical markets, 104 Human resources, 426 Improvement efforts, 398 Industry-specific measurements, 429 Injury rate, 259 In-line spares, 136 Installation, 181–94 checklist, 188–89 commissioning test periods, 191–92 flange, 186 good, 182–83 housekeeping, 187 joint, 186 manufacturer recommendations, 183–84 practices, 181–94 predestruction authorization, 187–88 procedures, 183 rotating equipment, 184–86 standards, 283 workshop support, 187 Instrument calibration, 271 Intellectual capital, 413–16 disposing of, 414 value, 415 Inventory current, 172, 173 high-use rate, 175 level drivers, 170
low-volume noncritical, 175 reducing, 171–72 strategic plan, 176 surplus, 174 See also Stores Joint installation, 186 Just-in-time philosophy, 65 Key markets, 99, 100 Key performance indicators (KPIs), 50–51 Key products, 100 Lagging indicators, 399 defined, 429–30 measurements, 399 See also Leading indicators Leadership, 363–67 Bennis model for, 395 character building through principles, 368–69 characteristics, 370–71 comparative model, 366–67 concern, 438 courage, 367–68 defined, 364 empowerment, 372–73 essential nature of, 363 ethics, 367–68 example setting, 371 management vs., 367 models for, 366–67 organization alignment, 369–70, 371 organizational issues, 371–72
personal humility, 368 practice of, 438 predictive maintenance, 385–86 principles, applying, 395–96 professional resolve, 368 reality, 367–68 trust in, 364 trustworthiness, 371 vision, 367–68, 370 Leading indicators, 399, 430 Lean manufacturing, 38, 39–42 head count reduction and, 40 as philosophy, 40–41 reliability support, 39 resources, 39 techniques, 41 See also Manufacturing Learning defined, 407 training vs., 410–11 Life-cycle costs, 109 analysis, 125 cash-flow considerations, 113 commitment, 110 discounted analysis, 128 estimating, 122–23 implications, 109–10 reducing, 120 standards, 140 Life extension, 157 Losses acceptable, 24 analyzing, 68 breakdown, 52 carrying cost, 159 cause elimination, 58
causes, 41 changeover/transition, 23 defining, 68 eliminating, from ideal, 19 expediting cost, 159 ideal circumstances and, 17 issues relative to, 23–24 maintenance inefficiency, 159 market, 24 measuring, from ideal, 19–26 minimization action plan, 435 plant capability, 159 process rate, 22, 27 production, 72 production, cost cutting and, 346 production output, 322 quality, 22–23, 27 shrinkage, 159 in uptime, 134 working capital, 159 Low-cost producer, 8–12 as design objective, 114 expecting to be, 11 goal, 379 increasing market share strategy, 12 model for becoming, 15–17 scenario, 10 unit cost of production, 10 Lower control limit (LCL), 201 Low-volume products, 82 Lubrication practices, 185 requirements, 324
Machinery alignment, 144 “Magic bullet,” 61 Maintainability, 116–19 equipment, 118 process, 118–19 requirements, 117 Maintenance categories, 319 centralized, 251–54 control, 42 as core competency, 302 decentralized, 251–54 design input, 119–22 difficulty, 70–71 domains, 256 efficiency, improving, 318 efforts, 290, 292 excellence as requirement, 302 historical cost, 127 inefficiency, 159 lay-down/pull space for, 325 levels (contractor consolidation), 294–95, 296 measures, sample, 427–29 overhaul, 163 performance, 425 performance profile, 349 planning, 236 planning prioritization, 72 predictive, 221, 237–41, 385–86 preventative, 162, 221, 234–37, 265–88 preventative, plan, 210 proactive, 210, 221, 243–50 purchases, 291, 293 as rapid repair function, 226
reactive, 59, 221 as reliability function, 226 as repair function, 220 scheduled, 298 Maintenance costs, 33, 51, 55, 57 as benchmarking conclusion, 294 contractor consolidation, 294, 296 increase, 260 preventative maintenance and, 265 reducing, 196–99 reliability improvement program, 290, 292 reliability plan and, 343, 344 See also Costs Maintenance practices, 227–34 assessment, 61 benchmark, 223 best, 225 comparative results, 224 focused factories and, 251–54 integrating, 254–57 levels of, 220 life extension program, 257 proactive, 219–25, 243 reactive, 219–25 repair culture, 226–27 safety performance and, 258–59 summary, 260–61 worst, 225 See also Best practices; Practices Management change, 382, 392–93 equipment, 318 key, loss of, 300
leadership vs., 367 principles, applying, 395–96 spare parts, 354 stores, 161–64, 168 supply chain, 133 support, 60 top-down, 372 Mandate team, 154 Manufacturer recommendations, 183–84 Manufacturing agile, 99 annual costs, 349 batch, 38 business excellence and, 1–46 capability, 84–85 centralized, 383–84 continuous, 38 excellence, 437 excellence, implementing, 389– 90 excellence, objective, 134 excellence, steps to, 17–19 historical performance, 107 lean, 38, 39–42 marketing strategy alignment with, 5–8 marketing teamwork, 16 performance, 13, 424–25 scenarios, 1–3 strategies integration, 4 strategy alignment, 106, 108, 434–35 uptime improvement model, 67– 74 Marketing forecasts, 102
historical performance, 107 manufacturing teamwork, 16 performance, 425–26 strategy alignment, 5–8, 106, 108, 434–35 Market(s) demand, 70 goals, manufacturing capability for supporting, 84–85 growth analysis, 82–83 historical, analyzing, 104 history and forecast, 83 key, 99, 100 losses, 24 new, ensuring sales to, 103 niche, 107 plant performance by output and gross margin, 86 rationalizing, 103–6 reducing prices in, 88 revenue/profit review, 105 review, 82 success factors, 79–82 target, 77 Market share capacity and, 16 excess capacity and, 37 increasing, 79, 88 strategy, 12, 79 Materials cost improvement, 148–49 costs, 149, 350 expenditure (contractor consolidation), 295, 297 requirements, planning, 212 use waste, 94
Mean time between failure (MTBF), 228 Mechanical defects, 421 Mechanical integrity, 274–75 Mechanics, training, 324 Metrics industry-specific, 52 key, 52, 53 list of, 50–51 See also Benchmarks Mid-cost producer, 9–10 Minimum adequate design (MAD), 109, 126 Mission statements, 387–89 benefits, 387 example, 387–88 living, 388 Moisture control chart, 197 Motors jacking bolts for, 145 service factors, 145 specifications, 144–45 Multiskilling, 413
Net operating income, 100 No demand time defined, 27 sample calculation, 28 Nonconformances causes of, 205 eliminating, 204 issues, 206 See also Process conformance Nondestructive examination (NDE), 241
Operational practices, 195–218 advanced process control methods, 213–14 control loops, 200–203 improving, 196–99 operator basic care, 206–7 operator care/ownership, 207–9 operator-owner and maintainerimprover principles, 209–11 process conformance, 203–6 process control consistency, 199–200 production planning, 212–13 shift handover process, 211–12 summary, 214–18 Operations, design input, 119–22 Operator care basic, 206–7 establishing, 207–9 Operator ownership, 207 conditioning, 217 establishing, 207–9 responsibilities, 209–10 Operators inadequate training, 205 inexperience, 324 maintenance involvement, 317 shifts, 205 training, 215–17 Optimization PM process, 21 practice, 73 preventative maintenance, 265– 75 product mix, 101–2 Organizational structure, 383 Organizations
aligning, 369–70 centralized, 386–87 comparison, 53–55 decentralized, 386–87 “superordinate” goals, 365 Overall equipment effectiveness (OEE), 6 batch manufacturer measurement, 59 defined, 20 injury rate and, 259 measurement, 23, 25–26, 435 sample calculation, 26–31 tactically, 25 as tool, 24 See also Uptime Overhaul practices, 355 Packaging bag specifications, 336 bag variety, 336–37 capability, improving, 89, 91 performance, improving, 330– 31 Packaging line bag magazine, 335 bearings, 336 cams, 336 chains, 336 cleaning, 334–37 cutters/trimmers, 335 glue pot, 336 guides, 336 hopper, 337 inspecting, 334–37 lacquer, 335
maintenance PM program, 338 mechanical/spreading fingers, 336 operator care, 338 palletizer, 336 restoring, 334–37 result, 337–39 setup instructions manual, 338– 39 shift differences and, 337 tare weight control, 335 training program, 339 Pareto analyses, 241, 267, 347 materials grouping in, 160 reviewing, 420 Partnerships commissioning as part of, 193 with suppliers, 162–63 Payback analysis, 125–29 Performance accounting, 425 business unit, 424 comparative data, 56 control, capability, 202 expectations, communication of, 389 financial, 50 improvement, 422 improvement plans, 436 level sensitivities, 15 maintenance, 425 manufacturing, 50, 424–25 mapping, 13 packaging, 330–31 by plant, 85–87 plant, requirements, 90–91
by product line, 85–87 safety, 258–59 stores, 169, 425 success-based measures, 303 supply chain, 132 typical, 56 utilities, 425 world-class, 46, 56 Performance measurement, 419–31 in 10-point plan, 437 accounting performance, 425 area production units, 424 business unit performance, 424 cascade, 424–26 cascade model, 423 corporate measurements, 424 data, 422 display, 422 human resources, 426 maintenance performance, 425 manufacturing performance, 424–25 marketing and sales, 425–26 numbers of, limiting, 423 plant age and, 431 sample, 426–29 stores performance, 425 utilities performance, 425 for weaknesses, 431 Personnel, cross-training, 94 Plant maintenance (PM), 21 optimization process, 21 performance, 22 practice optimization, 73 requirements, 118 See also Maintenance
Plants age, 431 benchmark vs. typical, 222 best, 132 “bleeding,” 104 building, 93 business profile, 348 culture, 60 debottlenecking reviews, 93 design case histories, 123–25 operating performance, 85 performance by, 85–87 performance data, 90 performance requirements, 90– 91 pro forma performance, 92 replacement value, 57 reviewing, 124, 125 walk throughs, 217 PM optimization model, 268–71 database setup, 268 optimization, 271 PM necessity determination, 269–70 PMs analysis, 270–71 process, 269 See also Preventative maintenance Potential rate utilization, 20 Power quality, 71, 248, 324 Practices best practices comparison, 17– 18 changeover, 192–93 installation, 181–94 lubrication, 185 “magic bullet,” 61
maintenance, 219–61 maintenance, assessment, 61 operational, 195–218 overhaul, 355 process control, 113, 199 procurement, 131–56 reliability process, 380 store, 166 stores/parts management, 157– 80 teamwork, 377–78 Precision alignment, 244–46 benefits, 247 flex couplings and, 246 importance, 325 life vs. misalignment, 245 See also Proactive maintenance Precision balancing, 246–47 benefits of, 248 examples, 246–47 See also Proactive maintenance Predestruction authorization, 187– 88 defined, 187–88 illustrated, 188 Predictive maintenance, 237–41 best practices, 238–39 condition monitoring, 238, 239 core capability creation, 242 current practices, 241 defined, 221 deployment plan, 242 existing capability survey, 242 leadership, 385–86 practices integration, 254–57
process trending technologies, 237 in TPM, 318 See also Maintenance Predictive technology, 72 Preventative maintenance, 162, 234–37 age-related failure curves, 229 analyzing, 270–71 best practices, 234, 236 case histories, 272–74 consolidation, 270 database setup, 268–69 defined, 221 during scheduled outages/turnarounds, 272 failure profile, 227–28 forms, 227–28 inspection, test, surveillance, 271–72 instrument calibration, 271 interval basis, 265 intervals, 228 maintenance costs and, 265 mechanical integrity, 274–75 model, 268–71 necessity determination, 269–70 optimization process, 269 optimizing, 265–75 plan, 210 planner functions, 235–36 planning, 236 practices integration, 254–57 rotating equipment, 265 routine, 235 in TPM, 318
See also Maintenance; PM optimization model Pricing level, reducing, 88 strategy, revising, 89–90 structure, planned, 89 Proactive maintenance, 243–50 culture, 243 defined, 221 examples, 243–44 oil analysis, 248 power quality, 248 practices, 243 practices integration, 254–57 precision alignment, 244–45 precision balancing, 246 root cause failure analysis, 249– 50 as ultimate step, 243 valve diagnostics, 248–49 See also Maintenance Procedures installation, 183 stores, 164–66 Process conformance, 203–6 excellence in, 203 nonconformances, 205, 206 problem areas, 203 reviewing, 203 Process control advanced methods, 213–14 consistency, 199–200 practices, 113, 199 statistical (SPC), 201 Processes business success, 155
capability, 100 efficiencies, 66 good, 67 maintainability, 118–19 operators, 200 reliability, maximizing, 379 stability, 100 stream parameters measurement, 197–98 verification, 190 Process rate losses, 22 defined, 27 sample calculation, 28 Process trend technologies, 237 Procurement, 131–56 Production bottlenecks, 267 as king, 380 lines, setting up, 101 measures, sample, 426–29 performance profile, 351 planning, 212–13, 324 planning prioritization, 72 process, walk through, 69 process facilitation, 380 subcontracting, 101 Product mix, 82, 99–100 optimization process, 101–2 rationalizing, 96–97 review, 88 Product(s) consolidating, 101 dropping, 102 history and forecast, 83 key, 100 low-volume, 82
new, higher standards, 88 performance by, 85–87 plant performance by output and gross margin, 86 price reductions, 88 quality, 182 substitute, 103 success factors, 79–82 transferring, 101 Projects all-contractor, 111 best, 111–12 best in class, 112 design and, 120 excellent systems, 111 inventory, 173 Pumps jacking bolts for, 145 operation training, 216 reliability best practices, 136–37 suppliers, 137–38 Quality feed-stock, 198 focus on, 106 improvements, 82 power, 71, 248, 324 product, 65, 182 production output, 40 raw material, 70 stores, 179 utilization, 20 Quality losses, 22–23 defined, 27 sample calculation, 28
Rationalizing customers, 103–6 markets, 103–6 product mix, 96–97 R&D applying methodology to, 95 strategies integration, 4 Reactive maintenance, 59 cost, 225 costs, 152 defined, 221 levels, 225 practices, 225 uptime vs., 60 See also Maintenance Rejection rates, 100 Reliability assumptions, 117 design, 115–16 engineers, 385 equipment, 100 excellence, implementing, 389– 90 experiences, 151 focused culture, 61–62 improvement meeting, 140 improvement teams, 381 improving, 65, 107 lean manufacturing support, 39 maintenance practices assessment and, 61 managers, 385 maximizing, 237 objectives, 116 owning, 369 plant age and, 431
process illustration, 18 pump, 136–37 requirements, 42 standards, 140 supplier, 149–53 workshops, 208 Reliability centered maintenance (RCM), 135 adaptation of, 331 analysis, 137, 162, 319, 320 case study, 322–28 cautions, 320–21 equipment results, 330 failure modes, 320, 321 imprecision, 323 maintenance categories, 319 methods, applying, 320 operational performance analysis with, 332 potential pitfalls, 320 primary objective, 319, 321 process results, 329–30 results, 327, 329–30 TPM integration, 313 type analysis, 321 Reliability improvement plans, 341–60 autonomy, 341 background, 352–53 beginning, 342 “bow wave,” 345 case study, 352–55 constraints, 359–60 cost cutting effect, 346 critical equipment prioritization, 356 goals, 343
higher rate effects, 357 improvement opportunities, 354–55 investment requirement, 357 losses, 344 maintenance costs and, 343, 344 manpower level/expertise, 357 obstacle, 358 outage time minimization approach, 356 overhaul, 357 performance measures, 355 plan outline, 342–43 PMs, 357 predictive measurements/techniques, 356 process monitoring/control, 355–56 recommendations, 355–58 results, 359 review team setup, 357 spare parts/capital requirements, 356 summary, 360 total operational costs, 345 Reliability improvement program, 289–91 maintenance costs, 290, 292 maintenance efforts, 290, 292 maintenance purchases/stores issues, 291, 293 safety performance, 291, 293 tools, 290 uptime, 290, 291 Reliability process, 379–81 engineering job, 380 maintenance job, 380
practices, 380 purchasing job, 380 rules, 379 strategy, 379 Repair culture, 226–27 Resonance specifications, 143–44 Resources, prioritizing, 268 Return on investment (ROI), 112 Return on net assets (RoNA), 13– 14, 430–31 concurrently improving, 15 measure, 430 uptime vs., 14–15 Root cause failure analysis, 249–50 Rotating equipment alignment and balancing, 73 commissioning of, 184–86 heat distribution, 185 installation of, 184–86 lubricants, 185 motor current, 185 preventative maintenance, 265 Safety performance, 258–59 contractor consolidation, 297, 298 reliability improvement program, 291, 293 Sales histories, 82 Scheduled maintenance, in contractor consolidation, 298 Sensitivity equipment, 117 factors, 80 Shift handover process, 211–12 rules, 211–12 types of, 211
Six sigma, 38, 42–44 cautions, 43 contrary objectives, 42 defined, 42 journey to, 44 themes, 43 tools/methods, 43 Society for Maintenance and Reliability Professionals (SMRP), 109, 126 Spare parts criticality ranking model, 177– 79 management, 354 recommendations, 152 unavailability, 324 See also Stores Specialists centralized manufacturing support, 383–84 defined, 383 example, 383–84 SPC, 201 Specifications balancing, 143 bearing installation, 144 motor, 144–45 resonance, 143–44 Staffing levels, 355 Stagger chart, 213 Standard operating conditions (SOCs), 204 Standard operating procedures (SOPs), 204 Standards equipment commissioning, 118 installation, 183
life-cycle costs, 140 product, 88 reliability, 140 vibration, 141–43 Startup checklist, 188–89 training in, 216 Statistical process control (SPC) specialists, 201 training in, 215–16 Stores bill of materials, 160–61 catalog, 160–61 cost of, 159 defined, 158 function, 158 function, contracting, 168 high-quality management, 168 inventory level drivers, 170 key performance indicators, 169 layout, 164 management methods, 161–62 management techniques, 163– 64 needed, 160 people, 167 performance, 169, 425 practices, 166 procedures, 164–66 quality, 179 receipt inspection, 165 reliability improvement program, 193, 291 reviewing, 164 as spare parts, 158 standardization, 163
stock levels, minimizing, 170–71 supplier partnerships and, 162– 63 training, 167–68 work environment, 166 Strategic alliances, 133–35 Strategic training plan, 407–8, 436–37 Success factors defining, 107 market/product, 79–82 ranking, 107 rating of, 80 Suppliers alliances, 133 manufacturing excellence principles, 93–94 partnerships with, 162–63 pump, 137–38 reliability, 149–53 reliability experiences, 151 selection, 137–39 Supply chains issues, 94–95 management, 133 “mandate team,” 154 performance, 132 Sustainable peak rate, 21 Target markets, 77 Teams, 375–76 accountability, 378 analogy, 376 caution, 378 clear purpose, 378 cross-functional, 68–69, 331
goals, 375 leader, 378 mandate, 154 output, increasing, 409 reliability improvement, 381 rules, 382–83 strategies, 375 trust, 378 Teamwork caution, 378 improvement, 377 manufacturing, 16 marketing, 16 practices, 377–78 Three-hole drill process, 327 Total Cost of Ownership, 149 Total productive maintenance (TPM) application of, 313 culture, 314 defined, 314 as effective tool, 314 equipment management, 318 equipment restoration, 317 focus, 313 as ineffective tool, 314 Japanese strategy, 315 losses measurement, 316 maintenance categories, 319 maintenance efficiency, 318 operator involvement, 317 of overall equipment effectiveness, 63 preventative/predictive maintenance, 318 principles, 22, 62, 314–19
principles, application of, 333– 34 principles, following, 316 publication, 315 RCM integration, 313 sample, 29 strategy, 314 training, 318 See also Maintenance Training, 405–16 in appropriate methodologies, 412–13 basic care, 216–17 condition monitoring, 216 consequences, expectations, 409 cross-functional, 413 defined, 407 enough, 412–13 goals, 408 learning vs., 410–11 for maintainers, 152–53 mechanics, 324 need for, 406 operators, 205, 215–17 packaging line, 339 for pay, 411 programs, 412 pump/valve operation, 216 quantity, 405 requirements, 117 in startup, 216 in statistical process control, 215–16 stores, 167–68 strategic plan, 407–8, 436–37 in TPM, 318
Trend charts, 215 Unit cost of production, 7 low-cost producer, 10 measure of excellence, 7 Unit costs determination, 16 objectives, 90 reduction, 93 Upper control limit (UCL), 201 Uptime continuous plant measurement, 26 contractor consolidation, 294, 295 defined, 20 improvement model, 67–74 as key success measurement, 67 losses in, 134 measuring, 25–26, 435 optimal determination, 34 performance, 32 reactive maintenance vs., 60 reliability improvement program, 290, 291 RoNA vs., 14–15 success presumption, 16 tactically, 25 as tool, 24 See also Overall equipment effectiveness (OEE) Utilities costs, 350 performance, 425 Valve diagnostics, 248–49 Variability
control loops and, 200 minimizing, 107 Vendors, 135 Verification equipment, 189–90 process, 190 Vibration Institute, 143 Vibration(s) analysis, 72 spectra, 184 standards for rotating machinery, 141–43 Virtual centers of excellence, 384– 85 Volume goals, manufacturing capability for supporting, 84–85 growth analysis, 82–83 incremental, 13 Work in process (WIP), 40 Workshop support, 187