Reliability in Scientific Research Improving the Dependability of Measurements, Calculations, Equipment, and Software
Covering many techniques widely used in research, this book will help researchers in the physical sciences and engineering to solve troublesome – and potentially very time consuming – problems in their work. The book deals with technical difficulties that often arise unexpectedly during the use of various common experimental methods, as well as with human error. It provides preventive measures and solutions for such problems, thereby saving valuable time for researchers. Some of the topics covered are: sudden leaks in vacuum systems; electromagnetic interference in electronic instruments; vibrations in sensitive equipment; and bugs in computer software. The book also discusses mistakes in mathematical calculations, and pitfalls in designing and carrying out experiments. Each chapter contains a summary of its key points, to give a quick overview of important potential problems and their solutions in a given area.

I. R. Walker is a researcher at the Cavendish Laboratory, University of Cambridge, where he has worked for over 20 years. He received his Ph.D. there in 1992, and was an Assistant Director of Research from 1995 to 2002. His principal line of research is the physics of superconductors and other strongly correlated electron materials at ultra-low temperatures, and the development of techniques for subjecting these materials to high pressures under such conditions.
Reliability in Scientific Research Improving the Dependability of Measurements, Calculations, Equipment and Software I. R. WALKER Cavendish Laboratory University of Cambridge
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Dubai, Tokyo, Mexico City

Cambridge University Press
The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521857703

© I. R. Walker 2011
This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2011

Printed in the United Kingdom at the University Press, Cambridge

A catalogue record for this publication is available from the British Library

Library of Congress Cataloguing in Publication data
Walker, I. R., 1961–
Reliability in scientific research : improving the dependability of measurements, calculations, equipment, and software / I. R. Walker.
p. cm.
Includes index.
ISBN 978-0-521-85770-3 (hardback)
1. Statistics. 2. Physical sciences – Statistical methods. 3. Engineering – Statistical methods. I. Title.
QA276.W2986 2010
507.2 – dc22
2010032195

ISBN 978-0-521-85770-3 Hardback
Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
To: Aileen, Charles, and Susan
Contents
Preface List of abbreviations
1 Basic principles of reliability, human error, and other general issues 1.1 Introduction 1.2 Central points 1.3 Human factors 1.3.1 General methods and habits 1.3.2 Data on human error 1.3.3 Some ways of reducing human error 1.3.4 Interpersonal and organizational issues 1.4 Laboratory procedures and strategies 1.4.1 Record-keeping 1.4.2 Maintenance and calibration of equipment 1.4.3 Troubleshooting equipment and software 1.5 Reliability of information Further reading Summary of some important points References
2 Mathematical calculations 2.1 Introduction 2.2 Sources and kinds of error 2.2.1 Conceptual problems 2.2.2 Transcription errors 2.2.3 Errors in technique 2.2.4 Errors caused by subconscious biases 2.2.5 Errors in published tables 2.2.6 Problems arising from the use of computer algebra systems 2.2.7 Errors in numerical calculations 2.3 Strategies for avoiding errors 2.3.1 Avoiding conceptual difficulties 2.3.2 Use of diagrams 2.3.3 Notation 2.3.4 Keeping things simple
2.3.5 Use of modularity 2.3.6 Finding out what is known 2.3.7 Outsourcing the problem 2.3.8 Step-by-step checking and correction 2.3.9 Substitution of numbers for variables 2.3.10 Practices for manual calculations 2.3.11 Use of computer algebra software 2.3.12 Avoiding transcription errors 2.4 Testing for errors 2.4.1 General remarks 2.4.2 Getting the correct result for the wrong reason 2.4.3 Predicting simple features of the solution from those of the problem 2.4.4 Dimensional analysis 2.4.5 Further checks involving internal consistency 2.4.6 Existence of a solution 2.4.7 Reasonableness of the result 2.4.8 Check calculations 2.4.9 Comparing the results of the calculation against known results 2.4.10 Detecting errors in computer algebra calculations Summary of some important points References
3 Basic issues concerning hardware systems 3.1 Introduction 3.2 Stress derating 3.3 Intermittent failures 3.3.1 Introduction 3.3.2 Some causes and characteristics 3.3.3 Preventing and solving intermittent problems 3.4 Effects of environmental conditions 3.4.1 Excessive laboratory temperatures and the cooling of equipment 3.4.2 Moisture 3.5 Problems caused by vibrations 3.5.1 Introduction 3.5.2 Large-amplitude vibration issues 3.5.3 Interference with measurements 3.6 Electricity supply problems 3.6.1 Definitions and causes of power disturbances 3.6.2 Investigating power disturbances 3.6.3 Measures for preventing a.c. power problems 3.7 Damage and deterioration caused by transport 3.7.1 Common difficulties 3.7.2 Conditions encountered during transport
3.7.3 Packaging for transport 3.7.4 Specialist companies for packaging and transporting delicate equipment 3.7.5 Insurance 3.7.6 Inspection of received items 3.7.7 Local transport of delicate items 3.8 Some contaminants in the laboratory 3.8.1 Corrosive atmospheres in chemical laboratories 3.8.2 Oil and water in compressed air supplies 3.8.3 Silicones 3.9 Galvanic and electrolytic corrosion 3.10 Enhanced forms of materials degradation related to corrosion 3.11 Fatigue of materials 3.11.1 Introduction 3.11.2 Prevalence and examples of fatigue 3.11.3 Characteristics and causes 3.11.4 Preventive measures 3.12 Damage caused by ultrasound Summary of some important points References
4 Obtaining items from commercial sources 4.1 Introduction 4.2 Using established technology and designs 4.3 The importance of standards 4.4 Understanding the basics of a technology 4.5 Price and quality 4.6 Choice of manufacturers and equipment 4.6.1 Reliability assessments based on experiences of product users 4.6.2 Place of origin of a product 4.6.3 Specialist vs. generalist manufacturers 4.6.4 Limitations of ISO9001 and related standards 4.6.5 Counterfeit parts 4.6.6 True meaning of specifications 4.6.7 Visiting the manufacturer’s facility 4.6.8 Testing items prior to purchase 4.7 Preparing specifications, testing, and transport and delivery 4.7.1 Preparing specifications for custom-made apparatus 4.7.2 Documentation requirements 4.7.3 Reliability incentive contracts 4.7.4 Actions to take before delivery 4.7.5 Acceptance trials for major equipment 4.8 Use of manuals and technical support Summary of some important points References
5 General points regarding the design and construction of apparatus 5.1 Introduction 5.2 Commercial vs. self-made items 5.3 Time issues 5.4 Making incremental advances in design 5.5 Making apparatus fail-safe 5.6 The use of modularity in apparatus design 5.7 Virtual instruments 5.8 Planning ahead 5.9 Running the apparatus on paper before beginning construction 5.10 Testing and reliability 5.11 Designing apparatus for diagnosis and maintainability 5.12 Design for graceful failure 5.13 Component quality 5.14 Ergonomics and aesthetics Further reading Summary of some important points References
6 Vacuum-system leaks and related problems 6.1 Introduction 6.2 Classifications of leak-related phenomena 6.3 Common locations and circumstances of leaks 6.4 Importance of modular construction 6.5 Selection of materials for use in vacuum 6.5.1 General points 6.5.2 Leak testing raw materials 6.5.3 Stainless steel 6.5.4 Brass 6.5.5 Phosphor bronze 6.5.6 Copper–nickel 6.5.7 Copper 6.5.8 Aluminum 6.6 Some insidious sources of contamination and outgassing 6.6.1 Cleaning agents 6.6.2 Vacuum-pump fluids and substances 6.6.3 Vacuum greases 6.6.4 Other types of contamination 6.6.5 Some common causes of contamination in UHV systems 6.7 Joining procedures: welding, brazing, and soldering 6.7.1 Worker qualifications and vacuum joint leak requirements 6.7.2 General points 6.7.3 Reduced joint-count designs and monolithic construction 6.7.4 Welding
6.7.5 Brazing 6.7.6 Soldering 6.8 Use of guard vacuums to avoid chronic leak problems 6.9 Some particularly trouble-prone components 6.9.1 Items involving fragile materials subject to thermal and mechanical stresses 6.9.2 Water-cooled components 6.9.3 Metal bellows 6.9.4 Vacuum gauges 6.10 Diagnostics 6.10.1 Leak detection 6.10.2 Methods of detecting and identifying contamination 6.11 Leak repairs Summary of some important points References
7 Vacuum pumps and gauges, and other vacuum-system concerns 7.1 Introduction 7.2 Vacuum pump matters 7.2.1 Primary pumps 7.2.2 High-vacuum pumps 7.3 Vacuum gauges 7.3.1 General points 7.3.2 Pirani and thermocouple gauges 7.3.3 Capacitance manometers 7.3.4 Penning gauges 7.3.5 Bayard–Alpert ionization gauges 7.4 Other issues 7.4.1 Human error and manual valve operations 7.4.2 Selection of bakeout temperatures for UHV systems 7.4.3 Cooling of electronics in a vacuum Further reading Summary of some important points References
8 Mechanical devices and systems 8.1 Introduction 8.2 Mechanical devices 8.2.1 Overview of conditions that reduce reliability 8.2.2 Some design approaches for improving mechanism reliability 8.2.3 Precision positioning devices in optical systems 8.2.4 Prevention of damage due to exceeding mechanical limits 8.2.5 Bearings 8.2.6 Gears in vacuum environments 8.2.7 Lubrication and wear under extreme conditions
8.2.8 Static demountable seals 8.2.9 Dynamic seals and motion feedthroughs 8.2.10 Valves 8.3 Systems for handling liquids and gases 8.3.1 Configuration of pipe networks 8.3.2 Selection of materials 8.3.3 Construction issues 8.3.4 Problems caused by PTFE tape 8.3.5 Filter issues 8.3.6 Detection and location of leaks 8.4 Water-cooling systems 8.4.1 Introduction 8.4.2 Water leaks 8.4.3 Water purity requirements 8.4.4 System materials selection and corrosion 8.4.5 Condensation 8.4.6 Water flow and temperature interlocks and indicators 8.4.7 Inspection of water-cooled equipment Further reading Summary of some important points References
9 Cryogenic systems 9.1 Introduction 9.2 Difficulties caused by the delicate nature of cryogenic apparatus 9.3 Difficulties caused by moisture 9.4 Liquid-helium transfer problems 9.5 Large pressure buildups within sealed spaces 9.6 Blockages of cryogenic liquid and gas lines 9.7 Other problems caused by the presence of air in cryostats 9.8 Cryogen-free low temperature systems 9.9 Heat leaks 9.10 Thermal contact problems 9.10.1 Introduction 9.10.2 Welded and brazed contacts 9.10.3 Mechanical contacts 9.11 1 K pots 9.12 Thermometry 9.12.1 Two common causes of thermometer damage 9.12.2 Measurement errors due to poor thermal connections 9.12.3 Measurement errors due to RF heating and interference 9.12.4 Causes of thermometer calibration shifts 9.12.5 Other thermometer issues 9.13 Problems arising from the use of superconducting magnets
Further reading Summary of some important points References
10 Visible and near-visible optics 10.1 Introduction 10.2 Temperature variations in the optical path 10.3 Temperature changes in optical elements and support structures 10.4 Materials stability 10.5 Etalon fringes 10.6 Contamination of optical components 10.6.1 Introduction 10.6.2 A closer look at some contamination-sensitive systems and devices 10.6.3 Measures for protecting optics 10.6.4 Inspection 10.6.5 Cleaning of optical components 10.7 Degradation of optical materials 10.7.1 Problems with IR and UV materials caused by moisture, and thermal and mechanical shocks 10.7.2 Degradation of materials by UV light (“solarization”) 10.7.3 Corrosion and mold growth on optical surfaces 10.7.4 Some exceptionally durable optical materials 10.8 Fiber optics 10.8.1 Mechanical properties 10.8.2 Resistance to harsh environments 10.8.3 Insensitivity to crosstalk and EMI, and sensitivity to environmental disturbances 10.9 Light sources 10.9.1 Noise and drift 10.9.2 Some lasers and their reliability issues 10.9.3 Some incoherent light sources 10.10 Spatial filters 10.11 Photomultipliers and other light detectors 10.12 Alignment of optical systems Further reading Summary of some important points References
11 Electronic systems 11.1 Introduction 11.2 Electromagnetic interference 11.2.1 Grounding and ground loops 11.2.2 Radio-frequency interference
11.2.3 Interference from low-frequency magnetic fields 11.2.4 Some EMI issues involving cables, including crosstalk between cables 11.2.5 Professional assistance with EMI problems 11.3 High-voltage problems: corona, arcing, and tracking 11.3.1 The phenomena and their effects 11.3.2 Conditions likely to result in discharges 11.3.3 Measures for preventing discharges 11.3.4 Detection of corona and tracking 11.4 High-impedance systems 11.4.1 The difficulties 11.4.2 Some solutions 11.5 Damage and electromagnetic interference caused by electrostatic discharge (ESD) 11.5.1 Origins, character, and effects of ESD 11.5.2 Preventing ESD problems 11.6 Protecting electronics from excessive voltages 11.7 Power electronics 11.8 Some trouble-prone components 11.8.1 Switches and related devices 11.8.2 Potentiometers 11.8.3 Fans 11.8.4 Aluminium electrolytic capacitors 11.8.5 Batteries 11.8.6 Low-frequency signal transformers Further reading Summary of some important points References
12 Interconnecting, wiring, and cabling for electronics 12.1 Introduction 12.2 Permanent or semi-permanent electrical contacts 12.2.1 Soldering 12.2.2 Crimping, brazing, welding, and the use of fasteners 12.2.3 Summary of methods for making contacts to difficult materials 12.2.4 Ground contacts 12.2.5 Minimization of thermoelectric EMFs in low-level d.c. circuits 12.3 Connectors 12.3.1 Introduction 12.3.2 Failure modes 12.3.3 Causes of connector failure 12.3.4 Selection of connectors 12.3.5 Some particularly troublesome connector types 12.3.6 Some points concerning the use of connectors
12.4 Cables and wiring 12.4.1 Modes of failure 12.4.2 Cable damage and degradation 12.4.3 Selection of cables and cable assemblies 12.4.4 Electromagnetic interference 12.4.5 Some comments concerning GP-IB and ribbon cables 12.4.6 Additional points on the use of cables 12.4.7 Wire issues – including cryostat wiring 12.5 Diagnostics 12.5.1 Introduction 12.5.2 Detection of contact problems 12.5.3 High-resistance and open- and short-circuit intermittent faults 12.5.4 Use of infrared thermometers on high-current contacts 12.5.5 Insulation testing 12.5.6 Fault detection and location in cables 12.5.7 Determining cable-shield integrity Summary of some important points References
13 Computer hardware and software, and stored information 13.1 Introduction 13.2 Computers and operating systems 13.2.1 Selection 13.2.2 Some common causes of system crashes and other problems 13.2.3 Information and technical support 13.3 Industrial PCs and programmable logic controllers 13.4 Some hardware issues 13.4.1 Hard-disc drives 13.4.2 Power supplies 13.4.3 Mains-power quality and the use of power-conditioning devices 13.4.4 Compatibility of hardware and software 13.4.5 RS-232 and IEEE-488 (GP-IB) interfaces 13.5 Backing up information 13.5.1 Introduction and general points 13.5.2 Some backup techniques and strategies 13.5.3 Online backup services 13.6 Long-term storage of information and the stability of recording media 13.7 Security issues 13.7.1 General points 13.7.2 Viruses and their effects 13.7.3 Symptoms of virus infection 13.7.4 Measures for preventing virus attacks 13.7.5 Network security 13.8 Reliability of commercial and open-source software
13.8.1 Avoiding early releases and beta software 13.8.2 Questions for software suppliers 13.8.3 Pirated software 13.8.4 Open-source software 13.9 Safety-related applications of computers 13.10 Commercial data-acquisition software 13.10.1 Data-acquisition applications and their properties 13.10.2 Graphical languages 13.10.3 Some concerns with graphical programming 13.10.4 Choosing a data-acquisition application 13.11 Precautions for collecting experimental data over extended periods 13.12 Writing software 13.12.1 Introduction 13.12.2 Planning ahead – establishing code requirements and designing the software 13.12.3 Detailed program design and construction 13.12.4 Testing and debugging 13.13 Using old laboratory software Further reading Summary of some important points References
14 Experimental method 14.1 Introduction 14.2 Knowing apparatus and software 14.3 Calibration and validation of apparatus 14.4 Control experiments 14.5 Failure of auxiliary hypotheses as a cause of failure of experiments 14.6 Subconscious biases as a source of error 14.6.1 Introduction 14.6.2 Some circumstances in which the effects of biases can be especially significant 14.6.3 Subconscious biases in data analysis 14.6.4 Subconscious biases caused by social interactions 14.7 Chance occurrences as a source of error 14.8 Problems involving material samples 14.8.1 Introduction 14.8.2 The case of polywater 14.8.3 Some useful measures 14.9 Reproducibility of experimental measurements and techniques 14.9.1 Introduction 14.9.2 Tacit knowledge 14.9.3 Laboratory visits as a way of acquiring missing expertise 14.9.4 A historical example: measuring the Q of sapphire
14.10 Low signal-to-noise ratios and statistical signal processing 14.11 Some important mistakes, as illustrated by early work on gravity-wave detection 14.11.1 Introduction 14.11.2 A brief outline and history 14.11.3 Origins of the problems 14.11.4 Conclusions 14.12 Understanding one’s apparatus and bringing it under control: the example of the discovery of superfluidity in He3 Further reading Summary of some important points References Index
Preface
Most scientists who spend a significant amount of time in the laboratory are only too well aware of the amount of lost time, wasted resources, and diminished morale that result from unexpected problems that inevitably arise in research. These reliability problems include things such as sudden leaks in vacuum systems, vibrations in sensitive optics, and bugs in computer software. The purpose of this book is to help those working in the physical sciences and engineering to:

- identify potential sources of unexpected problems in their work,
- reduce the likelihood of such problems, and
- detect and eliminate them if they occur.

Most of the problems discussed herein concern technical matters, as in the above examples. However, a significant part of the book is devoted to human errors and biases, and other similar issues.

In modern research it is common practice to employ a variety of different experimental methods, often in combination. Some – such as electronics, computing, vacuum, and optics – can be considered “core techniques,” which are widely used in many areas in the physical sciences and engineering. These are a major focus of this book. There are numerous specialized techniques used in particular research fields that can be sources of problems, but which cannot be included in a work of this size.

If one aims to cover a large range of subjects in a single volume, the depth at which they can be treated is necessarily limited. For this reason, each chapter is accompanied by extensive references that will allow the reader to explore the issues in more detail. Those that seemed to the author to be particularly useful have been placed in the “Further reading” sections at the ends of most chapters. Each chapter is also provided with a summary of some of its key points. This allows busy readers to obtain a quick overview of important potential problems and their solutions in a particular area.
It is not the purpose of this book to provide basic instruction in the various experimental techniques and their use. For that, the reader is referred to the references at the end of each chapter. It is assumed that the reader is familiar with the principal ideas behind the methods discussed here, and with their associated terminologies. Also, this book is not intended to cover safety matters, although safety is mentioned now and again, and many of the suggestions contained within can improve it.

Many people assisted the author in one form or another, with their advice, criticism, and information; it would be very difficult to acknowledge them all. The author is particularly grateful for the help provided by the late Gordon Squires, as well as Gil
Lonzarich, and Frank Curzon, in the form of discussions and remarks concerning the manuscript. He also appreciates conversations with, and information given by, the following members of the Cavendish Laboratory: Patricia Alireza, Doug Astill, Rik Balsod, Sam Brown, Dan Cross, Malte Grosche, Dave Johnson, Dima Khmelnitskii, Chris Ko, Keith Matthews, Chris Moss, Sibel Ozcan, Stephen Rowley, Montu Saxena, Leszek Spalek, and Michael Sutherland. The assistance provided by Nevenka Huntic, at the Rayleigh Library in the Cavendish Laboratory, is gratefully acknowledged.
Abbreviations
CA      computer algebra
CMRR    common mode rejection ratio
EMI     electromagnetic interference
ESD     electrostatic discharge
HVAC    heating, ventilating, and air-conditioning
IMRR    isolation mode rejection ratio
LED     light-emitting diode
MOV     metal oxide varistor
NEG     non-evaporable getter
OFHC    oxygen-free high conductivity
PLC     programmable logic controller
PMT     photomultiplier tube
PTFE    polytetrafluoroethylene
R       rosin
RA      fully activated rosin
RFI     radiofrequency interference
RH      relative humidity
RMA     mildly activated rosin
TDR     time domain reflectometer
TIG     tungsten inert gas
TSP     titanium sublimation pump
UHV     ultrahigh vacuum
UPS     uninterruptible power supply
1 Basic principles of reliability, human error, and other general issues

1.1 Introduction

A number of basic qualities or conditions are of value whenever reliability is an issue. These include: (a) simplicity, (b) redundancy (providing duplicate or backup components or systems), (c) margins of safety, (d) modularity (dividing complicated things into simple components), and (e) conservatism (using conservative technology). These factors, and others, are considered in the sections below.

Human error is, of course, a very important cause of problems in all activities. It might be thought that little can be done to prevent such errors, but this is far from the case. For example, numerous investigations have been carried out (mostly in the aviation and nuclear industries), which show that errors are generally not completely random and unpredictable events, but usually follow regular patterns. These results, which are discussed below, suggest ways of avoiding errors, or at least mitigating their consequences.

Other sections of the chapter discuss record keeping in the laboratory (the lack of which is a common cause of problems), the maintenance and calibration of equipment, and general strategies for troubleshooting apparatus and software.
1.2 Central points

The following are very general principles of reliability that recur repeatedly in all activities in research.

(a) Simplicity

The imperative to keep things simple is usually well understood, but not always practised. It is especially important in a university environment where inexperienced research workers are involved. A frequent cause of the erosion of simplicity is the desire to make something (experimental apparatus, computer software, calculations) as general-purpose as possible, rather than tailoring it to a specific task. Also, workers sometimes feel inclined to add unnecessary features to apparatus or software that is under development, in the belief that, although they are not needed immediately, these features might be useful in the future. This tendency often leads to difficulties. Another cause of troublesome complexity is the desire (particularly common among
beginners in research) to demonstrate one’s ability through the mastery of complicated experimental, theoretical, or computational “machinery.”

It is possible to take the principle of simplicity too far. For example, in the case of electronic circuit design, the need to provide proper circuit protection (e.g. over-voltage protection on sensitive input circuits), adequate margins of safety, and where necessary redundancy, is normally more important than reducing the number of components [1]. Furthermore, in systems in which human error during use is an important source of reliability problems, an increase in system complexity for the purpose of automating tasks can lead to an overall improvement in reliability. An example of this would be the automation of valve operations in a high-vacuum system.

An important benefit of simplicity, which is perhaps not always appreciated, is that it often makes troubleshooting easier in the event of a failure.

(b) Redundancy

The implementation of redundancy can range from very elementary measures to relatively sophisticated ones. At the most basic level one may, for example, have a backup piece of experimental apparatus, which can be swapped with a malfunctioning unit when necessary. To take another example, it is usually feasible to provide extra wires in cryogenic instruments, as insurance against possible losses of working wires due to breakages or short circuits. Also, for instance, containers that might be subjected to gas overpressures can be supplied with two pressure relief valves, so that if one fails to open when required, the other should do so.

At the other end of the scale of sophistication, there exist multiply redundant computer systems, involving several computers that work separately on the same calculation, and then automatically compare their results. If the results of one computer are found to be different from those of the others, it is ignored, and that computer is then taken out of the system.
The use of redundancy can be a very effective method of improving reliability, and it is heavily used in areas where high levels of reliability are needed (such as on spacecraft). However, since its use normally involves additional cost, complexity, and bulk, it should not be employed as a substitute for sound design or technique. In general, redundancy should be applied only after all other methods for improving reliability have been tried, unless the highest levels of reliability are essential. (The use of redundant pressure relief devices on cryogenic vessels would be an example of the latter situation.)

There are several other situations in which redundancy is regularly used. For instance, computer hard drives can be unreliable, and in order to prevent the loss of important information in the event of failure, redundant arrays of these devices are often employed. This is not difficult to do – see page 492. Power supplies are also frequently a source of trouble (see pages 395 and 494). In circumstances in which high reliability is very important, power can be provided by redundant power-supply systems, which comprise two or more independent power supplies. If one such unit fails, the others will automatically and immediately compensate. These systems are often used in server computers. Finally, in some measuring systems, redundant sensors are employed to guard against errors due to sensor damage or loss of calibration.
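The result-comparison scheme used in multiply redundant computer systems amounts to a majority vote among the units. The following sketch is illustrative only (the function name and the simulated results are invented for this example, not taken from the book):

```python
from collections import Counter

def majority_vote(results):
    """Return the value reported by most of the redundant units.

    A single faulty unit is outvoted by the others; if no value has
    a strict majority, the disagreement is treated as a system-level
    failure rather than silently picking one answer.
    """
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: redundant units disagree")
    return value

# Three redundant "computers" working on the same calculation:
print(majority_vote([42, 42, 42]))  # all units agree
print(majority_vote([42, 41, 42]))  # the one faulty unit is outvoted
```

Note that such voting protects only against independent faults: if every unit runs the same flawed algorithm, all of them will agree on the wrong answer.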
It is possible for redundancy to lead to a reduction of reliability, if sufficient thought has not been given to its implementation. A classic example of this involves a twin-engine airplane with a single-engine ceiling of 1220 m above sea level [2]. The two engines are redundant only under some conditions. If the airplane is flying over Denver (which lies 1610 m above sea level), the presence of two engines doubles the chances of crashing because of engine failure.

When using redundancy, one should always be on guard for “common mode failures,” in which the benefits of having redundant elements are negated by the occurrence of a fault that affects all the elements. For example, in the case of pressure relief valves, the advantage of having two valves would be lost if they shared a common passage to the container, and this passage became blocked. In the case of redundant computer systems, the use of redundancy would be to no avail if all the computers used the same algorithm to perform the calculation, and an error occurred due to a problem with the algorithm. The use of redundancy is discussed in depth in Ref. [1].

(c) Margins of safety

A margin of safety is appropriate whenever some operating parameter cannot pass beyond certain limits without causing failure, and one would like to take into account uncertainties or unforeseen conditions. Examples of physical parameters include power, electric current, pressure, and number of operating cycles. A specific case involves using a pressure vessel only at some fraction of its bursting pressure, in order to allow for ignorance about the material properties, uncertainties in calculations of the bursting pressure, errors in the pressure measurement, and mechanical fatigue. The notion of a “margin of safety” is also used more generally whenever one would like to allow for uncertainty, even when there is no element of actual physical danger.
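The pressure-vessel case reduces to a one-line calculation. The following sketch is purely illustrative (the burst pressure and safety factor are invented numbers, not values from the book):

```python
def derated_limit(burst_pressure, safety_factor):
    """Maximum working pressure for a vessel, given a safety factor.

    The safety factor absorbs uncertainties in material properties,
    in the calculated bursting pressure, in pressure measurement,
    and allowance for mechanical fatigue.
    """
    if safety_factor <= 1:
        raise ValueError("safety factor must exceed 1")
    return burst_pressure / safety_factor

# A vessel calculated to burst at 400 bar, operated with a
# hypothetical safety factor of 4:
print(derated_limit(400.0, 4.0))  # 100.0 bar maximum working pressure
```

The tradeoff discussed in the text is visible here directly: a larger safety factor buys reliability at the cost of usable operating pressure.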
A nonphysical example is the use of extended-precision arithmetic in order to take account of round-off errors in numerical calculations. The reduction of the magnitude of a physical operating parameter in order to increase reliability is often referred to as “derating,” and is discussed in more detail on page 58.

The use of a margin of safety usually involves making a tradeoff with performance (operating pressure, in the above example). This means that there is often a temptation to reduce the safety margin in an arbitrary way in order to gain performance. Needless to say, in general this should be resisted. It has been said that, during the development of a device or system, 2/3 of the difficulties and 1/3 of the costs are incurred while attaining the last 10% of the desired performance [3]. While this rule-of-thumb was coined with regard to aircraft development, the sentiment is equally applicable to research. If the performance of an instrument or a technique is being pushed close to the very edge of what is possible, margins of safety will inevitably have to be compromised, and reliability problems are bound to appear more frequently.

(d) Modularity

The management of complicated things is normally done by dividing them up into a number of simpler independent ones (“modules”). If necessary, these can be further subdivided, until the stage is reached where further subdivision is no longer necessary. This technique is a very general way of dealing with complexity.
Basic principles of reliability
In the case of experimental apparatus, the use of modules makes it possible to:

(i) more easily understand the workings of the apparatus by hiding irrelevant complexity within the modules,
(ii) create reliable complicated apparatus by permitting the assembly of a number of simple units (possibly of a standard design with well-characterized behavior), which can be designed and debugged easily in isolation,
(iii) readily diagnose faults by allowing suspect portions of the apparatus (one or more modules) to be easily swapped with known working ones,
(iv) quickly repair faults by making it straightforward to replace defective parts of the apparatus, and
(v) more easily implement redundancy strategies.

The use of modularity is also invaluable in writing computer software (in which the modules are called “routines”), and in performing mathematical calculations (where complicated operations are broken up into a number of simpler ones, which can then be handled separately).

Although the use of modularity normally improves the reliability of a complex system, it should not be thought that just because the modules work reliably, the system as a whole will necessarily do so. There is a class of faults called “sneaks,” in which the system as a whole fails even though the modules that comprise it work correctly. In such cases, the overall design (e.g. the interconnections between components in an electronic circuit) is incorrect. Computer software bugs are often a form of sneak. Hence, it is always necessary to think about the possibility of sneaks during the design of a system. Furthermore, one must test the system as a whole once it has been assembled, and not just assume a priori that satisfactory behavior is guaranteed by the proper functioning of its subunits. The issue of sneaks is dealt with in more detail in Ref. [1].

(e) The advantage of small incremental improvements

Making large changes in some parameter (sensitivity, power level, etc.)
brings with it the risk of unanticipated changes in some other quality (or qualities) that could otherwise have been predicted, and accounted for, if the change were small. Small incremental improvements, made one at a time, have the advantage that, since one remains close to the starting point, the relationship between the change and any negative consequences is fairly simple, so that it is easy to tell what needs to be done in order to make corrections. When everything is under control, another incremental change can be made. In this way, one carefully and controllably alters the parameter until the desired improvement is achieved.

(f) Using conservative technology

If reliability is an issue, it is seldom a good idea to employ a new technology (e.g. a novel type of commercially made instrument) without giving it time to be tested by other users, and for improvements to be made as a result of their experiences. First-generation things are often best avoided. It may take years for the inevitable problems to be sorted out.

(g) Testing versus sound design and construction

While there is no doubt about the importance of testing in gaining assurance that a given item of equipment or software operates correctly and reliably, testing is not a substitute for sound design and construction. One cannot expect to be able to expose by testing, and subsequently
repair, all possible potential problems. Reliability can come only from a correct design and its proper implementation. This is especially important when intermittent faults are a possibility, since the presence of these may not be detected by testing (see the discussion on page 60).
1.3 Human factors

1.3.1 General methods and habits

1.3.1.1 Introduction

In general, human error is responsible for a great many of the reliability problems that can occur. Strictly speaking, it is responsible for virtually all of them. However, here we will concern ourselves only with errors taking place within the research environment, and not those occurring (for example) at a factory where some apparatus may not have been designed correctly. Suitable general approaches to research, habits, and abilities are therefore very important in averting such problems. The habit of being careful is obviously a desirable attribute, but it is insufficient by itself. Experience and imagination are also invaluable in foreseeing and averting potential difficulties. Patience and attention to detail (see page 7) are yet other useful characteristics.
1.3.1.2 Finding out what is known

It has been said that six months of work in the laboratory may be saved by six hours spent in the library [4]. This is not an exaggeration, and indeed the importance of reviewing previously published work before beginning a research project is hard to overemphasize. It is not uncommon for new investigators in a field to reinvent the same techniques, and make the same blunders, as others who have already described their experiences in print. This redundant knowledge is often gained at the cost of considerable effort and expense. Ignorance of well-known pitfalls in experimental method can, and sometimes does, also lead to the publication of erroneous data.

It sometimes happens that a “new” scientific phenomenon or theory is observed or developed, and announced, only to be subsequently revealed as something that is already known. The history of science abounds with instances of such rediscoveries. An example and brief discussion of this is provided in Ref. [5]. With the availability of a huge variety of information sources on the Internet, and some excellent search engines, it is hard to justify not making the effort to find out what is known. Strategies for carrying out literature searches are discussed in detail in Ref. [4].
1.3.1.3 A digression on sources of information

Much useful information about research instruments and techniques can be found in scientific instrumentation journals and related periodicals. These include: Review of Scientific Instruments, Measurement Science and Technology, Cryogenics, Journal of Vacuum Science and Technology, Nuclear Instruments and Methods, Applied Optics, and SIAM Journal on Numerical Analysis. There are numerous books on various topics related to these subjects (some of which are listed in the following chapters). If a book or journal is not present at one’s own research establishment, one should consider making use of the “interlibrary loan” facilities that are often available. The use of these is usually very straightforward. (Keep in mind that not everything is on the Internet.)

Doctoral dissertations can be useful sources of detailed technical information that does not get published. The solutions to technical problems provided by these are often inelegant and extempore. However, they can provide information about the kinds of problem that can occur and their character, and are frequently based on firsthand experience. Copies of dissertations can be obtained online from national libraries (e.g. the British Library in the UK) or commercial sources (e.g. Ref. [6]).

Company websites are often a useful source of information on the correct use of a generic type of equipment, potential problems with it, and possible solutions. Nor should one neglect printed catalogs. These can contain information that is complementary to that provided by the websites, and are often in some ways easier to use.

Searching for journal articles containing needed information has long been possible with computer databases of journal titles and abstracts. Using appropriate keywords, it is often possible to find very helpful information amidst the vast number of articles that have been printed.
Some useful databases are INSPEC, Web of Science, and Metadex. Google provides an online facility that makes it possible to search the entire contents of a very large number of journals (and not just the titles and abstracts) [7]. Using the appropriate keywords in the correct combination is very important, and the acquisition of skill in this activity can be highly valuable. One useful way of obtaining a variety of suitable keywords is to search the Internet in the normal way. Even if the results of this search are themselves not relevant, the web pages may contain words that can be used as keywords in database searches. A thesaurus (or synonym dictionary) can be a useful source of keywords.

Google also provides a very useful website that allows one to do an online keyword search of a large fraction of the books that have ever been printed, and to display parts or all of the book pages containing these keywords [8]. (As of the time of writing, there are plans to allow all the pages in any book to be viewable, thereby turning this website into an online library of great comprehensiveness; however, these plans are mired in legal controversy.) This facility does for books what the computer databases do for journals. In addition, it can be used to augment the index of a printed book that one already has in one’s possession. It allows searches of a particular book for words that are not necessarily contained in the book’s index, and can also search
for combinations of words and phrases. The latter is, of course, generally not possible using an ordinary printed index.
1.3.1.4 Periodically review the state of the art

It is a good idea to review the state of the art and capabilities in any given area from time to time – things change. One should not make assumptions about the best methods or technologies to use on the basis of what was done many years ago. Instrumentation and methods are often in a state of flux. An entirely new and superior technology or method may suddenly appear. Or perhaps an alternative, previously inferior, technology or method may improve to the point where it becomes the one of choice.

While we are normally accustomed to things improving, they sometimes get worse. Excellent products can disappear if their manufacturers go out of business, or the product is discontinued either because it wasn’t selling or because an important part that went into its construction is no longer available. Alternatively, the quality of a product may be degraded in order to cut costs, or because technical people with unique skills have left the firm that makes it.
1.3.1.5 Paying attention to detail

In technical matters, the neglect of small details (even of an apparently trivial nature) can have great consequences. Reliability problems are very often caused by simple things that have not been given sufficient attention [1]. For example, a tiny screw that has not been properly secured in an instrument might work its way loose with vibration and cause short circuits, thereby producing erratic behavior, or even damaging the device. One of the common characteristics of successful experimenters is a knack of paying attention to the right details [4]. The potential importance of such things is vividly illustrated by the following historical cases.

In 1962, the Mariner 1 space probe, which was on its way to Venus, suffered a failure that led to its destruction. The cause of this failure was a missing hyphen in the spacecraft’s computer guidance software [9]. (Some reports have described the error as being a missing overbar, indicating an averaging operation, in the transcription of the original mathematical description of the guidance algorithm.) Mariner 1 cost about 19 million US dollars.

In 2008, the Large Hadron Collider (LHC) particle accelerator underwent a failure that started a chain of damaging events. These led to the complete shutdown of the facility. It has been projected that it will take a total of about a year (as of the time of writing, in early 2009) and some 21 million US dollars to repair the damage and restart the accelerator. The cause of the failure was a single bad solder joint that connected two superconducting magnets being used to control the particle beam [10]. Although a very large number of such joints are present in the accelerator, a
large-scale and highly professional effort was made to ensure that they were satisfactory [11]. However, apparently this was not enough.
1.3.1.6 Difficulties caused by improvisation

In modern scientific research, there is often tremendous pressure to come up with results as quickly as possible. In some ways this is commendable. However, such pressure often leads to improvisation: in choosing and devising an experiment, designing and building apparatus, and taking measurements and interpreting the data. Often the result of such an approach is an ill-conceived research project, apparatus that is difficult to use and unreliable, data that are noisy and untrustworthy, and conclusions that are groundless. Furthermore, extempore solutions to problems can sometimes hide fundamental defects in the various aspects of a research project, and it may be difficult to tell at a later stage precisely where things have gone wrong.

Improvisation is sometimes justified in the early developmental stages of a research project, when it is desirable to validate an experimental concept (e.g. a method of measurement or a type of apparatus) that is not amenable to exact analysis. Sometimes the rapid pace of a particular field of research precludes long-term planning and construction. In such situations, improvisation at some level may be unavoidable. A common problem is that arrangements that were intended to be temporary end up becoming permanent. The subject of improvisation versus planning in the construction of apparatus is discussed in Ref. [4].
1.3.2 Data on human error

1.3.2.1 Frequency of problems caused by human error

It has been reported [12] that during human test or maintenance activity in general, the probability of a fault being put on an item lies in the range between 10⁻⁴ and 10⁻² per operation, depending on the complexity of the task. Therefore, for a maintenance routine comprising a large number of operations, the chances of introducing a failure may be very significant. (Many things done in a research laboratory, e.g. changing samples in an instrument, are similar in character to maintenance activities.)

A study of 180 “significant event reports” at nuclear power plants between 1983 and 1984 [13] suggests that of the 387 root causes that were identified, “human performance” comprised 52% of causes, “design deficiencies” were given as 33%, while “manufacturing, etc.” and “other/unknown” external causes roughly equally comprised the rest. (NB: The precise values of the statistical data provided in this and the following sections are not important, since the statistical uncertainties are large. The numbers merely provide a means of identifying the most salient problems and their causes, and indicating the relative qualitative significance of such problems and causes.)
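The compounding of per-operation fault probabilities mentioned above can be checked directly. Assuming independent operations (an idealization), the chance of at least one fault in n operations is 1 − (1 − p)^n:

```python
def p_at_least_one_fault(p_per_op, n_ops):
    """Probability of introducing at least one fault during a routine of
    n_ops independent operations, each with fault probability p_per_op
    (the cited range is roughly 1e-4 to 1e-2 per operation)."""
    return 1.0 - (1.0 - p_per_op) ** n_ops

# Even with a mid-range per-operation probability of 1e-3, a 100-step
# maintenance routine is quite likely to introduce a fault:
print(round(p_at_least_one_fault(1e-3, 100), 3))  # -> 0.095
```

At the pessimistic end of the cited range (10⁻² per operation), the same 100-step routine would introduce a fault with probability of roughly 63%, which is why long maintenance procedures are so hazardous to reliability.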
Table 1.1 Dominant types of human error, from a survey of error types in 200 nuclear power-plant incidents (see Ref. [13]). The actual number of cases of each type is indicated in parentheses.

Omission of functionally isolated acts    34% (68)
Latent conditions not considered          10% (20)
Other types of error, unclassifiable      10% (20)
Other types of omission                    9% (17)
Side effect(s) not considered              8% (15)
Simple mistakes among alternatives         5% (11)
Alertness low                              5% (10)
Manual variability (i.e. clumsiness)       5% (10)
Spatial orientation weak                   5% (10)
Strong expectation                         5% (10)
Familiar association                       3% (6)
Absent-mindedness                          1% (3)
1.3.2.2 Dominant types of human error-related problems

In another study, of 200 “significant events” at nuclear power plants [13], a breakdown of the error types was made (see Table 1.1). The terms in Table 1.1 are defined as follows:

(a) omission: the failure to perform one or more of the actions needed to accomplish some goal [13],
(b) functionally isolated: isolated from the main purpose of the task (e.g. switching equipment from “test” or “standby” mode to the normal operating mode) [14],
(c) latent conditions: conditions whose adverse consequences do not immediately manifest themselves, but lie dormant within a system (e.g. experimental apparatus) for an extended period (>1–2 days), until they combine with other factors to create a problem [13],
(d) mistakes among alternatives: for example, mistakes in setting switches, such as “up/down,” etc.,
(e) strong expectation: making an assumption, rather than observing the actual situation [14],
(f) familiar association: inadequate application of rules for interpreting phenomena.

Latent errors (which result in latent conditions) often occur when a potential problem is dismissed as being unimportant, because it is not realized how latent conditions can multiply over time and eventually combine to create an actual problem. Some major accidents in the public domain, such as the Challenger space-shuttle accident and the Chernobyl disaster, have been attributed to latent errors [13].

It was found that omissions were the dominant type of error [13]. These could include such things as forgetting to set a valve to the correct position, or omitting some steps in a procedure. Omissions comprised 43% of all errors. Other studies have arrived at similar conclusions. The omission errors in the above survey were most closely associated with
testing, calibration, and maintenance operations. Omissions in general are responsible for an immense amount of wasted time in scientific research [4]. Some facts regarding the chances of making an omission error are as follows [13], [15].

(a) Such errors occur more often if one is carrying out routine tasks while distracted or preoccupied.
(b) The presence of a large number of discrete steps in the action sequence comprising the task increases the chances that at least one will be omitted.
(c) If the amount of information needed to carry out a step is large, the chances are high that items in that step will be omitted.
(d) Steps that are not clearly cued by previous ones, or do not succeed them in a direct linear sequence, stand a good chance of being omitted.
(e) If instructions have been given verbally and there are more than five simple steps, those in the middle of the list are more likely to be omitted than those at the beginning or end.
(f) In the case of written instructions, isolated steps (i.e. those not clearly associated with the others) at or near the end of the list are likely to be omitted.
(g) Steps in an action sequence involving reassembly (e.g. of apparatus) are more likely to be omitted than those of the original disassembly.
(h) If certain steps must be performed on some occasions, but not on others, then these steps stand a higher chance of being omitted. This is especially true if such steps are needed relatively infrequently.
(i) In a highly automatic task that has been well practised, unexpected interruptions are likely to lead to omissions.
(j) If the person who finishes a task is not the same as the one who started it, it is more probable that omissions will occur.

Highly automated, routine tasks are also vulnerable to premature exits (omitting some final steps in the task), especially if there is time pressure or another job waiting to be done.
Omission errors are a regular problem during apparatus construction, maintenance or reassembly activities. Forgetting to install or properly tighten fasteners (bolts, screws, etc.) is very common, particularly if multiple fasteners must be installed [15]. In electronic work, it is not uncommon to forget to apply solder to a connection, if a number of connections that require soldering are present. (This can lead to troublesome intermittent electrical contacts.) Hardware items in general are frequently not connected or left loose, or are missing altogether. Naturally, the risk of making such errors increases if the items involved (e.g. fasteners) are subsequently covered up by other components during further assembly work. Also, the removal of foreign objects and tools from a work area (e.g. the inside of a vacuum chamber or an electronic instrument) at the end of a job is frequently omitted.
1.3.2.3 Dominant causes of human error-related problems

In the study of “significant event reports” at nuclear power plants discussed on page 8, the category of “human performance” problems was broken down by cause of problem (see Table 1.2).
Table 1.2 Dominant causes of human error-related problems, from a survey of 180 “significant events” at various nuclear power plants (from Ref. [13])

Deficient procedures or documentation    43%
Lack of knowledge or training            18%
Failure to follow procedures             16%
Deficient planning or scheduling         10%
Miscommunication                          6%
Deficient supervision                     3%
Policy problems                           2%
Other                                     2%
Deficient procedures are those which [15]:

(a) contain incorrect information,
(b) are unworkable or inapplicable in the current situation,
(c) are not known about,
(d) cannot be located,
(e) are out of date,
(f) cannot be understood, or
(g) have not been written to cover the task at hand.
Issues concerning procedures are discussed in more detail on page 18. Documentation includes things such as manuals, system diagrams, product data sheets, etc.
1.3.3 Some ways of reducing human error

1.3.3.1 Adverse mental states

Emotional stress

Major emotional upsets can have a large impact on human error. For instance, such events can make people more prone to certain types of illness [15]. Distressing thoughts can also be distracting, especially under low workload conditions. (Distractions can lead to omission errors – see Section 1.3.2.2.) Furthermore, people are more likely to take risks while under severe emotional stress. It is, of course, often difficult to avert the kinds of things that lead to major upsets (e.g. family problems or financial troubles). However, one can at least minimize their consequences by avoiding tasks where errors could cause irreversible problems.
Frustration

Frustration and aggression are clearly linked [15]. When a person becomes frustrated, what might normally be a careful and contemplative mentality is replaced (at least partially) by a brute-force one, which often leads to very risky behavior. When one is working in the laboratory, it is essential to be able to recognize this condition, so that one’s attention can be focused (or refocused) on activities where mistakes will not have any major adverse consequences.
Fatigue

Moderate amounts of sleep deprivation are probably fairly common in research workers, who may work odd hours in order to tend their apparatus. With regard to causing errors, the effects of moderate sleep deprivation are much like those resulting from alcohol use [15]. In many situations, being awake for 18 hours reduces one’s mental and physical abilities to what they would be if one had a blood alcohol concentration (BAC) of 0.05%. (NB: This is at or above the legal BAC limit for driving in many countries.)

The sort of activity that is most likely to be adversely affected by fatigue is one that is both boring, and involves the detection of rare problems [15]. (Certain inspection tasks fall in this category.) Fatigued people tend to have difficulty controlling their attention. Impaired short-term memory, and memory lapses in general, can also be troublesome. Unfortunately, a person suffering from fatigue may not be conscious of how far their abilities have deteriorated.

Fatigue brought on by working outside normal hours can be reduced in the following ways [15].

(a) Do not work more than three consecutive night shifts.
(b) Permanent night work should be avoided.
(c) Shifts should be rotated forward in time, i.e. morning→evening→night.
(d) A break of at least two days should be taken following the last night shift.
(e) No more than five to seven consecutive days should be spent working outside normal hours.
(f) There should be at least 11 hours of rest between shifts.
Regarding point (b), although one might think that the body should be able to adapt to working at night on a permanent basis, this is not the case [15]. Daytime sleep is not as long or refreshing as that which is obtained during the night. Working for more than 12 hours, or not having a good nighttime sleep during the last 24 hours, is likely to bring about fatigue-related errors [15].
1.3.3.2 Preparation and planning

It appears that a significant fraction of errors in general stem from a reluctance, or perhaps even an inability, to plan. Without using imagination and experience to foresee possible problems, the kinds of things that lead to errors are almost guaranteed.

Mentally rehearsing a procedure before carrying it out can be very useful in preventing trouble [4]. In doing this, one imagines going through the procedure step-by-step, trying to anticipate the various problems that could arise at each stage, and thinking about how these could be dealt with. If the procedure is to be used in carrying out an experiment, one can stand in front of the apparatus, and visualize the events that are about to take place. One can ask oneself, for instance: what should be done if a leak appears, or there are high levels of electromagnetic interference, or the expected signal is not detected, or the computer stops recording data? How can such events be prevented in the first place? Although this process
of mental preparation will probably not catch all the problems that may occur, it can be highly effective at minimizing the effects of unforeseen difficulties. Various psychological studies have shown that mental rehearsals can greatly improve the quality and reliability of work done by surgeons and other professionals [15].

A very useful strategy for improving a written plan is to place it to one side after preparing it, and do something else for a while. This enables one to forget the contents of the plan to a certain extent, as well as some of the biases that were involved in its creation, and thereby gain a fresh perspective. When the plan is viewed again a few weeks or months later (the longer the better), it is usually much easier to spot its flaws. (This technique is an excellent way of improving other written items as well, such as scientific papers, mathematical calculations, computer software, etc. Indeed, it can be argued that scientific papers should always be reexamined in this way – if possible, several times, separated by significant time intervals, before being submitted to a journal.)

Another helpful technique is to have the plans reviewed by several other people. It can be useful to include among such reviewers an intelligent non-expert, who may be able to spot things that are missed by specialists. If it is not possible to have others review one’s plans, it may be helpful to imagine that one is describing the plans to a critical audience. This will help to ensure that they are at least coherent, and may allow one to see things more clearly from other people’s perspective.

Overconfidence is a source of numerous strategic and tactical blunders when plans are created. It has been found [13] that the reluctance of problem-solvers to abandon or modify defective plans is greatest when:

(a) the plan is very complicated,
(b) the plan is the result of a large amount of labor and emotional investment, and its completion was accompanied by a significant reduction in anxiety or tension,
(c) several people were involved in preparing the plan, and particularly if these formed a small, elite group, or
(d) the plan has hidden objectives – i.e.
it was created, either consciously or unconsciously, to satisfy several different motives or needs.
1.3.3.3 Automation

In those situations where its use is practical, automation is a highly effective method for reducing or substantially eliminating human error. For example, there is little doubt that the collection of data by computer has greatly improved its reliability, in comparison to that achievable in the days when manual methods were employed. Routine, repetitive tasks are particularly good candidates for automation. The sophistication of such methods can vary from using a simple device to automatically refill a liquid-nitrogen dewar, to employing a programmable logic controller (or PLC) to operate a complex high-vacuum system. Of course, the presence of such automation necessarily increases the complexity of an operation. However, in some cases the resulting reduction of human error can more than
make up for this. In other situations (for example, when unreliable mechanisms are involved) it may be found that doing things manually is the most satisfactory method. In the latter case, it may be possible to obtain some of the benefits of full automation without its drawbacks by using a semiautomatic approach. For example, a computer system may sense a condition that needs to be corrected by operating a valve or moving a linkage, and then tell a human operator what has to be done.

The use of a calendar program in a handheld PDA (personal digital assistant), or similar device, is an inexpensive but very effective method of ensuring that human tasks get done at the correct time. Such programs are also available for PCs.
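A minimal sketch of the semiautomatic approach described above is given below. Here `read_pressure` and `notify_operator` are hypothetical stand-ins for whatever instrument interface and alerting mechanism are actually in use, and the pressure threshold is invented for the example.

```python
PRESSURE_LIMIT_MBAR = 1e-5  # illustrative threshold for a high-vacuum system

def check_vacuum(read_pressure, notify_operator):
    """Sense an out-of-range condition and tell the operator what to do;
    the corrective action itself remains manual."""
    p = read_pressure()
    if p > PRESSURE_LIMIT_MBAR:
        notify_operator(
            f"Chamber pressure {p:.2e} mbar exceeds "
            f"{PRESSURE_LIMIT_MBAR:.0e} mbar: close the gate valve "
            "and check the backing pump."
        )
        return False
    return True

# Demonstration with stand-in functions:
messages = []
ok = check_vacuum(lambda: 3.2e-4, messages.append)
print(ok, len(messages))  # -> False 1
```

Keeping the human in the loop in this way avoids entrusting an unreliable mechanism with the corrective action itself, while still removing the error-prone monitoring burden from the operator.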
1.3.3.4 Averting errors due to overdependence on memory

Memory lapses in general are the most frequent physiological cause of error in maintenance work [15]. (As mentioned earlier, many activities in the laboratory, such as changing samples in an instrument, are similar in character to maintenance.) Hence, it is always risky to interrupt a task in order to do something else, without leaving some sort of reminder as to what stage the task is in. Likewise, if an important step in a task is skipped, with the idea that it will be done at a later time, there is a strong chance that this step will be omitted altogether unless a reminder is left concerning its status. Detailed information, such as measurements, should never be entrusted to memory. Such things should always be written down immediately (see the discussion on page 24).
1.3.3.5 Use of checklists, and other ways of preventing omission errors

As indicated in Section 1.3.2.2, omissions are the most common type of error. The use of checklists is one of the best techniques for preventing such problems. Despite their undoubted utility, these aids are probably not used nearly as much as they should be. Every important laboratory procedure that involves a significant number of steps, such as setting up apparatus for an experiment, should be carried out with the aid of a checklist [4]. It is clearly vital that carefully composed and tested checklists be used whenever highly important or very time-consuming or expensive tasks must be performed. (An example of a highly important task would be one involving an experiment that can be done only once, such as observing an eclipse.) However, checklists are also very useful even for routine laboratory work. Use can be made of the list (a)–(j) in Section 1.3.2.2 to reduce omission errors. For example, isolated steps at the end of a written list can be highlighted or duplicated in order to increase the probability of their being noticed and executed.
1.3.3.6 Omissions caused by strong habits A kind of omission error can occur when one is in the process of doing something that is at odds with a firmly established habit. An example from everyday life is the tendency to write the wrong dates on forms or letters in the first few weeks of a new year. Such mishaps are called omitted checks, and the main effect that causes them is known as strong habit intrusion [13]. Omitted checks generally occur only if a normal (habitual) action is replaced by an unusual (non-habitual) one when the actions are part of a well-practiced procedure done in familiar surroundings. In research, an example of something that conflicts with habits is the use of 0 as a matrix index in mathematical or computing work, since the common-sense counting numbers start with 1, not 0 [16]. One should try to avoid using procedures involving actions that conflict with firmly established habits. If this is not feasible, it may be desirable to use a checklist or other memory aid, at least until the unusual action becomes habitual. Omitted-check errors are discussed in Ref. [13].
6 For example, involving an experiment that can be done only once, such as observing an eclipse.
1.3.3.7 Physical environment and physiological state General guidelines Uncomfortable working conditions are a distraction, and therefore a potential cause of errors. Also, such conditions can reduce alertness. One recommendation for a technical work environment [17] indicates that the light level should have a minimum value of 1080 lux on the work surface. At least 90% of the work area should be without shadows or severe reflections. The work area should be provided with fresh air, but free of draught. A recommendation has been made by ISO (International Organization for Standardization) concerning what should be considered acceptable indoor working conditions [18]. The relevant standard (ISO 9241–6) advises that for most people in mild climatic zones, acceptable room temperatures are in the ranges: 22 ± 3 ◦ C during the winter, and 24.5 ± 2.5 ◦ C during the summer. These values are for sedentary activity, and a relative humidity level of 50%. The ideal temperature ranges differ somewhat from person-to-person, so it is best if local temperatures can be adjusted to suit individual requirements.
Air quality Various types of indoor air pollutants are known to cause physiological problems, including: (a) irritation of the eyes, nose and throat, (b) dizziness, (c) headaches, and (d) fatigue. More serious long-term health problems are also possible. These pollutants include tobacco smoke, biological contaminants from contaminated air handling systems, organic chemicals, formaldehyde from wooden furniture, and others. Such pollutants may exist because of poorly designed, operated and maintained ventilation and cooling systems, the use of chemicals inside buildings, and the use of a building for an inappropriate purpose considering its design and construction. More information about this issue can be found in Refs. [19] and [20].
Lighting The 50 or 60 Hz flicker produced by fluorescent lights with ordinary ballasts can cause headaches [20]. For this reason, if fluorescent lighting must be used, lights with
high-frequency electronic ballasts are preferred, since these have relatively low flicker levels. Incandescent lamps are another preferred type. (NB: Fluorescent lights sometimes cause electromagnetic interference in electronic devices – see pages 371, 380 and 389. Replacing ordinary fluorescent lights with ones containing electronic ballasts may increase, or at least change the nature of, such interference.)
Noise The effect of acoustic noise on human performance is complicated, and depends on the tasks being carried out [20]. Intellectual work that requires creativity may be more at risk of impairment due to noise than highly skilled, but routine, activities. Unpredictable noises (such as ringing telephones or slamming doors) tend to reduce performance in circumstances where mental calculations and short-term memory are important. Sounds such as conversation or singing can also have an adverse effect on tasks involving short-term memory – because of the content, rather than the sound level. Predictable sounds (such as the continuous noise produced by a ventilation system) have a smaller effect on such tasks. However, loud noise, whether predictable or not, can affect accuracy in tasks in which clerical functions, good motor skills, or vigilance are needed, and in which two tasks must be performed simultaneously. In some situations, noise may actually improve performance, possibly because it stimulates higher centers of the brain [20]. In this way, it may reduce mental fatigue and boredom. Nevertheless, it generally makes sense to limit the overall background noise level in a working area. However, exceedingly quiet environments (where one could “hear a pin drop”) may be undesirable, because in these, small unexpected sounds become more noticeable and distracting [20]. In some cases, it may be worthwhile to use white-noise generators (or “white-noise machines”) to mask unwanted low-level sounds, such as conversation. The installation of soundproofing, in a room or around a noisy device, by a firm that specializes in such work, can be helpful and practical in some situations.
The relevant ISO standard [18] indicates that office background noise levels should not exceed: (a) 35–40 dB(A), for tasks that involve temporary concentration, and are occasionally repetitive, (b) 35–45 dB(A), for tasks that involve temporary concentration, and are occasionally mechanized, (c) 40–45 dB(A), for tasks that are largely mechanized. For difficult and complex tasks generally, the noise level should not exceed 55 dB(A).
Air conditioning The question of whether or not to install air conditioning in a work area is not always decided on a completely rational basis. Air conditioning is often considered to be a luxury. (But is it any more of a luxury than, for example, central heating?) It may be forgotten that air conditioners are essentially heat pumps, and therefore can often be used to warm, as well
as cool, an environment. (Moreover, heat pumps are a very efficient means of providing heating.) The ability of air conditioners to increase the quality of work by improving working conditions, not only for research workers, but also technical staff (particularly during the installation and commissioning phases of projects), deserves more consideration than it is usually given. Since research is a highly labor-intensive activity, it is reasonable to expect that such improvements can quickly make up for the relatively small cost of an air conditioner. Furthermore, by controlling the temperature, and reducing humidity and dust, air conditioning can also reduce major sources of equipment failure. For such reasons, it can be argued that air conditioning is not an extravagance, but a necessity [21]. The issue of hardware cooling is discussed in more detail in Section 3.4.1.
1.3.3.8 Design of systems and tasks Some general principles for designing hardware and software systems and tasks to minimize human error are as follows [13]. (a) The user should have a good conceptual understanding of the way the system works. (b) Tasks should be simplified so as to minimize the load on vulnerable mental processes such as planning, problem solving, and working memory. (c) Arrange things so that users can see what the outcome of an action will be, so that they will know what is possible and what should be done, and what consequences their actions have led to. An example of this is the use of a quarter-turn valve in line with a flowmeter to control cooling water. In contrast to other devices (e.g. many types of multi-turn valve), the quarter-turn valve allows one to see immediately what the state of the system is, and what should be done to put it into a different state, and the flowmeter allows one to see whether water is flowing, and how much. (d) Arrange things so that it is intuitively clear what should be done in order to achieve a given outcome, and what the state of the system is based on what can be perceived. There should also be a direct correspondence between the user's intuitive understanding of the state of the system, and the actual system state. A good example of these principles can be found in modern computer operating systems that use “graphical user interfaces,” with their icons, drag-and-drop file manipulation, and windows. (e) Make use of natural or artificial constraints to guide the user to the next appropriate decision or action. Generally, one should make it easy (and not just easy, but convenient) and natural for people to do the right thing, and difficult and unnatural for them to do the wrong thing. For instance, it is desirable to get items that can be mistakenly used and cause problems out of the experimental area.
These include things such as faulty electronic equipment (which should also be tagged), leaky vacuum hoses (ditto), and intermittent electrical cables (generally, these should be destroyed). Likewise, anonymous chemicals in unmarked vessels should be disposed of. In a similar way, in order to avoid potentially harmful improvisation or the neglect of necessary tasks, the proper tools and equipment, as well as consumable items and parts, needed to carry out activities in the laboratory should be immediately available.
In Japan, the expression for this approach in the case of hard physical constraints is “Poka Yoke,” or “mistake proofing” [1]. An example is the use of keyed electrical connectors to prevent them from being inserted in the wrong sockets. Another example is the use of flexible vacuum lines with different sized flanges to prevent them from being connected to the wrong mating flanges. Suitably arranged protrusions and depressions in mating mechanical parts will ensure that they are oriented correctly when joined. Various methods for preventing computer users from entering incorrect information have been discussed [13]. (f) When a system is designed, assume that errors will occur. Use a design which allows operations to be easily reversed, and makes irreversible operations difficult to implement. (g) If the earlier principles (a)–(f) have been tried without the desired success, try standardizing things, such as layouts, outcomes, actions, and displays.
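The software counterpart of these physical constraints is to make invalid input impossible to submit, rather than checking it after the fact. The sketch below is a generic illustration only; the allowed set and prompt are invented for the example.

```python
def read_choice(prompt, allowed, ask):
    """Keep asking until the reply is one of the allowed values - the
    software analogue of a keyed connector, which simply cannot be
    plugged into the wrong socket.  `ask` would normally be `input`,
    but is passed in explicitly so the routine can be tested."""
    while True:
        reply = ask(prompt).strip().lower()
        if reply in allowed:
            return reply
        print("%r is not one of %s; please try again."
              % (reply, sorted(allowed)))

# Example constraint set for a hypothetical gas-handling program.
ALLOWED_GASES = {"argon", "helium", "nitrogen"}
```

The same pattern extends to ranges of numbers and to menu selections; the point is that the routine never returns a value the rest of the program cannot handle.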
1.3.3.9 Procedures and documentation The evidence that procedure and documentation problems are a very common source of human error is presented on page 11. Most of the things that are done in a laboratory are not new, and many have been practised and refined for many years or even decades or more. Hence, in principle there should usually be no difficulty in finding the correct procedure for carrying out a given task. In general, procedures should be written down, if possible, not passed along by word of mouth. This means that hard-to-remember details will not be forgotten, and also makes it difficult to dispute what the correct procedures are if a problem arises. Some procedures, involving “tacit knowledge,” cannot be written down – at least not in their entirety (see page 548). Paper documents are often lost or misplaced. A useful strategy for ensuring that procedures are always available is to place digital copies of these (preferably in PDF format7 ) onto a local website. It may also be helpful to do this with equipment manuals, Ph.D. dissertations, certain journal articles, and so on. Equipment manuals are often available at the manufacturer’s website. (The loss or misplacement of printed manuals is a common occurrence in laboratories, and the source of many problems.) Some modern digital instruments (such as lock-in amplifiers) have built-in help functions, although these can often be somewhat awkward to use and may have only a limited amount of information. Another method of ensuring the availability of a manual is to store it on a USB drive, which is permanently attached to the apparatus. In fact, USB drives are sufficiently capacious that it is possible to store all the information that is available about an instrument on them, including technical drawings, a logbook, service history, etc. In this way, the documentation can be made a permanent part of the instrument. 
7 PDF is a universally accepted digital document format.
One often finds that important information about the operation of some apparatus is buried deep within the instruction manual. In the case of critical information, such as a
warning concerning potential damage, it is a good idea to copy this onto the front panel of the device, or somewhere nearby. Similarly, the web address of the manufacturer should be placed on the apparatus. In some cases, it is desirable and practical to affix a complete list of operating instructions to the device. High durability laser-printable and self-adhesive polyester labels are available, which can be used for this purpose. The creation and preservation of documentation for apparatus and software that has been created in-house is extremely important, but often neglected. What often happens is this. A temporary research worker (e.g. a student) comes into a laboratory, expends considerable effort and resources building equipment or writing software, does some research, and then leaves. Since proper documentation is often either completely absent or totally inadequate, the equipment may never be used again. It has been necessary in some cases to abandon research programs because of this. Making sure that adequate documentation will be created and preserved should be given a very high priority right from the start of a research worker’s stay in a laboratory. Documentation includes things such as properly written and commented computer code, electronic schematic diagrams, mechanical drawings, and instructions on how to use the apparatus or software. If the research is being done as part of a Ph.D. project, some such information could be included as appendices in the doctoral dissertation. It is usually desirable to attach the most salient information about an item (e.g. the schematic diagram of an electronic device) directly to it – on an inside panel, for example. It makes good sense to maintain a collection of books in a laboratory, describing the techniques that are used in the research (vacuum methods, computer programming, data analysis, etc.). 
Research workers may not have time to travel to a central library to obtain these, especially during an experiment when they may be most needed. (Also, it is not uncommon for people to be unaware that books on a subject are even available.) Furthermore, if training in a certain area has not been provided, the presence of such books will at least allow people to teach themselves. Although the books may have to be periodically replaced if and when they are “borrowed,” the gains in overall reliability and productivity can more than compensate for the resulting expenses.
1.3.3.10 Labeling The lack of suitable labels – on electrical cables, switches, pipes and hoses, valves, containers, etc. – is a frequent source of problems. Controls, displays, and electrical connectors on homemade equipment should always be labeled. Furthermore, each apparatus should be given its own number, so that it can be linked with information such as entries in laboratory notebooks, instruction sheets, schematic diagrams, etc. Consideration should be given to the durability and longevity of labels – pen on masking tape is usually not adequate. The inks in some “permanent” marker pens are much more resistant to fading and smudging than those in ordinary types. Special markers are made that can write on difficult surfaces, such as ones made of coated materials, glass, etc. Various robust marking systems for heavy-duty environments are commercially available.
Hand-held labeling machines (or label makers), which are able to produce durable and professional-looking labels, can be obtained from many sources. (However, keep in mind that it is far better to have a legible handwritten label than none at all!) In some laboratories, it is said to be the policy of research supervisors to throw out unlabeled containers (of chemicals, etc.) on sight [4]. Doing this once or twice is usually sufficient to achieve the desired result.
1.3.3.11 Physical and visual access restrictions The placement of apparatus items in awkward places, or those that are difficult to reach or see, can often lead to improper installation and consequent reliability problems. For instance, a vacuum flange may not be seated evenly because of such a difficulty, and might consequently leak. Alternatively, an electrical connector might not be aligned properly with its counterpart, and pin damage may occur when the two are mated. Essential testing (e.g. leak testing) and maintenance may also be neglected, or done incorrectly. Likewise, the proper adjustment of controls and reading of dials during an experiment can be impaired by their placement in inconvenient locations. One should keep the need for access in mind when complicated systems are designed. Problems often arise because of poorly designed enclosures, such as cabinets with racks for mounting electronic instruments, or enclosures for gas-handling systems. Similarly, it is a good idea to arrange apparatus so that it is unnecessary to remove or disturb some components in order to access others. This avoids the possibility of accidentally disabling components that are otherwise of no immediate concern, and improves reliability by simplifying the procedure.
1.3.3.12 Transcription errors The manual entry of written or printed information into a computer is often done inaccurately. For example, typing errors occur routinely while transferring programs that are listed in books into computers. Also, a very common cause of measurement errors when sensors are being used is the incorrect transcription of calibration information from their data sheets into the data acquisition instrumentation [22]. Wherever possible, manual transcription should be avoided. The best approach is to get information that must be entered into a computer in digital form. For instance, a program that has been provided in a book could be obtained from the authors or publisher in the form of a file on a CD. Alternatively, printed information can be entered into a computer using a scanner and some character recognition software. If manual entry is unavoidable, an effective method of averting errors is to have two or more people enter the information independently, and use the computer to compare the separate sets of entries.
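The double-entry check described above is straightforward to automate. In the following sketch (the calibration lines are imagined for the example), two people type the same table independently, and the computer reports every line on which the two versions disagree, so that discrepancies can be settled against the original paper record.

```python
from itertools import zip_longest

def compare_entries(version_a, version_b):
    """Return (line number, entry A, entry B) for every line on which
    two independently typed versions of the same record disagree.
    Differences in surrounding whitespace are ignored, and a version
    that is short some lines shows up as '<missing>' entries."""
    disagreements = []
    pairs = zip_longest(version_a, version_b, fillvalue="<missing>")
    for number, (a, b) in enumerate(pairs, start=1):
        if a.strip() != b.strip():
            disagreements.append((number, a, b))
    return disagreements
```

An empty result means the two transcriptions agree line for line; any other result pinpoints exactly which entries must be rechecked.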
1.3.4 Interpersonal and organizational issues 1.3.4.1 Communication In technical work (e.g. in a laboratory), problems often arise because unspoken assumptions are made, which are not then confirmed by communication between the parties involved
[15]. Afterwards, this leads to reactions such as: “I thought you were going to fill the liquid nitrogen dewar!”, and so on. An experimenter should always obtain direct confirmation about what his or her research partner(s) intend to do, and what they have done.
1.3.4.2 Persuading and motivating people The survey data in Table 1.2 suggest that the proportion of problems that arise due to a failure to follow procedures is relatively small. However, research workers are probably, on the whole, more independent-minded than workers in other fields, so the question of how to reduce such failures deserves attention. People are more likely to avoid following standard procedures if the procedures are inappropriate, clumsy, or slow [15]. The best way of getting them to follow procedures is to improve the procedures, not to argue or to impose penalties of some form. The problem of encouraging workers to follow procedures has been discussed in connection with the reduction of component damage in the semiconductor industry [23]. Here it is suggested that the workers be consulted on issues where there is a reluctance to follow procedures, and if such procedures are considered impractical, try to change them. If the procedures are troublesome to follow, but there are no alternatives, explain their purpose to everybody involved and strongly suggest that they be followed unless a better alternative can be found. People are, of course, more likely to follow procedures if the reasons for these are explained, and if they are allowed to help select or change them. It can also be very helpful if specific cases of problems that have arisen as a result of a failure to follow the procedures can be pointed out. Published sources of information, such as journal articles or books, can be very useful in convincing people of the need to follow certain procedures. It may be counterproductive to require that mandatory procedures be followed in all cases [13]. Procedures cannot allow for all contingencies, so it makes sense to permit procedures to be broken under certain conditions, when the judgment of the workers on the spot indicates that they should be.
Very generally, difficulties in persuading and motivating people are an important cause of uncertainty in research. A classic discussion on how to reduce such problems can be found in Ref. [24]. This work is indispensable for anyone who manages people, and very useful for everyone else. It should be re-read periodically.
1.3.4.3 The value of division of labor The utility of dividing the responsibility for performing various aspects of a complex task among different people, in terms of improvements in quality and speed, is widely acknowledged in the world at large [25]. This, in large part, is the result of the specialization that the division-of-labor approach permits. That is, if a person focuses on doing a particular type of task, they can become very good at it. There can be disadvantages as well as advantages in this. A person who does only one type of task may become reliant on others to perform other necessary tasks, and thereby lose (or not develop) the ability to work independently.
In order for the division-of-labor approach to be successful, it is important to have precisely defined and well-understood demarcations between the various subtasks. In other words, there should be substantially no overlap between what each person does, so that there can be no opportunity for confusion to arise as to who does what, or interference to take place in each other’s activities. If enough people are involved in the task, the need for some kind of formal management arises. Teamwork in scientific research is discussed in Ref. [4]. A useful overview of project management can be found in Ref. [26]. In experimental work, an advantage of having a number of people involved in a research project is that it can reduce the possibility that the conclusions derived from an experiment will be influenced by subconscious biases [4]. (Such biases are a serious danger in scientific research – see page 540.)
1.3.4.4 Appointing a coordinator for a problem area It may be helpful, when particular types of problems keep recurring (e.g. vacuum leaks), to have a central person to collect and organize information on: (a) what approaches successfully avoided the problems, (b) what approaches did not, (c) what were the successful methods for dealing with the problems, (d) which ones were not, and (e) the reasons for the successes and failures. New plans can be submitted to the coordinator so that they can make comments on these based on the accumulated experience. This approach has proven successful in the field of electronics [27].
1.3.4.5 Problems with communal equipment Shared apparatus that is nobody’s responsibility almost invariably becomes degraded fairly quickly. A single person should always be in charge of, and responsible for, a given piece of equipment at any given time. This responsibility can be changed on a regular (e.g. a weekly) basis, if necessary. If the apparatus is very complicated, responsibility can be subdivided – as long as the areas of responsibility are clearly defined and well understood. Small communal items, such as: tools, electronic devices (e.g. multimeters), and parts (e.g. O-ring seals), often go astray. Looking for these or obtaining new ones is often the cause of a very considerable amount of unnecessary effort. The amount of time that is wasted looking for lost or “borrowed” tools, or other small but essential laboratory items, can be unbelievable to those who have not experienced it first hand. It is fairly common to see research workers spending a large fraction of their day trying to find things. Furthermore, the absence of the correct tools or other items, when they are needed, is a source of frustration and an incentive for risky improvisation [15]. For instance, if a wrench that is needed to install some bolts on a vacuum flange is not available, pliers may be used instead. As a result, the bolts may not be tightened adequately, and a leak may result. Also, the driving surfaces on the bolts may be rounded off, which might make them very difficult to remove later on. It is very important that a laboratory be organized so as to minimize this type of problem. The use of lockable drawers, cabinets, and the like can be very helpful. If it is necessary
to have communal tools, these can be kept on shadow-boards [15]. However, it is probably better if every laboratory worker owns his or her own set of tools.
1.3.4.6 Unauthorized removal of equipment A problem that one sometimes encounters, especially in an anarchic university laboratory environment, is the unauthorized removal of electronic instruments, cables, and other items from an experimental setup. This sort of thing is particularly likely if the apparatus is used only occasionally. Sometimes the equipment just disappears without a trace. At other times, it may be removed and replaced without the knowledge of the person who was originally using it. That person may then find that the instrument has been reconfigured in an unknown way, or that the cabling has been reinstalled in such a fashion as to introduce ground loops or other noise problems. Direct methods of avoiding these difficulties include: (a) bolting the instrument into a rack, possibly using “tamper-resistant screws,”8 (b) bolting cables to a rack, or other fixed object, with cable clips, (c) attaching the instrument to the rack with a computer security cable, of the type used to protect notebook computers, and (d) fitting the instrument with a key-operated on/off switch. Another very important measure is to document the experimental setup, including things that can cause problems if they are disturbed, such as the electrical wiring (labeling the cables helps with this), the configuration of the instruments, etc.
1.3.4.7 Presence of non-research-related people in the laboratory It is a good practice to always be on the lookout for people who are not connected in any way with the research, such as building maintenance or cleaning staff, and to take suitable steps to protect the equipment. For example, one may find that someone has decided to come into the laboratory and carry out maintenance work, without telling anybody and without taking any precautions, which results in delicate apparatus being covered with dust and debris. Such things happen. Generally, building maintenance work in sensitive areas should be done under supervision. This is especially important in cases where work is being done by outside contractors. Alternatively, cleaning staff may come into the laboratory during an experimental run (possibly when no one else is around), and knock some sensitive equipment, again without telling anybody. The disconnection of experimental apparatus from the mains supply, so that vacuum cleaners and the like can be plugged in, has been known to occur. Also, even if apparatus is not disconnected, the switching on and off of vacuum cleaners, floor polishers and other motor-operated devices can produce power-line glitches that may affect instruments plugged into nearby receptacles. Some researchers deal with potential problems of this kind by doing the cleaning themselves.
8 A tamper-resistant screw is a type that can be installed and removed only with a special (not widely available) screwdriver.
It is also a good idea to provide protection for delicate equipment if one is likely to be away from the laboratory for an extended period of time.
1.4 Laboratory procedures and strategies 1.4.1 Record-keeping Leaving a paper trail of activities in research is an essential part of reducing reliability problems, and of correcting such problems when they occur. One should never trust one’s memory with a detail – it must be written down. A lab notebook is essential. The tedium and labor of writing things out is often an impediment to proper documentation. It therefore makes sense to spend some time arranging ways of making the task easier. One such method, in the case of repetitive documentation tasks, is to create standard forms, in which the layout and much of the writing (and any drawing) is already done. The use of multiple-choice entries, where appropriate, can simplify things still further. Forms can also act as checklists for the recording of information. Their use greatly reduces the likelihood that the writing down of essential items will be omitted. Another method of easing documentation tasks is to use rubber stamps. It is possible to get relatively large (e.g. 15 cm × 10 cm) rubber stamps custom-made by various firms. Keeping records is by itself insufficient – they must, of course, also be kept in an accessible and organized condition. Writing things down on small scraps of paper is completely inadequate, and is likely to lead to problems. Some recommendations for creating and maintaining laboratory notebooks are provided in Ref. [4]. The important task of backing up notes is best done by converting these into a digital form, which allows them to be easily duplicated, archived, and merged with other digital documents. Storing information digitally has the additional advantage of reducing the proliferation of paper, which can be an impediment to proper record keeping and information retrieval. Furthermore, with the aid of some character recognition software, handwritten information in digital form can be converted into text, which allows it to be searched by using keywords. 
Normally, the conversion of writing on paper into a digital form is done by using a scanner. Large amounts of time are often wasted because information about the quirks, pathologies and status of equipment is not passed down to successive users. For this reason, it is a good idea to maintain logbooks for complex pieces of apparatus, such as large vacuum systems, which are used by different people. Logbook records can be indispensable in the diagnosis and repair of equipment faults, and also in preventing errors due to ignorance of latent conditions. Logbooks can include information such as: (a) the nature of activities being carried out using the equipment, (b) day-to-day basic operating characteristics (e.g. base pressure for a vacuum system, or base temperature for a cryogenic one), (c) unusual behavior (also indicate how unusual – slightly, very, etc.),
(d) malfunctions (their nature, what happened just before the problem occurred, the cause of the problem, and its solution), (e) routine preventive maintenance (e.g. change of oil in a vacuum pump), and (f) calibration logs for any instruments that are associated with the equipment. A type of record keeping that should not be neglected (but often is) is to tag equipment in a potentially problematic condition. For example, a mechanical vacuum pump that has been drained of oil should be clearly marked as such, with the date and personal contact details also written on the tag. If the problem is intermittent, this should also be recorded.
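Items (a)–(f) above map naturally onto a structured record, which also makes old entries searchable when a fault recurs. The following is a sketch only, with field names and sample entries invented for the illustration.

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    """One logbook record for a shared piece of apparatus; the fields
    mirror items (a)-(f) in the text and are only illustrative."""
    day: str
    operator: str
    activity: str = ""           # (a) what the equipment was used for
    operating_figures: str = ""  # (b) e.g. base pressure or base temperature
    unusual_behavior: str = ""   # (c) note how unusual: slightly, very, etc.
    malfunction: str = ""        # (d) nature, precursor events, cause, fix
    maintenance: str = ""        # (e) e.g. change of oil in a vacuum pump
    calibration: str = ""        # (f) notes for attached instruments

def entries_mentioning(entries, keyword):
    """Search all fields of all entries for a keyword - handy when
    diagnosing a fault that may have been seen before."""
    keyword = keyword.lower()
    return [e for e in entries
            if any(keyword in str(v).lower() for v in vars(e).values())]
```

Entries kept this way can be appended to a plain text or spreadsheet file, and a recurring symptom can be traced in seconds rather than by leafing through a paper logbook.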
1.4.2 Maintenance and calibration of equipment

Claims are sometimes made that a particular type of equipment is very reliable, as long as it is regularly maintained. However, the act of maintenance may itself be unreliable. Furthermore, in a university laboratory environment, preventive maintenance tends to be sporadic at best. Reliability is usually enhanced by selecting equipment that requires little or no regular preventive maintenance [1].

The regular calibration of instruments and sensors is a very important task in laboratories that carry out absolute measurements. It is also often neglected, and the same considerations apply. For instance, some instruments have an auto-calibration capability. Although this does not eliminate the need for such devices to be calibrated at a metrology laboratory, it will increase accuracies in-between calibrations. Calibration is discussed in depth in Ref. [28].

Some types of apparatus (especially mechanical equipment that undergoes wear, corrosion, or fatigue) require regular preventive maintenance. Mechanical vacuum pumps, air compressors, and water-cooling systems are a few examples. In general, purely electronic devices do not require scheduled maintenance, as long as they are protected from environments that may produce corrosion, or create dust, etc. High-voltage equipment and devices often require regular cleaning, and may also need other types of servicing (see page 384). Also, it may be desirable to replace cooling fans and electrolytic capacitors in some electronic equipment from time to time (see pages 400 and 396).

Except for simple tasks like replacing oil or changing a seal, it is not normally a good idea for research workers to get directly involved in the repair of commercial equipment. In this matter it is very important to be aware of one’s limitations. What may be, for an expert, a very simple problem, could be the makings of a disaster in the hands of an inexperienced person.
The manufacturer should be called without hesitation if there is any doubt. Nevertheless, well-designed equipment is often made so that at least some important maintenance tasks are sufficiently simple that users can do them without difficulty.
1.4.3 Troubleshooting equipment and software

Some useful guidelines for debugging defective apparatus and software are as follows [29].

(a) Before doing anything else, find out how the system works (e.g. carefully read the manuals – see below).
(b) Make the system fail (this is primarily of relevance for intermittent faults – see below).
(c) Don’t repair anything on the basis of assumptions about the cause of the problem (which often results in things being “fixed” that are not faulty) – find the actual cause by inspecting and testing the system. Consider using test instruments such as oscilloscopes, leak detectors, cable testers, spectrum analyzers, etc.
(d) Divide a complex system into two subsections, and find out which of the two is the source of the problem. Take that and repeat the process, as necessary, until the exact location of the problem is found (see below).
(e) Change only one thing at a time.
(f) Write down the events that led up to the problem (equipment logbooks and lab notebooks are useful here), and keep a written record of what was done and what happened during debugging efforts.
(g) Remember to investigate simple problems that are easy to repair (e.g. a blown fuse).
(h) Don’t be afraid to ask for help from experienced people, who, in addition to providing their knowledge, may be able to offer a fresh view concerning the problem.
(i) Just about everyone is willing to believe that faults (especially intermittent ones) can permanently disappear of their own accord, without human intervention – but this is virtually never the case. If a fault cannot be reproduced, yet no one has repaired the system, it is safe to presume that the problem will reappear eventually.

Abrupt faults in experimental equipment frequently arise, not because of spontaneous failure of the equipment, but as a direct result of some human action. Hence, if something suddenly goes wrong with an apparatus, it is often useful to try and remember the last things that were done before the problem occurred.

Some of the points in the above list are discussed in more detail below.

(a) The failure to thoroughly read and understand equipment manuals is one of the most frequent causes of problems in experimental work. (Indeed, scientific articles with seriously erroneous data have been published as a result of this.)
It is also common for people to neglect this task before troubleshooting their apparatus. Somewhere in a manual, perhaps in a section on operating procedures or a list of specifications, might be clues as to why an experimental setup is not working correctly. Manuals often have sections that are specifically devoted to troubleshooting. This issue is discussed further on page 124. (b) Intermittent failures can be very troublesome, because while the system is working correctly, it is difficult to track down the source of the failure. In such cases, it may be desirable to take steps to cause the failure to manifest itself. If it is suspected that the failure is the result of some part of the system being subjected to certain stresses or environmental conditions, one can try to create these artificially. This is done by creating mechanical forces, raising and lowering the temperature, increasing voltage levels, etc. The nature of the stress depends on the problem. In the case of a suspect electrical contact, one can flex the conductor or raise and lower its temperature in order to stimulate an open circuit. If the problem concerns a cryogenic vacuum leak, repeatedly cycling the temperature between cryogenic and ambient values may cause the leak path to open up. Of course, in doing this one must be sure that healthy parts of the apparatus are not in danger of being damaged. However, even very severe measures
may be acceptable if other methods of finding the fault have been tried without success. The principle is also useful in finding faults in software – by, for example, giving a program out-of-range data.

If an intermittent failure is thought to be caused by external conditions that are difficult to create artificially, a useful approach for locating the source of the problem involves trying to correlate it with the suspect phenomenon. The latter might include changes in humidity, background sounds, electromagnetic fields, and so on. For example, one may find that the sudden appearance of a vibration-related disturbance in an experiment is associated with the sounds produced by some distant heavy machinery. Another example is electrical noise in a sensitive instrument, which is believed to be caused by electromagnetic interference. It may be found that this noise is correlated with the faint click produced by a nearby thermostat. (Faulty thermostats can produce radiofrequency interference.) In some cases the use of special instrumentation, which allows one to examine a number of different parameters over long periods, may be helpful.

(d) A common-sense sequential search through all the elemental subsections (components) of a complex system, in order to locate the cause of a problem, can take a long time. The number of checks (determining whether or not a given subsection is faulty) that may have to be made using this approach is (N − 1), where N is the number of components. The binary searching strategy described above can be much more efficient. To reiterate, one determines in which of two halves of a system the fault is located. One then takes the faulty half, and looks at each of its two halves, and so on, until the faulty component is found. In this case (assuming, for the sake of simplicity, that N = 2^n, where n is an integer), the number of checks that are required to locate the cause of a problem is log₂(N).
This is likely to represent a vast improvement over a sequential search if N is large. This principle is used in the binary search computer algorithm to search efficiently through long lists of ordered data.

(h) When one is troubleshooting a problem, it can be easy to become bogged down by one’s own theories about what is causing it. It is often useful to bring in someone else (preferably with knowledge and experience in the problem area) who can look at the situation from a completely fresh viewpoint.

In certain cases it may be more efficient to abandon some faulty thing and start from scratch, rather than try to find the cause of the problem and make a repair. This is especially true in the case of intermittent faults, where locating the problem is often extremely time consuming. The modularization of apparatus and software can be a very useful asset in these situations. If the cause of the intermittent fault can be localized to a particular module, it may be possible to solve the problem with little effort by replacing the module with a known working one. An example of this is the presence of a cold leak in a cryogenic vacuum chamber (see page 175). In this situation, it may sometimes be best to replace the entire chamber, rather than try to precisely locate the leak. Another example is the case of software that has become corrupted. In such a situation (and especially since the cost is negligible) the best approach may be to just reload the software onto the computer, rather than try to determine what has gone wrong. These are instances of a
useful general principle (see, e.g., Ref. [30]), that one should use overkill to solve “stupid problems.” Intermittent problems are discussed on page 60. The testing and debugging of computer programs is dealt with on page 523.
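The halving strategy of point (d) can be sketched in a few lines of code. (Python is used here purely for illustration; the function name `locate_fault`, the `stage-` labels, and the single-fault assumption are constructs of this sketch, not from the text. The health check stands in for whatever test distinguishes a working subsection from a faulty one.)

```python
def locate_fault(components, is_healthy_up_to):
    """Binary search for the first faulty component in a chain.

    `is_healthy_up_to(i)` is assumed to report True if the subsystem
    consisting of components[0..i] works correctly.  A single fault is
    assumed, so every subsystem containing the faulty component fails.
    """
    lo, hi = 0, len(components) - 1
    checks = 0
    while lo < hi:
        mid = (lo + hi) // 2
        checks += 1
        if is_healthy_up_to(mid):
            lo = mid + 1   # fault lies in the later half
        else:
            hi = mid       # fault lies in this half (possibly at mid itself)
    return components[lo], checks

# Hypothetical example: a 16-stage signal chain with a fault at stage 11.
chain = [f"stage-{i}" for i in range(16)]
faulty = 11
result, n_checks = locate_fault(chain, lambda i: i < faulty)
print(result, n_checks)  # → stage-11 4
```

For N = 16 the fault is found in log₂(16) = 4 checks rather than up to 15 sequential ones; for a 1024-component system the saving is correspondingly larger (10 checks instead of up to 1023).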
1.5 Reliability of information

When assessing the reliability of some information, there are a number of things that one normally looks for.

(a) Details. Even without further investigation, the presence of numerous details suggests that the information might be valid, because such details can be immediately checked for such things as internal consistency, and consistency with other information.
(b) Clarity of presentation. If the information is presented in a very clear and precise way, this immediately suggests that it might be valid, because any internal and external contradictions would then be hard to conceal. On the other hand, vague and imprecise information is not subject to hard confirmation or refutation, and so it may not be easy to determine whether or not it is correct.
(c) Internal consistency.
(d) Consistency with other knowledge. This requirement can be a subtle one. For example, information (data in particular) can sometimes be “too perfect” – that is, unreasonably close to the predictions of a theory, given the multitudinous potential sources of error seen in other, similar, experiments. This would suggest that the information had been distorted, either consciously or unconsciously, by its originators.
(e) Multiplicity of independent sources. This is a form of redundancy. One gives more credence to information that is supported by two or more independent sources than that which comes from only one source. However, one has to be on the lookout for “common mode failures,” which in this case might involve, for example, different research groups using the same (perhaps erroneous) computer algorithm to perform a calculation.
(f) Written vs. spoken information. Normally, more credence is given to written information than to that which is given only verbally. This is because one generally has to be more careful in writing information down, since then it is more difficult to deny that one has provided it.
Writing things down also takes more effort than speaking, and so such things are less likely to be merely casual remarks. Written information can also be accompanied by tables, graphs, and references, which provides further support for its veracity. Published information that is subjected to some type of editorial control is given considerably more trust than other written information. In the case of refereed publications, the information in question is also reviewed by one or more experts before being accepted for publication.
(g) Direct vs. indirect information. Every time information is transcribed by hand, or passed from one person to another by word of mouth, it tends to degrade at least a little. Even the act of paraphrasing a written sentence can easily change its meaning. The total degradation presumably has some sort of exponential dependence on the number of stages in the chain. Passage of information through even a small number of stages can result in severe degradation. For this reason, one tends to trust information more the closer it is to its source.
(h) Old vs. new information. Information that has been around for a long time, particularly if it has been open to public inspection and comment, is generally more likely to be correct than information that is completely new.
(i) Proximity to the center of a field of knowledge. Information that is closer to the center of an area of knowledge is more likely to be correct than that which is near its periphery. This is because knowledge at the center is likely to be older and better established, and also better supported by neighboring knowledge than that at the boundaries. The principle also applies to individuals – the part of a person’s knowledge that is closer to the center of his or her interest and experience is more likely to be correct than that which is near the periphery. In general, it applies to experimental results as well – data that have been taken closer to the center of the range of a physical parameter (e.g. temperature or pressure) are more likely to be correct than data that are recorded at the ends of the range.
(j) Integrity and competence of the information source. Normally when one acquires some information, attention is given to the reputation of its source. If the information has been published in a journal, the integrity and competence of its editors and possibly its referees may also be considered.
One also looks to see if the information has been embedded in a background of other information that is known to be reliable. For example, a journal or website should have a history of providing trustworthy information.
(k) What are the biases of the source? This can be a very difficult issue to assess. Nevertheless, it may be useful to ask whether the information source has something to gain or lose by providing correct information, and something to lose or gain by providing incorrect information. Here, the considerations are very similar to the ones discussed on page 13 (“Overconfidence”).

In areas where judgment is needed, as well as in situations where seemingly mechanical manipulation of information is required, subconscious bias can influence a result. This is frequently a problem in experimental investigations, and even in theoretical work involving complex calculations. Pertinent facts can be ignored or downplayed, and unimportant ones may be exaggerated. It is often the case that a given piece of information is not actually falsified, but its importance is modified. In cases where numerous facts combine to form “the truth,” such shifts in emphasis can end up changing it. The subject of subconscious biases in experimental work is discussed further on page 540.

Some of these points have been nicely summarized by T. H. Huxley with an allegory, in his book on the philosopher David Hume [31].
But when we turn from the question of the possibility of miracles, however they may be defined, in the abstract, to that respecting the grounds upon which we are justified in believing any particular miracle, Hume’s arguments have a very different value, for they resolve themselves into a simple statement of the dictates of common sense – which may be expressed in this canon: the more a statement of fact conflicts with previous experience, the more complete must be the evidence which is to justify us in believing it. It is upon this principle that every one carries out the business of common life. If a man tells me he saw a piebald horse in Piccadilly, I believe him without hesitation. The thing itself is likely enough, and there is no imaginable motive for his deceiving me. But if the same person tells me he observed a zebra there, I might hesitate a little about accepting his testimony, unless I were well satisfied, not only as to his previous acquaintance with zebras, but as to his powers and opportunities of observation in the present case. If, however, my informant assured me that he beheld a centaur trotting down that famous thoroughfare, I should emphatically decline to credit his statement; and this even if he were the most saintly of men and ready to suffer martyrdom in support of his belief. In such a case, I could, of course, entertain no doubt of the good faith of the witness; it would be only his competency, which unfortunately has very little to do with good faith or intensity of conviction, which I should presume to call in question. Indeed, I hardly know what testimony would satisfy me of the existence of a live centaur. To put an extreme case, suppose the late Johannes Müller, of Berlin, the greatest anatomist and physiologist among my contemporaries, had barely affirmed he had seen a live centaur, I should certainly have been staggered by the weight of an assertion coming from such an authority.
But I could have got no further than a suspension of judgment. For, on the whole, it would have been more probable that even he had fallen into some error of interpretation of the facts which came under his observation, than that such an animal as a centaur really existed. And nothing short of a careful monograph, by a highly competent investigator, accompanied by figures and measurements of all the most important parts of a centaur, put forth under circumstances which could leave no doubt that falsification or misinterpretation would meet with immediate exposure, could possibly enable a man of science to feel that he acted conscientiously, in expressing his belief in the existence of a centaur on the evidence of testimony. This hesitation about admitting the existence of such an animal as a centaur, be it observed, does not deserve reproach, as skepticism, but moderate praise, as mere scientific good faith. It need not imply, and it does not, so far as I am concerned, any a priori hypothesis that a centaur is an impossible animal; or, that his existence, if he did exist, would violate the laws of nature. Indubitably, the organization of a centaur presents a variety of practical difficulties to an anatomist and physiologist; and a good many of those generalizations of our present experience, which we are pleased to call laws of nature, would be upset by the appearance of such an animal, so that we should have to frame new laws to cover our extended experience. Every wise man will admit that the possibilities of nature are infinite, and include centaurs; but he will not the less feel it his duty to hold fast, for the present, by the dictum of Lucretius, “Nam certe ex vivo Centauri non fit imago,”9 and to cast the entire burthen of proof, that centaurs exist, on the shoulders of those who ask him to believe the statement.
9 “For assuredly the image of a Centaur is not formed from a living Centaur . . . ” – from: De Rerum Natura (“On the nature of things”), Book 4. Translated by J. S. Watson.
Further reading

A good general treatment of the reliability of hardware (electronics, mechanical components and systems, etc.) and software, from an engineering point of view, can be found in Ref. [1]. Human error, and ways of reducing it, is discussed in Refs. [13] and [15]. (The former in particular is a classic on the subject.) Some of the points examined in this chapter are well treated in Ref. [4]. (For this and other reasons, this book is highly recommended.) Reference [29] is a useful general guide to the systematic debugging of equipment and software.
Summary of some important points

1.2 Central points

The following are some basic qualities that are of value whenever reliability is an issue:

(a) simplicity (keeping things simple),
(b) redundancy (having duplicate components or systems, which allows apparatus or software to keep working if one of the components or systems fails) – tradeoffs must be made with simplicity,
(c) margins of safety (not operating components or equipment too close to tolerable limits of pressure, voltage, temperature, etc.),
(d) modularity (dividing complicated things into simple independent components),
(e) making small incremental improvements, and
(f) conservatism (using things that have a long history of working reliably).
1.3.1 General methods and habits

(a) Read the literature (avoid “reinventing the wheel”).
(b) Review the state of the art and capabilities in any given area from time to time.
(c) In research, even seemingly minor and mundane technical details can be extremely important.
(d) Improvisation often leads to reliability problems, although it is sometimes necessary in the early developmental stages of a research project.
1.3.2 Some data on human error

(a) Human error is often the most important direct cause of reliability problems.
(b) The omission of steps in a task, and particularly of functionally isolated steps, is the most common form of human error.
(c) Another common form of error is a failure to consider latent conditions in a system, which could combine in the future to cause a failure.
(d) Absent or low-quality procedures and documentation (e.g. equipment manuals) are the most common cause of human error.
1.3.3 Some ways of reducing human error

(a) With regard to causing errors, the effects of fatigue are similar to those of drinking alcohol.
(b) Working for more than 12 hours, or not having a good nighttime sleep during the past 24 hours, is likely to cause fatigue problems.
(c) Mentally rehearsing a procedure before carrying it out can be very useful in minimizing unforeseen difficulties.
(d) Plans (and other written items, such as scientific papers) can often be greatly improved by placing them to one side after initially preparing them, and then having another look at them some weeks or months later.
(e) Automate routine, repetitive tasks wherever possible. Semiautomatic schemes are often helpful, and may be more feasible than fully automatic ones.
(f) Make liberal use of checklists in order to avoid omission errors – especially in the case of important laboratory procedures that involve a significant number of steps.
(g) Avoid using procedures that conflict with firmly established habits.
(h) Tasks should be simplified so as to minimize the load on vulnerable mental processes such as planning, problem solving, and working memory.
(i) Generally, arrange things so that it is convenient and natural for people to do the right things, and difficult and unnatural for them to do the wrong ones. Use physical constraints to prevent errors.
(j) Assume that errors will occur, and plan for this. Try to arrange things so that operations are reversible, and any irreversible operations are difficult to do.
(k) Make sure that procedures for difficult tasks and equipment manuals are readily available. It is helpful to place digital versions of these on local websites, preferably in PDF format.
(l) Physical and visual access restrictions to items in experimental apparatus (e.g. vacuum flanges) often lead to faults, and can impede necessary testing and maintenance.
(m) Manual entry of written or printed information into computers is often done inaccurately – information that is to be entered into a computer should be obtained in digital form, if possible.
1.3.4 Interpersonal and organizational issues

(a) Communication failures often take the form of unspoken assumptions that are not subsequently confirmed by communication between the parties involved.
(b) Laboratory workers are more likely to avoid following standard procedures if the procedures are inappropriate, clumsy, or slow.
(c) Any such reluctance to follow procedures is best reduced by improving the procedures, if possible, preferably with worker participation.
(d) Division of labor (increasing specialization) can improve reliability and speed – subtasks should be precisely defined, with well-understood demarcations (little or no overlap between subtasks).
(e) Consider appointing a knowledge-coordinator for a problem area (e.g. vacuum leaks), who can provide information on how problems may be avoided and solved.
(f) One person should always be in charge of a piece of apparatus (avoid truly communal equipment).
(g) The absence of the correct tools when they are needed is a very common cause of wasted time, frustration and risky improvisation – laboratories should be organized to minimize this type of problem.
(h) The presence of non-research-related people in the laboratory (especially those working for outside contractors) can be a source of problems (damage to equipment, disturbance of experiments, etc.).
1.4.1 Record-keeping

(a) Keeping records of laboratory activities is very important in reducing reliability problems.
(b) One should never trust one’s memory with a detail – it must be written down.
(c) Make it easy to create documentation (e.g. lab notes, logbooks, etc.) by using preprepared forms, multiple-choice entry schemes, or other aids.
(d) Make sure that the things (apparatus and software) created by short-term research workers are properly documented.
(e) Logbooks should be maintained for complex equipment, such as large vacuum systems, which are used by different people in succession.
1.4.2 Maintenance and calibration of equipment

Use things that require little or no regular preventive maintenance, since the act of maintenance itself can cause problems, and also because (at least in university research environments) maintenance is often sporadic at best.
1.4.3 Troubleshooting equipment and software

(a) If something goes wrong, try to remember the last thing that was done before the problem occurred.
(b) A binary approach to locating problems, in which the problem is isolated to one half of the system, and then half of the remainder, etc., is often useful.
(c) For intermittent problems, one can try to make the item fail by applying a “stress” (mechanical stress, heat, etc. – depending on the problem).
(d) Also, try correlating the occurrence of the problem with some phenomena (e.g. changes in ambient temperature, the detection of a sound, etc.).
1.5 Reliability of information

When one is assessing the reliability of some information, one generally looks at the following things.

(a) Presence of details.
(b) Clarity of presentation of the information.
(c) Internal consistency.
(d) Consistency with what is already known.
(e) Support for the information by more than one independent source.
(f) Is the information in written (and especially published) form, rather than just spoken?
(g) Does the information come directly from its source, or indirectly through one or more intermediaries?
(h) Has the information been around for a long time (and open to public scrutiny and comment)?
(i) Is the information near the center of a field of knowledge, or at its boundaries?
(j) Integrity and competence of the information source.
(k) Does the source have conscious or unconscious biases?
References

1. P. D. T. O’Connor, Practical Reliability Engineering, 4th edn, Wiley, 2002.
2. N. Butterfield, in Space Vehicle Mechanisms: Elements of Successful Design, Peter L. Conley (ed.), Wiley, 1998.
3. N. R. Augustine, Augustine’s Laws, 6th edn, American Institute of Aeronautics & Astronautics, 1997.
4. E. Bright Wilson, Jr., An Introduction to Scientific Research, Dover, 1990. Except for some minor modifications, this is a republication of a work that was originally published by McGraw-Hill in 1952.
5. G. J. Dienes and D. O. Welch, Phys. Rev. Lett. 59, 843 (1987).
6. ProQuest LLC. www.umi.com
7. scholar.google.com
8. www.print.google.com
9. The Risks Digest, Vol. 5, Issue 73, 13 December 1987. catless.ncl.ac.uk/risks
10. CERN, The latest from the LHC (30-01-2009). cdsweb.cern.ch/journal/
11. F. Bertinelli, P. Borowiec, D. Bozzini, et al., The quality control of the LHC continuous cryostat interconnections, Large Hadron Collider Project, LHC Project Report 1131, 20 August 2008. cdsweb.cern.ch/record/1123726/files/LHC-PROJECT-REPORT-1131.pdf
12. A. E. Green and A. J. Bourne, Reliability Technology, Wiley-Interscience, 1972.
13. J. Reason, Human Error, Cambridge University Press, 1990.
14. J. Rasmussen, What can be learned from human error reports? in Changes in Working Life, K. Duncan, M. Gruneberg and D. Wallis (eds.), Wiley, 1980.
15. J. Reason and A. Hobbs, Managing Maintenance Error: a Practical Guide, Ashgate, 2003.
16. F. S. Acton, REAL Computing Made Real: Preventing Errors in Scientific and Engineering Calculations, Princeton University Press, 1996. (This book is mostly about numerical errors, not human error.)
17. ECSS Secretariat, ESA-ESTEC Requirements & Standards Division; Space product assurance: The manual soldering of high-reliability electrical connections (ECSS-Q-70-08A), ESA Publications Division, 1999.
18. European Standard EN ISO 9241-6:1999, Ergonomic requirements for office work with visual display terminals (VDTs) – Part 6: Guidance on the work environment, CEN, 1999.
19. http://www.epa.gov/iaq/pubs/insidest.html#Intro1
20. R. S. Bridger, Introduction to Ergonomics, 2nd edn, Taylor and Francis, 2003.
21. S. Suhring, Proceedings of the 2003 Particle Accelerator Conference (IEEE Cat. No. 03CH37423), Part Vol. 1, pp. 625–629, IEEE, 2003.
22. B. Betts, IEEE Spectrum 43, No. 4, p. 50, April 2006.
23. J. M. Kolyer and D. E. Watson, ESD From A to Z, Kluwer Academic Publishers, 1996.
24. D. Carnegie, How to Win Friends and Influence People, revised edn, Simon and Schuster, 1981. Despite the somewhat manipulative tone of the title (the book was originally written in 1936, and the title’s connotations have changed since then), this work emphasizes achieving these goals by taking a sincere positive interest in other people.
25. The classic work on this topic is Adam Smith’s Wealth of Nations.
26. A. M. Cruise, J. A. Bowles, T. J. Patrick, and C. V. Goodall, Principles of Space Instrument Design, Cambridge University Press, 1998.
27. R. A. Pease, Troubleshooting Analog Circuits, Newnes, 1991.
28. R. Pettit, in The Industrial Electronics Handbook, J. D. Irwin (ed.), CRC Press, 1997.
29. D. J. Agans, Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems, AMACOM, 2002.
30. P. C. D. Hobbs, Building Electro-Optical Systems: Making It All Work, John Wiley and Sons, 2000.
31. T. H. Huxley, Hume, Macmillan and Co., 1879.
2 Mathematical calculations
2.1 Introduction

In books on mathematical methods in physics, there is a very frequent tendency to ignore the subject of errors, and what can be done to prevent them. In conversations on this topic, two common points of view are: (1) error prevention is an ability that is acquired through practice (i.e. it is not something that is taught explicitly), and (2) one just has to be careful. While both viewpoints contain elements of truth, it is also true that techniques exist for preventing and detecting errors. Furthermore, these can be passed on explicitly like other skills (i.e. they do not have to be learned through hard experience). Such techniques are the subject of the present chapter. These are mostly concerned with the prevention of errors in symbolic (i.e. algebraic) calculations, rather than numerical ones, unless otherwise indicated.
2.2 Sources and kinds of error
2.2.1 Conceptual problems
The first, and most subtle, type of error in analysis in general arises from conceptual problems: understanding the essential physics of a problem and expressing it in mathematical form.
2.2.2 Transcription errors
A second very common type, frequently encountered when calculations are done by hand, is transcription errors. These occur when formulae and numbers are copied from one line in the calculation to the next, often taking place when the handwriting is untidy and cramped. They also arise very frequently if more than one mathematical operation is attempted per line in the calculation. This is because such actions place a burden on working memory, which is a vulnerable cognitive process [1]. Transcription errors also occur on a very regular basis when information is being entered into a computer by hand.
2.2.3 Errors in technique
A third source of mathematical errors is the forgetting or ignoring of rules concerning the use of mathematical operations. This can, of course, happen in a multitude of ways.
Among the more subtle examples are those arising from the presence of discontinuities in functions. For example, one may blithely employ the Newton–Leibniz Formula¹ when integrating a function over a region containing an infinite discontinuity, without realizing that this formula may not be applicable under such a condition (or perhaps not being aware of the existence of the discontinuity). An example of such an operation is

$$\int_{-1}^{1} \frac{1}{x^2}\,dx. \qquad (2.1)$$
If the limits 1 and −1 are used directly in the Newton–Leibniz Formula, the above integral appears to be well defined and finite (namely −2). However, this is a nonsensical result, since the value of the integrand is positive everywhere on the interval [−1,1]. By breaking the integral into two parts, covering the intervals [−1,0] and [0,1], it can readily be seen that it is actually non-convergent. (This is a simple example, and the presence of a discontinuity in the integrand at x = 0 is clear. However, it is common to run across functions for which the presence and locations of discontinuities are not so obvious.) Working with functions in the complex plane, without taking account of the presence of branch cuts in these, can also lead to errors of this type. One can also run into problems by not taking into account the range of validity of a particular mathematical transformation. For example [2], an operation may be carried out that is valid only for real numbers, but is subsequently used with complex parameters. When equations are being solved, it is possible to take steps that result in the introduction of spurious solutions. For example [2], if one squares both sides of the equation $x^{1/2} = 1 - x$, the resulting quadratic will have two solutions, and only one of these satisfies the original equation. Another example is the term-by-term differentiation of an asymptotic series, which is not permitted in general [3]. The injudicious use of asymptotic series can frequently cause problems in other ways as well. For example, the sum of a divergent series (including most asymptotic series) is not uniquely determined, so that one must take care when doing this [3]. One problem that appears trivial, but which seems to result in numerous errors, is unit conversion (e.g. changing between SI and cgs units). In fact, this is the most frequent cause of errors in engineering calculations [4].
Experience suggests that such problems are a common source of mistakes in physics as well. A historical example, involving a comparison between measured and theoretically predicted values for the pressure of light, can be found in Ref. [5]. The error described there, which was one of several in a calculation, consisted of omitting a necessary unit conversion. However, it is also common to do such conversions, but in the wrong way. (Reference [4] discusses this issue in detail.) For these reasons, it is very important to use consistent units in calculations. Major errors are likely to result if a mixture of different units is employed in a calculation, unless special attention is given to unit conversion.

¹ The Fundamental Theorem of Calculus states: if f(x) is integrable on the interval [a,b] and F(x) is any antiderivative of f(x) on this interval, then $\int_a^b f(x)\,dx = F(b) - F(a)$. This is sometimes called the Newton–Leibniz Formula.
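One low-tech defense is to convert everything to a single unit system at the boundary of a calculation, with each conversion factor given an explicit, named constant. A minimal Python sketch of the habit (the constants are the standard SI/cgs factors; the function names are purely illustrative):

```python
# Keep one unit system (SI) internally; convert only at the boundaries,
# using named constants so every conversion is visible and checkable.
ERG_PER_JOULE = 1.0e7    # 1 J = 1e7 erg
DYN_PER_NEWTON = 1.0e5   # 1 N = 1e5 dyn
CM_PER_M = 1.0e2

def erg_to_joule(energy_erg):
    return energy_erg / ERG_PER_JOULE

def dyn_per_cm2_to_pascal(pressure_cgs):
    # 1 dyn/cm^2 = (1e-5 N) / (1e-4 m^2) = 0.1 Pa
    return pressure_cgs / DYN_PER_NEWTON * CM_PER_M**2

print(erg_to_joule(3.0e7))          # 3e7 erg is 3 J
print(dyn_per_cm2_to_pascal(10.0))  # 10 dyn/cm^2 is 1 Pa
```

The point is not the arithmetic, which is trivial, but that every crossing between unit systems happens in exactly one, named, checkable place.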
2.2.4 Errors caused by subconscious biases
There is a real danger, especially during hand calculations, that subconscious biases may distort the results. For instance, minus signs can be omitted, dimensionless factors may be dropped, and terms in algebraic expressions can be left out, in such a way that the results of a calculation agree with what is expected. Alternatively, during the checking process, the person involved may make a series of corrections to an incorrect calculation, until the result agrees with expectations, and then look no further for errors. Some historical examples of calculation errors caused by subconscious biases are discussed in Ref. [6]. (See also the general discussion of subconscious biases on page 540.)
2.2.5 Errors in published tables
Another potential source of errors is mathematical tables. It is true that tables (of integrals, series, etc.) are not as important as they used to be, owing to the ascendancy of computer algebra (CA) systems. However, the reliability information that has been compiled on them can at least serve to show how easily mistakes can enter calculations – even when they are done by professionals, and reviewed and edited by others with a similar background. In a systematic study of the reliability of eight published integral tables, involving the numerical validation of samples of integrals from each set, surprisingly large error rates were found [7]. Only indefinite integrals were examined in the survey, and integrals of certain types were excluded from it. These included, for example, those whose result was expressed in terms of an unevaluated indefinite integral, or a non-terminating series, or could not be expressed simply in terms of “standard” functions such as rational and irrational functions, trigonometric and exponential functions, etc. A simple random sample of the remaining integrals was made. In the case of one of the tables, the error rate was about 27%, for a sample size that was roughly 9% of the entire table (containing 2329 integrals). A well-known table of integrals, by Gradshteyn and Ryzhik [8], had an error rate of about 8%, using a sample size of about 17% of the table (containing 1115 integrals). One of the tables (by Dwight [9]) had an error rate of only about 0.5%, for a sample size of about 16% of the table (which contains 1153 integrals). In the first two examples, most of the errors (respectively 71% and 86% of the total number of erroneous integrals) were “unambiguously incorrect,” while the remainder were incorrect only for certain values of the parameters or one part of the region of integration.
In the last example (Dwight), the single incorrect integral found was erroneous only for certain values of the parameters or one part of the region of integration. It should be noted that, at least in the case of one of the tables (Gradshteyn and Ryzhik [8]), newer versions have become available since the survey was done. It is said that new entries and corrections in the latest edition have been checked “whenever possible” using symbolic computation [10]. Hence, the situation may have improved. However, incorrect results even from relatively recent integral tables have been identified (involving definite
integrals, in this case) [11]. Hence, it is probably best not to assume that such errors have been eliminated.
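The numerical-validation technique used in the survey is easy to apply oneself before trusting a table entry. A minimal Python sketch, using the standard entry ∫x eˣ dx = (x − 1)eˣ purely as an example: differentiate the claimed antiderivative numerically and compare it with the integrand at a few sample points.

```python
import math

def f(x):            # the integrand, x * e^x
    return x * math.exp(x)

def F(x):            # the tabulated antiderivative to be validated
    return (x - 1.0) * math.exp(x)

# Spot-check: F'(x) should equal f(x) at a handful of sample points.
h = 1.0e-6
ok = all(
    abs((F(x + h) - F(x - h)) / (2.0 * h) - f(x)) < 1.0e-6 * (1.0 + abs(f(x)))
    for x in (-2.0, -0.5, 0.3, 1.7)
)
print("entry consistent with integrand:", ok)
```

A check of this kind cannot prove an entry correct, but it exposes the “unambiguously incorrect” class of errors described above in a few seconds.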
2.2.6 Problems arising from the use of computer algebra systems
The relatively recent development and popularization of computer algebra (CA) software has undoubtedly done much to improve the reliability of symbolic calculations. However (and this is the major point of this section), one should generally not treat calculations done using such systems with the same cavalier attitude that one would normally have in performing ordinary arithmetic operations on a handheld calculator. Analysis² in particular (as opposed to just algebra) is frequently subtle, and requires intelligence. The latter is a quality that computer algebra systems still do not have. Furthermore, such systems can be very complicated – possibly containing hundreds of thousands of lines of computer code, depending on the system. This means that large numbers of software bugs are inevitable. It has been observed that the manuals for most computer algebra systems pay scant attention to the limitations of the programs, and that obtaining incorrect or misleading results may actually occur more frequently than when numerical codes are used (to do non-trivial calculations such as integration, etc.) [12]. Human errors, including the usual typing mistakes, are a very frequent source of problems. According to the creator of one of the systems (Mathematica), the single most common cause of mistakes when using that program is to forget about definitions made earlier in the session [13]. So, for example, if one sets x = 5 at some point during the session, and then later on uses x as if it were undefined, the resulting expressions may be erroneous. This point is presumably a very general one that applies to other CA packages as well. Operations in such systems involving integration can be problematic, partly because of the complexity of the integration subroutines [14], and partly because of the intrinsic subtlety of the integration operation, especially when it is performed on the complex plane.

Definite integration in particular can be tricky. The best-known errors in computer algebra software manifest themselves during the integration of functions containing singularities (including discontinuities) or branch cuts. A review of the capabilities of seven general-purpose CA systems has been published, examining 542 short problems covering a wide range of mathematics [15]. Most of the problems are of a symbolic form. Of these, 27 are definite integrals, ranging in character from the very simple to the somewhat arcane. There is a great deal of variability in the ability of these systems to solve the problems correctly. Nevertheless, the number of mistakes made by some is surprisingly high. One widely used system, when presented with the 27 definite integrals, produced the wrong answer in two cases. One of these involved the evaluation of

$$\int_{-\infty}^{\infty} \frac{5x^3}{1 + x + x^2 + x^3 + x^4}\,dx. \qquad (2.2)$$

² Analysis deals with evaluating limits, the operations of calculus, continuity, and the theory of functions.
Another, less widely used, system delivered incorrect results for eight of the definite integrals tried, including the above example, and the following

$$\int_{-1}^{1} \frac{\sqrt{1-x^2}}{1+x^2}\,dx. \qquad (2.3)$$
(NB: It does not appear as if the 27 definite integrals in the above review were chosen at random, and hence it is not clear how representative they are of definite integrals in general.) Other operations besides integration, involving the simplification of expressions containing branch cuts, can cause problems [16]. The evaluation of limits is yet another frequent source of errors [17]. Further troubles can arise from the implementation of operations with parameters outside the domain of validity, and solving equations in such a way as to introduce spurious solutions (see the remarks made in section 2.2.3). Computer algebra systems allow users to perform very complicated calculations, which may be done in an automatic or semiautomatic way. As such, errors caused by actions like the ones discussed above may be hidden from the user in intermediate results. Even if such results are seen, they may be so complicated that there is no hope of checking them [12]. Other information about problems in CA systems can be found in various internet discussion groups [18].
2.2.7 Errors in numerical calculations
A complete discussion of the errors that can arise in numerical computations is beyond the scope of this book. However, a few remarks will be made. A study has found that although the users of numerical computer software like to imagine that their results are accurate to the numerical precision of the arithmetic used, the errors can in fact be much larger, and proportional to the number of lines of code in the program [19]. This information is contained in a report on the consistency of the results of computations in the field of seismic data processing. The study investigated the numerical disagreement between the output of nine large commercial software packages that implement mathematical algorithms from the same or similar published specifications, with the same input dataset and the same user-specified parameters. After attempting to use the feedback of obvious flaws to reduce the overall disagreement, the authors found that it grew at a rate of 1% in average absolute difference per 4000 lines of implemented code. Moreover, they found that the nature of the disagreement was non-random. Similar fault rates have been reported with software in other disciplines. It is recognized that one of the two most frequent sources of errors in numerical calculations is roundoff [20,21]. This effect can be very important when two approximately equal numbers are subtracted, so that most of the leading digits cancel out. Unless one takes precautions against this possibility, it can happen almost anywhere in a long calculation. The other common source of errors is the use of polynomial formulas for calculation in cases where these are not appropriate [21]. An example of this is the employment of polynomial methods of numerical integration (such as “Simpson’s rule”) in cases where the arguments of the integrals contain singularities or asymptotes. It has been remarked that extra confusion often results when both of these error sources appear in the same problem [21]. Numerical differentiation is sometimes used to analyze experimental data. Unfortunately, the process is inherently error-prone, since it magnifies small amounts of noise in the data. Indeed, even the sign of the derivative can easily be incorrect. For this reason, numerical differentiation should be avoided wherever possible, and used with great care when it is necessary. If derivatives must be computed from data, it is better to smooth the data, using a polynomial that has been fitted by a least squares method, before differentiating. These issues are discussed in detail in Refs. [22] and [23]. The fitting of curves to data can be problematic. In order to be truly useful, a fitting procedure should give not only the fitted parameters and error estimates on these, but also a statistical measure of goodness-of-fit [23]. If the latter value indicates that the fitted curve is an improbable model of the data, the first two items are probably worthless. Many workers never get beyond the level of obtaining the parameters, often with unfortunate results. It has been noted by workers at the National Institute of Standards and Technology (NIST) that although good linear and nonlinear regression routines are available, some of the ones that are used are not effective, and produce incorrect results [24]. A method for testing such routines is discussed in Section 2.4.9. A special problem can arise when a least squares procedure is used to fit curves to data containing erroneous outlying points, which are much further away from the “true” curve than would be expected according to the usual (but not necessarily accurate) Gaussian error model on which the procedure is based.
Such points can cause the procedure to mistakenly skew the entire curve in order to accommodate them. This issue, and others concerning the modeling of data, are discussed in Ref. [23]. Reference [21] gives a particular warning about fitting data with the following function, which is, for example, encountered in studies of radioactive decay:

$$y = A e^{-at} + B e^{-bt}. \qquad (2.4)$$
If one is required to obtain values of all four parameters (a, b, A, B), given a set of data points {ti , yi }, the problem turns out to be extremely ill conditioned. That is, there are many combinations of the parameters that will fit the data very well (possibly to as many as four significant figures), so that the results of such an effort are generally of no value. (On the other hand, if a and b are known, so that one only needs to determine A and B, the resulting problem is well conditioned and relatively easy to do.) Other important sources of error in numerical calculations include (1) extrapolation instabilities, and in particular (2) the growth of unwanted solutions when solving “stiff” differential equations. It must be emphasized that in the case of these problems (as with others in numerical analysis) the simple brute-force methods of decreasing the step-size, increasing the range of calculation, or increasing the precision of the arithmetic used may not be sufficient to overcome the difficulties – more thoughtful approaches are usually needed. These issues are discussed in Ref. [21].
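The ill-conditioning of the two-exponential fit (2.4) is easy to demonstrate numerically. In the following Python sketch (synthetic, noise-free data generated with a = 1, b = 2, A = B = 1; the “wrong” rates tried are arbitrary), the decay rates are deliberately fixed at incorrect values and only the amplitudes are fitted – a linear least-squares problem – yet the data are still reproduced closely:

```python
import math

ts = [5.0 * i / 199 for i in range(200)]
ys = [math.exp(-t) + math.exp(-2.0 * t) for t in ts]   # true: a=1, b=2, A=B=1

def best_amplitudes(a, b):
    # With the decay rates fixed, the optimal A and B solve the 2x2
    # normal equations of the linear least-squares problem.
    u = [math.exp(-a * t) for t in ts]
    v = [math.exp(-b * t) for t in ts]
    suu = sum(x * x for x in u)
    svv = sum(z * z for z in v)
    suv = sum(x * z for x, z in zip(u, v))
    suy = sum(x * y for x, y in zip(u, ys))
    svy = sum(z * y for z, y in zip(v, ys))
    det = suu * svv - suv * suv
    A = (svv * suy - suv * svy) / det
    B = (suu * svy - suv * suy) / det
    misfit = max(abs(A * x + B * z - y) for x, z, y in zip(u, v, ys))
    return A, B, misfit

# Deliberately wrong decay rates still reproduce the data closely:
fits = [(a, b, *best_amplitudes(a, b))
        for a, b in [(0.9, 2.2), (0.8, 1.7), (1.2, 3.0)]]
for a, b, A, B, m in fits:
    print(f"a={a}, b={b}: A={A:.3f}, B={B:.3f}, max misfit={m:.2e}")
```

Quite different parameter sets all track the “data” to within a few percent of its peak value, which is exactly why attempting to recover all four parameters from real (noisy) data is generally hopeless.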
Another problem that arises in certain calculations is the effective creation of random numbers (used, for example, in the Monte Carlo integration algorithm). This is not a trivial issue, and much wasted effort has resulted from the injudicious choice of a random number generator [23]. The generators that have caused problems are often the ones built into common programming languages (for instance, the rand generator, seeded with srand, in C and C++). The creation of random numbers is discussed in Ref. [23]. It is natural to ask whether errors can occur when using ordinary handheld calculators. Errors due to faults in the calculator hardware or software are probably rare, but historically there have been such cases [25,26]. These problems have taken the form of roundoff errors, unreliable linear regression calculations, errors involving the solution of systems of linear equations, and others. On the other hand, mistakes with calculators involving human error are (as usual) very common. Perhaps the most frequent mistake when using handheld calculators is to forget to switch the units of angle between radians and degrees. (This is yet another example of the ubiquitous “omission error”³ – see page 9 [27].) Extensive discussions on reliable numerical methods have been provided in Refs. [21], [23], and [28]. The testing (verification and validation) of numerical computer codes is discussed in Ref. [29]. (Here, “verification” means ensuring that the computer code solves the chosen model of a physical problem correctly, whereas “validation” refers to determining whether the model accurately reflects the physical phenomena.) A good introductory discussion of the problems of ensuring accurate numerical computer codes, particularly very complicated ones, has been written [30].
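The cancellation hazard noted earlier in this section (subtracting two nearly equal numbers) can be reproduced in a few lines. In this Python sketch, the quantity (1 − cos x)/x², whose true value for small x is close to 1/2, is evaluated both naively and via the algebraically identical form 2 sin²(x/2)/x²:

```python
import math

x = 1.0e-8
naive = (1.0 - math.cos(x)) / x**2           # subtracts two nearly equal numbers
stable = 2.0 * math.sin(x / 2.0)**2 / x**2   # algebraically identical rewrite

# The naive form loses essentially every significant digit at this x,
# while the rewritten form stays close to the true value of ~0.5.
print(naive, stable)
```

The lesson is that such subtractions must be rewritten away analytically; simply carrying more digits postpones the problem rather than curing it.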
2.3 Strategies for avoiding errors
2.3.1 Avoiding conceptual difficulties
According to the mathematician David Gregory, in some comments on the work of Isaac Newton: “The best way of overcoming a difficult probleme is to solve it in some particularly easy cases. This gives much light into the general solution. By this way Sir Issac Newton says he overcame the most difficult things.” [31]. Drawing diagrams (see below) is an essential part of the conceptual stage of solving most problems. Furthermore, order-of-magnitude estimates can be very useful in getting oriented (see the discussion in Ref. [32]). A very helpful (and classic) work on the subject of solving mathematical problems, which covers conceptual troubles, has been written by Polya [33].
2.3.2 Use of diagrams
The use of diagrams to guide one’s thinking is probably the most helpful single method for preventing mathematical mistakes, both at the conceptual stage, and that of execution. This includes drawing coordinate systems, sketching equations, drawing physical pictures, and plotting the results. It is also possible to do approximate calculations using graphical means – see page 51. It has been said that during numerical analysis, plotting the equation before selecting an algorithm will prevent more major computational problems than any other action [28]. Sketching a result will normally allow one to spot sign errors, and a good sketch can make it possible to detect erroneous factors of 2. Preliminary geometrical sketches, illustrating physical things or certain relationships, can be done crudely at first. However, these should normally be succeeded by drawings that are created with a certain amount of care, and with parts that are at least roughly to scale. This is in order to avoid distorted appearances that can lead to fallacious reasoning during the subsequent calculations [34]. Such drawings should be created in such a way as not to suggest special circumstances that may not be realized in general. For instance, avoid angles of 45°, if the angles may be otherwise, or lines of equal length, if unequal lengths are possible. It is often useful to draw things from several different perspectives.

³ In this case, it is the omission of a “functionally isolated act,” which is the most common type of omission error.
2.3.3 Notation
The use of a clear and consistent notation is an essential part of solving problems reliably. This issue may seem trivial and unimportant, but errors resulting from the neglect of such details can (and sometimes do) have major consequences. At the most elementary level, one should choose symbols that cannot be confused, either with other symbols or with numbers. So, for example, avoid combinations of v and ν, o and σ, etc.; and try to avoid characters like l (can be confused with the symbol for one), S (can be confused with 5), G (can be confused with 6), etc. Also, it is preferable that symbols have a well-established meaning. When a computer is used to solve problems, one normally has the option of using words as names for variables. This capability should be fully exploited to unambiguously identify the latter. The selection of variable names in computing is discussed on page 519 (see also Ref. [35]). One may also select the notation in such a way as to reduce burdens of manual manipulation during calculations. For example, one may write a quadratic equation as: x² + 2bx + c = 0, in order to eliminate two factors of 2 in the solutions. As has been pointed out, factors of 2 and minus signs can be lost very easily in computations [28].
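The reduced quadratic above can be sketched in a few lines of Python (the function name is illustrative); with the 2b convention the roots are simply −b ± √(b² − c):

```python
import math

def roots_reduced(b, c):
    """Real roots of x**2 + 2*b*x + c = 0, i.e. x = -b +/- sqrt(b*b - c).

    Writing the linear coefficient as 2b removes the factors of 2 (and
    the factor of 4 under the square root) of the usual formula."""
    d = math.sqrt(b * b - c)      # assumes b*b >= c, i.e. real roots
    return (-b + d, -b - d)

print(roots_reduced(1.0, -3.0))   # x**2 + 2x - 3 = 0 has roots 1 and -3
```

Fewer stray constants in the working means fewer opportunities to drop one.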
2.3.4 Keeping things simple
It often pays to ask oneself, before embarking on a long calculation, whether a simpler approach may produce all the information that is needed. For example, it may be sufficient to obtain approximate solutions to problems rather than exact ones, and these may have the additional benefit of providing more insight into the problem. Perhaps even an “order of magnitude” estimate would be perfectly adequate. To take another example, in a problem involving time dependence, perhaps an average or asymptotic solution would be acceptable – and even more useful than the exact time-dependent one.
Also, in some situations, it may not even be necessary to obtain a single-valued solution to a problem, but merely an upper and/or lower bound on its possible value [34]. In other words, it may be sufficient just to know whether the exact solution to a problem would be less than or more than a certain amount. This can result in a considerable simplification of the calculation. (See also the comments on page 51.)
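A toy illustration of bounding, in Python (the integral is chosen arbitrarily): on [0, 1] the integrand exp(−x²) lies between e⁻¹ and 1, so its integral must as well – often all one needs, with no integration performed at all.

```python
import math

# Bound, don't solve: for x in [0, 1], exp(-1) <= exp(-x*x) <= 1,
# so exp(-1) <= integral_0^1 exp(-x*x) dx <= 1.
lower, upper = math.exp(-1.0), 1.0

# Midpoint-rule check that the true value really sits inside the bounds:
n = 10000
approx = sum(math.exp(-(((i + 0.5) / n) ** 2)) for i in range(n)) / n
print(f"{lower:.4f} <= {approx:.4f} <= {upper:.4f}")
```

The numerical check is only there to confirm the bound; in practice the two-line inequality would be the entire “calculation.”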
2.3.5 Use of modularity
As with other technical activities, breaking down complicated calculations into a number of smaller ones that are dealt with in isolation is an effective way of improving reliability – see also the discussion on page 3.
2.3.6 Finding out what is known
Many of the calculations that one needs to do in practice have been done before – sometimes many times. It pays to look at the literature in order to avoid “reinventing the wheel.” Large compendiums of solutions to mathematical problems in physics are available [36]. Of course, as in the case of the integral tables, these may contain mistakes, and should be checked. Misprints are fairly common in published mathematical work in general.
2.3.7 Outsourcing the problem
One doesn’t have to do everything. It may be that some software that does exactly what you want (whether it be analytic or numerical computation) is available as a commercial package. In such cases, all the advantages of a commercial item (e.g. reliability based on experience, technical support, economies of scale, etc.) may, in principle, be realized. In some cases, it may even be worthwhile to outsource the entire problem-solving process. This may be particularly appropriate if the calculation is a “one-shot” affair that makes use of unusual and expensive software, or particular expertise. (A lack of experience in setting up and running a particular piece of software, and interpreting the results, can be an important cause of errors.) On the other hand, a disadvantage of this approach is that one may not really know, or be able to find out, how a problem is being solved. That is, the problem is essentially being solved by a “black box,” with no window on its inner workings. As can be gathered from discussions earlier in this chapter, there is little doubt that there can be problems with commercial software. (For these reasons, and with the exception of linear algebra packages, the author of one book on the subject of errors in calculations states his preference for writing his own software [28].⁴) However, bugs in commercial software need not necessarily be an obstacle if one has a reliable procedure for checking the solutions. One should also keep in mind that new software is notoriously buggy, so it is preferable to employ more mature (second generation or later) versions whenever possible. (See the discussion on page 504.) Another factor should be taken into consideration. There is a widespread belief that “open-source” computer software (for which the source code is in the public domain, and can be inspected and modified by anybody) tends to be more reliable, all other things being equal, than software for which the source code is proprietary. This issue is discussed in more detail on page 506. When borrowing code from other sources, whether it be from the web or books or journals, it is very important to find out what it was intended to do and what its limitations are. Furthermore, the software should always be tested with some problems for which the answers are known. Such tests will usually expose the most blatant defects [28].

⁴ However, that author is an expert in numerical methods. In another work, the same author also advises novices not to write their own routines for solving initial value differential equation problems [21]. Generally, most people, and especially those who only occasionally do numerical calculations, should avoid writing their own programs for doing standard types of calculation (see the discussion on page 514).
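Known-answer testing of borrowed code might look like the following in practice (a Python sketch; the trapezoidal routine here merely stands in for whatever borrowed code is being vetted):

```python
import math

def trapezoid(f, a, b, n=1000):
    """Composite trapezoidal rule -- a stand-in for any borrowed routine."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

# Before trusting the routine, feed it problems with known answers:
assert abs(trapezoid(math.sin, 0.0, math.pi) - 2.0) < 1e-5           # known: 2
assert abs(trapezoid(lambda x: x * x, 0.0, 1.0) - 1.0 / 3.0) < 1e-6  # known: 1/3
print("known-answer checks passed")
```

Checks of this kind catch only blatant defects, as noted above, but they cost almost nothing and should accompany every piece of adopted code.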
2.3.8 Step-by-step checking and correction
One fairly common strategy for doing a sequence of calculations is to complete them all in a bold and cavalier way, with the intention of going back afterwards and correcting any mistakes that may have occurred. Although this practice has analogs in daily life, where the results may be satisfactory, it generally does not work for manual (i.e. pencil and paper) calculations. The best method is to make sure that each step is correct before moving on to the next one. The situation is less severe when computer algebra systems are being used. In particular, some systems make it possible to go back and correct mistakes (such as incorrectly entered formulae) in a sequence of calculations, and automatically recalculate all subsequent steps. Nevertheless, the principle of checking and correcting at each stage is still an important one.
2.3.9 Substitution of numbers for variables
Certain checking operations (see Section 2.4) are possible only if the variables are left unspecified – information about the solution is lost when numbers are combined with each other by arithmetic operations. Hence, the substitution of numbers for variables should be left until the very end of the calculation, when everything, including the checking of the solution, has been completed.
2.3.10 Practices for manual calculations
As noted earlier, untidy and cramped handwritten calculations tend to result in numerous mistakes. Hence, one should ensure that the mathematics is neatly written and liberally spaced. For complex algebraic calculations, a systematic, tabular layout for the work can be very beneficial [34]. Furthermore, one should avoid doing more than one logical operation per line of the calculation, but should rewrite an expression whenever a new operation is done. This limits the need for retaining the results of calculations in memory, which can be another frequent source of errors.
2.3.11 Use of computer algebra software
The earlier warnings in this chapter notwithstanding, the use of computer algebra (CA) software can be a very effective way of minimizing errors in symbolic calculations. In the case of non-trivial calculations, the best way of using these systems is not to set up the problem in its entirety and turn the software loose on it. Instead, although the machine may be employed to do the tedious work, the operator should keep a close eye on the proceedings, use judgment, and be prepared to give the software guidance from time to time. This is especially true in the case of potentially tricky problems, such as operations involving functions with branch cuts, and definite integration. In such cases, it is a good practice to simplify the problem as much as possible beforehand. For instance, when integrating, it is a good idea to try to eliminate or combine constants in the integrand and break the integral up into simpler operations – integrating a sum term by term, for example [14]. Generally, in working with mathematical expressions by using a CA system, it is important to know if these expressions have any discontinuities, and where they are located. Sometimes CA systems have trouble correctly doing definite integrals by evaluating the indefinite integrals at the limits of integration, in cases where the arguments of integration at these limits are singular. In such cases, a suitable approach may be to evaluate the definite integrals by using the directional limit capability of the CA system. (Directional limits are used, rather than non-directional ones, because one would like to stay within the limits of integration.) An example of the use of this technique is given in Ref. [37]. It was mentioned earlier that a very common form of human error when using CA systems is to forget about definitions that one has made earlier in the session.
Therefore, one should either get in the habit of removing the values that have been assigned to variables once they are finished with, or employ long descriptive variable names that are unlikely to be used twice.
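Although the syntax for clearing definitions differs between CA systems, the habit itself can be sketched in any interactive language. A Python analogue (the variable names are arbitrary): clear a temporary binding as soon as it has served its purpose, so that a stale use fails loudly instead of propagating silently.

```python
x = 5              # temporary value, used for a quick numerical check
check = x**2 + 1   # 26, as intended
del x              # clear the binding as soon as it has served its purpose

try:
    stale = x**2 + 1          # a forgotten definition would silently give 26
except NameError:
    stale = None              # instead, the stale use is now caught loudly
print(check, stale)
```

An error that announces itself is far cheaper than an expression that quietly carries a leftover value through the rest of the session.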
2.3.12 Avoiding transcription errors
When computers are used, in order to minimize the errors that occur as a matter of course when transcribing information by hand, it is best to keep data and software in a machine-readable format as much as possible. If it is necessary to enter printed information into a computer, manual transcription can be avoided by using a scanner and some character recognition software. (See also the discussion in Section 1.3.3.12.) Computer algebra systems can provide the ability to translate the algebraic expressions they produce directly into computer source code, in order to allow numerical calculations to be done.
2.4 Testing for errors

2.4.1 General remarks

Testing is an essential aspect of doing calculations, although it is usually not emphasized, or even discussed, in references on mathematical methods. It is not necessarily a trivial
issue, and indeed it may take more time (sometimes much more time) to gain confidence in the results of one’s calculations than it took to do the calculations in the first place. For example, in realistic engineering calculations (as opposed to simple problems used for pedagogical purposes) between about 50% and 90% of the work goes into verifying that a result is correct [21].
2.4.2 Getting the correct result for the wrong reason

A particularly troublesome possibility should be noted, which is that it is possible to get exactly the correct answer to a problem for the wrong reason. This may happen because of an accumulation of “trivial” errors that somehow cancel out the ones caused by the incorrect reasoning. Alternatively, there may be no small errors, but a striking coincidence that causes a result that arises from incorrect assumptions at a very fundamental level to agree with the result obtained using the correct approach. There are a number of major historical examples of this possibility. One is a calculation of nuclear scattering that was done by Ernest Rutherford using classical physics, which happened to agree with the one done correctly using quantum theory [38]. Another is the famous “Drude formula” for the conductivity of a metal. This is another example of where a classical approach was incorrectly applied to a quantum problem, but which nevertheless yielded the correct answer purely by coincidence [39]. Yet another example is the result for the radius of the event horizon of a black hole. In this case, the answer obtained by Laplace in the eighteenth century using a Newtonian approach coincidentally agrees exactly with the result for a non-rotating black hole obtained using general relativity by Schwarzschild in the twentieth century [40]. It would be a mistake to conclude that such occurrences are unimportant, since one has obtained the correct result anyway. The difficulty is that this kind of accident may cause one to put misplaced confidence in the general analytical method, which can result in a major failure if it is applied to a slightly different problem. (Admittedly though, as in the above cases, such results can have considerable heuristic value.)
2.4.3 Predicting simple features of the solution from those of the problem

An important aspect of the testing process (but not the only one) is to use a simple feature of the problem to predict a simple feature of the solution, and then to see if one's actual solution has that simple feature. One may know, for example, that in certain special cases (e.g. when some parameter approaches infinity or π, etc., depending on the problem) the result must have some particular value, and this can be compared with the calculated one. For instance, if one were calculating the strength of the magnetic field produced by a finite current-carrying conductor, one would know that the value at infinity must be zero, and this could be compared with the calculated value. Other features that may be checked include:

(a) the direction in which a function changes with some change in a parameter (e.g. if it should be increasing, is it doing so?),
(b) the number and positions of maxima, minima, and inflection points,
(c) local curvatures (positive or negative?).

Another tactic is to observe the symmetry properties of the parameters. One looks at parameters that play the same roles in the problem, and sees if they appear in the solution in the same form as each other. For example, if the problem involved three orthogonal dimensions a, b, and c, in a situation in which none of these assumed unique importance, one would recognize that an expression such as (a² + b² + c²) could be acceptable, whereas (a² + ab + ac) would not be. This is because exchanging a with either of the other two parameters would change the result. Alternatively, in a problem containing parameters that play unique roles, one would not expect to be able to exchange them in the solution with those that had different roles. This technique can be applied on a line-by-line basis as one proceeds through a calculation.
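A symmetry check of this kind is easy to mechanize. The sketch below uses SymPy as an illustrative CA system (the helper function is ours, not a built-in): it swaps a pair of interchangeable parameters and tests whether the expression survives unchanged.

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

def symmetric_under_swap(expr, u, v):
    """True if expr is unchanged when the parameters u and v are exchanged."""
    swapped = expr.subs({u: v, v: u}, simultaneous=True)
    return sp.simplify(expr - swapped) == 0

ok = a**2 + b**2 + c**2       # treats a, b, and c equivalently
bad = a**2 + a*b + a*c        # singles out a

print(symmetric_under_swap(ok, a, b))    # True
print(symmetric_under_swap(bad, a, b))   # False
```

Applied line by line, such a check quickly flags the point in a derivation at which a symmetry was accidentally broken.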
2.4.4 Dimensional analysis

With this technique (also referred to as “dimensional checking”5), one looks at the dimensions of the solution, as composed of its constituent variables, to see if they are the correct ones. For example, energy has dimensions of mass × length² / time², and any solution to a problem involving the determination of an energy must involve combinations of parameters that, in aggregate, have these dimensions. If the answer were presented in the form of force × length, then, since force has dimensions of mass × length / time², the result would have the correct dimensions. The technique clearly does not work if the solution is dimensionless, as in the case of an angle, for example. The method of dimensional analysis does not even require that one have a problem, or even an equation. A single expression involving more than one term can be checked for internal self-consistency by examining the dimensions of the terms to see whether they are the same. The use of special cases, symmetry properties of the variables, dimensional analysis, and other methods to check mathematical results is discussed at length in Ref. [41]. A detailed discussion of dimensional analysis, and other checking methods, can also be found in Ref. [34].
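Some CA systems can carry this dimensional bookkeeping automatically. As a hedged illustration (using SymPy's units module, one possibility among many and not something prescribed by the text), the force × length example above can be checked mechanically:

```python
import sympy as sp
from sympy.physics.units import joule, newton, kilogram, meter, second, convert_to

# force x length should reduce to the units of energy:
print(convert_to(newton * meter, joule))

# and energy, expressed in base units, should be mass * length**2 / time**2:
print(convert_to(joule, [kilogram, meter, second]))
```

If an expression cannot be converted to the expected unit without a leftover dimensional factor, the check has caught an inconsistency.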
5 The terminology can be a source of confusion. The term “dimensional analysis” is also the name of a comparatively sophisticated technique (also based on dimensional arguments) used to find approximate solutions to differential equations in physics.

2.4.5 Further checks involving internal consistency

Other approaches also make use of the necessity that the problem be internally consistent. For example, one can take the proposed solution to a problem that is represented by some equation (e.g. a differential equation), and substitute it back into the original equation in order to see whether it is a solution. One can apply this method on a line-by-line basis. (This would be employed primarily when CA systems are used.) Since many mathematical operations have an inverse, the
inverse operation can be applied to an expression to see whether the result corresponds to the original expression. For instance, the indefinite integration of a function will result in an expression that, if differentiated, should result in the original function. Other examples include expanding a factored expression, and inverting an inverse matrix. Clearly, this method will not work for the calculation of definite integrals, determinants of matrices, or other operations that involve the loss of information. See also the discussion on page 52.
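These round-trip checks are easily scripted. A minimal sketch, using SymPy as a stand-in for whatever CA system is actually at hand (the examples chosen are ours):

```python
import sympy as sp

x = sp.symbols('x')
f = x * sp.cos(x)

# integrate, then differentiate: should recover the original integrand
F = sp.integrate(f, x)
assert sp.simplify(sp.diff(F, x) - f) == 0

# invert a matrix, then multiply back: should recover the identity
M = sp.Matrix([[2, 1], [5, 3]])
assert M * M.inv() == sp.eye(2)

# expand a factored expression, then factor it again
p = (x + 1)**3
assert sp.factor(sp.expand(p)) == p

print("all round-trip checks passed")
```

As noted above, the method fails for operations that lose information (definite integrals, determinants), since no inverse exists to apply.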
2.4.6 Existence of a solution

In the case of some problems (involving, for example, the solution of systems of linear equations, or of first-order differential equations), general methods exist for establishing whether or not the problem has a solution, without actually finding it. This is one of many examples of obtaining information about a problem without solving it, which can then be used to test the solutions.
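For linear systems, the Rouché–Capelli criterion gives exactly such a test: Ax = b is solvable if and only if the coefficient matrix and the augmented matrix have the same rank. A small illustration (SymPy used for convenience; the helper function and matrices are ours):

```python
import sympy as sp

A = sp.Matrix([[1, 2],
               [2, 4]])        # singular matrix: rank 1

def has_solution(A, b):
    # Rouche-Capelli: Ax = b is solvable iff rank(A) equals the rank of
    # the augmented matrix [A | b]; no solution is computed in the process.
    return A.rank() == A.row_join(b).rank()

print(has_solution(A, sp.Matrix([3, 6])))   # True  (b lies in the column space)
print(has_solution(A, sp.Matrix([3, 5])))   # False (inconsistent system)
```

If a computed "solution" is produced for a system that this test shows to be inconsistent, something has gone wrong upstream.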
2.4.7 Reasonableness of the result

One can make use of general principles of physics and mathematics to test the solutions to problems. For example:

(a) the result must satisfy the laws of conservation of mass, energy, charge, momentum, and angular momentum;
(b) the result must obey the laws of thermodynamics;
(c) probabilities must always be real and positive;
(d) the principle of causality must be observed;
(e) the symmetry properties must be correct;

and so on. Regarding the last consideration, an important but subtle point to bear in mind is that the symmetries of the solutions of a set of equations are not necessarily the same as those of the equations themselves [42]. One can also examine more particular considerations. For example, a diffusion problem should not result in wave behavior; and equations that are expected to have analytic solutions (as is usually the case in physics) should not lead to ones with discontinuities. Of course, knowing whether or not a result makes sense comes more easily with experience. With enough experience, it may not even be necessary to look at the details of the calculation in order to see that something is wrong. Such a realization may result from bringing a number of considerations from diverse areas to bear on the problem. This approach may be thought of as arising from an intuitive “holistic,” as opposed to an “algorithmic,” understanding of the physics. An example of this has been discussed by Feynman [43].
2.4.8 Check calculations

If a calculation is checked by redoing it and comparing the results, it can help (as discussed in a general context on page 13) to place the original aside for a few weeks or months before
repeating it. This helps to ensure that, if mistakes are made in the check calculation, they will probably not be the same as the ones in the original. (Although there is unfortunately a tendency to repeat certain kinds of error [34].) If the two calculations are found to be different, one should inspect the check calculation first, since it was probably done less carefully than the original [44]. It also helps if the check calculation can be done by using a different method than the original one. In this category, the most important technique is that of obtaining an order-of-magnitude estimate of the result (often involving a “back-of-the-envelope” calculation). One does this in the following way.

(a) Making plentiful use of sketches, as discussed on page 42.
(b) Approximating parameters in the problem (to within a power of 10).
(c) Approximating the dimensionality (e.g. if one of the length scales in the problem is much larger than the others, then perhaps the problem could be considered to be one-dimensional).
(d) Simplifying problems by working in convenient limits. For example, if one is calculating the field produced by a solenoid, perhaps the solenoid could be considered to be infinitely long, or of zero length (i.e. a single loop of wire), or of zero length and zero diameter (i.e. a point dipole).
(e) Approximating the fundamental physics (e.g. if the velocity is not too large compared with the speed of light, perhaps a non-relativistic approach could be used).
(f) Simplifying the model (e.g. if a damping term in the equation of motion of a system is small compared with the others, then maybe that term could be dropped).
(g) Focusing on the major contribution to the result, and ignoring the rest.
(h) Using dimensional approximations (or “dimensional analysis”) – taking the relevant parameters in a problem, along with any necessary physical constants, and combining them in such a way as to obtain a quantity with the required dimensions, in order to obtain an estimate of the desired result (see Refs. [32] and [34]).
(i) Approximating functions used in the calculation – e.g. for x ≪ 1, one can write (by expanding the function in a power series): (i) sin(x) ≈ x, (ii) cos(x) ≈ 1 − x²/2, (iii) tan(x) ≈ x, (iv) exp(x) ≈ 1 + x.
(j) Approximating operations in the calculation – for example, integrals of functions of a parameter x can be approximated by expanding the function in a power series in x, integrating term by term, and keeping only the first few terms (valid for x ≪ 1). Simple approximations are also available for derivatives. See Ref. [45] for examples of these and other operations.
(k) Once the problem has been boiled down to its simplest possible approximate form, it is often recognizable as one of a relatively small number of standard simple problems in physics, which may be tabulated (see, e.g., Refs. [46] and [47]), and for which the solution is already known.
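Items (i) and (j) above can themselves be checked, or carried out, mechanically. A sketch using SymPy (an illustrative choice; Si denotes the sine integral, and the integrand and limits are arbitrary examples of ours):

```python
import sympy as sp

x = sp.symbols('x')

# series() reproduces the standard small-x approximations:
print(sp.series(sp.sin(x), x, 0, 4))   # x - x**3/6 + O(x**4)
print(sp.series(sp.exp(x), x, 0, 2))   # 1 + x + O(x**2)

# estimate a definite integral by integrating a truncated series term
# by term (valid when the upper limit is small):
f = sp.sin(x) / x
approx = sp.integrate(sp.series(f, x, 0, 4).removeO(), (x, 0, sp.Rational(1, 2)))
exact = sp.Si(sp.Rational(1, 2))       # exact answer: the sine integral Si(1/2)
print(float(approx), float(exact))     # the two values agree closely
```

The truncated-series estimate is accurate here to better than one part in a thousand, which is ample for an order-of-magnitude cross-check.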
Graphical, and other qualitative, methods of solving problems may also be helpful. For example, integration may be carried out using a graphical technique [28], as can differentiation [32]. Laplace's equation in two dimensions can be solved by graphical means using what have been called “Richardson's Rules” [48,49]. Schrödinger's equation, and other differential equations, can be tackled using graphical methods and other simple approximation techniques [21,50]. Graphical methods can, at least in some cases, provide solutions with surprisingly high accuracy (much better than to within an order of magnitude). For example, the technique described in Ref. [28] for integration is said to be usually correct to two significant figures, and often almost three. With his method of solving Laplace's equation, Richardson claims to achieve accuracies of 1% [48]. A useful collection of qualitative methods for solving physics problems in general, and quantum theoretical ones in particular, is provided in Ref. [32]. The use of numerical methods to check analytical calculations (such as the numerical evaluation of a definite integral) is another approach. This comes under the classification of “looking at special cases.” If numerical calculations are being used to check calculations done using a different method (analytical or numerical), and the results match to eight significant figures, the latter are probably correct [28]. As mentioned on page 44, in some cases one may be able to hem the solution to a problem in between upper and lower bounds. It is often possible to pose simple auxiliary problems in order to derive such bounds, which may be much more easily deduced than an exact (or even an approximate) solution to the original problem. Errors in the solution to a problem can sometimes be more easily spotted if it can be recast and reexamined using a different coordinate system. For this purpose, the use of “conjugate variables” may be helpful.
For example, instead of looking at the results of a time-dependent problem using time, it might be helpful to use frequency. In many problems in the field of condensed matter, it is often helpful to work in “momentum space” rather than “real space.” The different variable types are typically linked by Fourier transforms.
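A minimal example of the numerical cross-check mentioned above, using SymPy for both the symbolic result and the quadrature (the choice of system and of integrand is ours, for illustration only):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-x**2)

symbolic = sp.integrate(f, (x, 0, sp.oo))        # closed form: sqrt(pi)/2
numeric = sp.Integral(f, (x, 0, sp.oo)).evalf()  # numerical quadrature

print(symbolic)
print(float(symbolic), numeric)
assert abs(float(symbolic) - float(numeric)) < 1e-10
```

Agreement to many significant figures between two independent evaluation routes gives strong (though not absolute) confidence in the symbolic result.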
2.4.9 Comparing the results of the calculation against known results

This category of testing includes comparing the results of the calculation against those published in books, tables, journals, websites, etc., possibly in the form of special cases. Such an approach is often used to test computer algebra software. In the case of software for numerical computations, the use of reference datasets with compilations of input values and the “correct” outputs for a particular operation may be useful. For example, a National Institute of Standards and Technology (NIST) website contains reference datasets that may be used to check numerical statistical procedures, such as linear regression, nonlinear regression, and analysis of variance [24]. These sets, which may be either synthesized data or “real world” data, are accompanied by certified values of parameters that can be derived from them. In the case of linear regression, these include estimated parameters, standard deviations of the estimates, R-squared, etc. This appears to be one of the few methods available for testing linear and nonlinear regression routines. The testing of statistical software is discussed in Ref. [51].
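The idea behind such reference datasets can be illustrated with a synthetic set whose certified values are known by construction (NumPy is used here for the fit; the dataset is invented for illustration and is not one of the NIST sets):

```python
import numpy as np

# a synthetic dataset in the spirit of the NIST reference sets: the
# "certified" answer is known exactly because the data were built from it
x = np.arange(10, dtype=float)
y = 2.0 + 3.0 * x                       # intercept 2, slope 3, no noise

A = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)                             # approximately [2. 3.]
assert np.allclose(coef, [2.0, 3.0])
```

A regression routine that fails to recover the certified intercept and slope on such a set to near machine precision is suspect.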
2.4.10 Detecting errors in computer algebra calculations

In general, the computational checks that are used in hand calculations are applicable to those found using CA software. In the case of self-consistency checks, where one applies the inverse of a given operation to an expression in order to see whether the result matches the original expression (see page 48), there is an additional consideration. Generally, CA systems are much better at simplifying expressions that are equivalent to zero to this value, than they are at simplifying an expression that is not equivalent to a rational number to a particular form [2]. Hence, in most cases, the best approach is to try and simplify the difference between the original and inverted result to zero. For instance, assume that one would like to find the indefinite integral of a function f(x) with respect to x. The CA system is instructed to do this, and produces an expression g(x), where (supposedly)

    g(x) = ∫ f(x) dx.    (2.5)

Then, rather than simplifying the expression corresponding to

    (d/dx) g(x),    (2.6)

using the CA system's simplification command,6 and comparing it with the simplified form of f(x), try simplifying the expression corresponding to

    f(x) − (d/dx) g(x),    (2.7)
and seeing whether it equals 0. Unfortunately, for most expressions, it will not be possible to simplify the differences to zero in a reasonable number of steps. An alternative approach is to look at special cases by substituting random numbers in the domain of interest, for some or all of the variables, and seeing whether the results are at least approximately equal to zero. If the CA system has a bug, the result will probably be wrong for most random substitutions [2]. A way of rapidly comparing the original and inverted results for a large number of points, at least approximately, is to plot them side by side and visually examine them. Another way of testing results obtained using a CA system is to compare them with those obtained by other such systems. It is somewhat unlikely that two CA systems will have the same bug. However, cases in which the same wrong result is returned by two or more different systems are common enough that this technique should be used with caution [17]. Certainly, it should never be used on its own. Running several CA systems simultaneously on one's computer should be feasible in most cases. The comparison of symbolic results with numerical ones is facilitated by the presence of numerical analysis packages in some CA systems.

6 This command often takes the form of Simplify[expr], where expr is some mathematical expression.
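The random-substitution check might be sketched as follows (SymPy is again used purely as an illustrative CA system; the integrand, tolerance, and sample count are arbitrary choices of ours):

```python
import random
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)**2 * sp.cos(x)
g = sp.integrate(f, x)                  # supposedly g'(x) = f(x)
residual = sp.diff(g, x) - f            # difference of original and inverted result

# When simplifying the residual to zero symbolically is too hard,
# substitute random numbers in the domain of interest and check that
# it is at least numerically zero at each point:
random.seed(0)
for _ in range(5):
    x0 = random.uniform(0.1, 10.0)
    val = residual.subs(x, x0).evalf()
    assert abs(val) < 1e-9, f"mismatch at x = {x0}"
print("random spot checks passed")
```

A single failing substitution is enough to show that the inverted result does not match the original, whereas several passing ones give only probabilistic (but usually adequate) assurance.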
Summary of some important points

2.2 Sources of error

(a) Conceptual difficulties.
(b) Transcription errors.
    (i) During manual calculations: especially when writing is untidy and cramped, and when more than one logical operation is attempted per line in the calculation.
    (ii) When using a computer: typing errors are very common.
(c) Errors in technique.
    (i) Examples: integrating in regions containing infinite discontinuities or branch cuts, creation of spurious solutions when solving equations, misuse of asymptotic series.
    (ii) Errors in unit conversion (e.g. between SI and cgs) are very common.
(d) Errors in published tables. In published tables of integrals, errors have historically been very common – an 8% error rate was found in one well-known table of indefinite integrals.
(e) Problems during the use of computer algebra systems.
    (i) Most common form of human error: forgetting about definitions made earlier in the session.
    (ii) Software problems: especially during definite integration of functions containing infinite discontinuities or branch cuts, evaluation of limits, simplification of expressions containing branch cuts.
(f) Errors in numerical calculations (see References).
    (i) Errors in numbers calculated by commercial numerical software can run at about 1% per 4000 lines of code.
    (ii) Very common sources of errors: roundoff when subtracting two approximately equal numbers, misapplication of polynomial formulas for calculation (e.g. of integrals containing singularities or asymptotes).
2.3 Strategies for avoiding errors

(a) Avoiding conceptual difficulties.
    (i) Gain an understanding of the problem by solving some particularly easy cases.
    (ii) Orient oneself by making order-of-magnitude estimates of the result.
(b) Use of diagrams. A very important measure during all stages of the problem.
(c) Write down and use a clear, consistent notation (very important).
(d) Keep the calculation as simple as possible.
    (i) Are approximate answers adequate?
    (ii) Is one demanding more information than is needed?
(e) Modularize the calculation. Break it down into smaller units.
(f) Find out what is known. Avoid reinventing the wheel by not recalculating what has already been done before.
(g) Outsourcing the problem.
    (i) Get someone with experience to solve the problem – e.g. by using commercial numerical software. Be prepared to check the results.
    (ii) New software is very buggy – try to use more mature (e.g. second generation or later) versions if possible.
    (iii) When borrowing computer code, find out about its limitations and what it was intended to do.
(h) Check and correct as you go. Make sure that each stage in a calculation is correct before moving to the next one.
(i) Do not substitute numbers for variables until necessary. Certain checks are facilitated by leaving this operation until the very last stage.
(j) Practices for manual calculations.
    (i) Ensure that calculations are tidy and liberally spaced.
    (ii) Avoid carrying out more than one logical operation per line.
(k) Use of computer algebra software.
    (i) This is a very effective way of minimizing errors in symbolic calculations.
    (ii) Get in the habit of removing variable definitions when these are no longer being used.
    (iii) Guide the computer through complex calculations, monitoring intermediate results and giving help when necessary.
    (iv) Simplify difficult problems, such as definite integrals, as much as possible before turning them over to the computer.
(l) Avoiding transcription errors.
    (i) Avoid manual transcription as much as possible by keeping data and software in machine-readable form.
    (ii) Use a scanner and character-recognition software to convert printed information.
    (iii) Some computer algebra programs can convert symbolic calculations into conventional programming code for numerical calculations.
2.4 Testing for errors

(a) It can take more time to gain confidence in a result than to derive it.
(b) The correctness of a solution in a particular instance can be misleading, since it is possible to obtain an exactly correct result for the wrong reason.
(c) Use simple features of the problem to predict simple features of the solution, e.g. as follows.
    (i) Look at special cases for which the answer is known.
    (ii) Examine the symmetry properties of the parameters.
(d) Dimensional analysis.
    (i) Check results to see if they have the correct dimensions.
    (ii) Ensure that separate terms within an expression have consistent dimensions.
(e) Other aspects of internal consistency.
    (i) Insert the solutions of a problem into the starting equations in order to see whether they are solutions.
    (ii) On a line-by-line basis, apply the inverse operation to an expression resulting from a given operation to see if this generates the original expression (mostly for CA systems).
(f) In some cases general methods exist to find out whether a given problem actually has a solution.
(g) Does the result make sense? Use fundamental principles of physics and mathematics (e.g. conservation of energy, the requirement that probabilities be real and positive) to check solutions.
(h) Redoing the calculation.
    (i) If possible, wait for a period of weeks or months before repeating the calculation.
    (ii) Redo the calculation using a different method than the original (e.g. an order-of-magnitude estimate, a graphical method, a numerical method, etc.).
    (iii) Consider using different variables, such as frequency instead of time, in the repeat calculation.
(i) Comparing the results of the calculations against known results (at least in special cases: use information in books, tables, journals, websites, etc.). Web-based test data are available for testing linear and nonlinear regression routines.
(j) Special considerations concerning detection of errors in computer algebra calculations.
    (i) Apply the inverse operation to an expression and compare with the original expression (best approach: try to reduce the difference between the original and inverted result to zero).
    (ii) Use the built-in numerical analysis capabilities of CA systems to compare symbolic results with numerical ones.
    (iii) Employ several CA systems and compare their results (however, keep in mind that different systems sometimes return the same incorrect results).
References

1. J. Reason, Human Error, Cambridge University Press, 1990.
2. D. R. Stoutemyer, Not. Am. Math. Soc. 38, 778 (1991).
3. C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers, McGraw-Hill, 1978.
4. M. T. Holtzapple and W. Dan Reece, Foundations of Engineering, 2nd edn, McGraw-Hill, 2002.
5. M. Bell and S. E. Green, Proc. Phys. Soc. 45, 320 (1933).
6. M. Jeng, Am. J. Phys. 74, 578 (2006).
7. M. Klerer and F. Grossman, Industrial Math. 18, 31 (1968).
8. I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series and Products, 4th edn, Academic Press, 1965.
9. H. B. Dwight, Tables of Integrals and Other Mathematical Data, 4th edn, Macmillan Co., 1961. This table of integrals has the reputation of being particularly easy to use. Reference [7] inaccurately gives the publication date of the fourth edition as 1966.
10. I. S. Gradshteyn, I. M. Ryzhik, Alan Jeffrey, and Daniel Zwillinger, Table of Integrals, Series and Products, 6th edn, Academic Press, 2000.
11. E. Talvila, Am. Math. Monthly, p. 432, May 2001.
12. J. M. Aguirregabiria, A. Hernández, and M. Rivas, Comput. Physics 8, 56 (1994).
13. S. Wolfram, The Mathematica Book, 4th edn, Cambridge University Press, 1999.
14. http://support.wolfram.com/mathematica/kernel/Symbols/System/Integrate.html
15. M. Wester, in Computer Algebra Systems: a Practical Guide, M. J. Wester (ed.), John Wiley & Sons, Ltd., 1999.
16. A. Dingle and R. J. Fateman, in ISSAC '94: Proceedings of the International Symposium on Symbolic and Algebraic Computation, ACM, New York, NY, USA, 1994, pp. 250–257.
17. D. Gruntz, in Computer Algebra Systems: a Practical Guide, M. J. Wester (ed.), John Wiley & Sons, Ltd., 1999 (see also Ref. [15]).
18. See, for example, http://groups-beta.google.com/group/sci.math.symbolic
19. L. Hatton and A. Roberts, IEEE Trans. Software Eng. 20, 785 (1994).
20. R. W. Hamming, Numerical Methods for Scientists and Engineers, 2nd edn, Dover Publications, Inc., 1973.
21. F. S. Acton, Numerical Methods That Work, The Mathematical Association of America, 1990.
22. B. Carnahan, H. A. Luther, and J. O. Wilkes, Applied Numerical Methods, John Wiley & Sons, Inc., 1969.
23. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes: The Art of Scientific Computing, 3rd edn, Cambridge University Press, 2007.
24. http://www.itl.nist.gov/div898/strd/
25. W. Kahan, Mathematics written in sand – the hp-15C, Intel 8087, etc., Proc. Joint Statistical Mtg. of the American Statistical Association, 1983, pp. 12–26.
26. E. J. Barbeau, Mathematical Fallacies, Flaws, and Flimflam, Mathematical Association of America, 2000.
27. http://math.vanderbilt.edu/~schectex/commerrs/
28. F. S. Acton, REAL Computing Made Real: Preventing Errors in Scientific and Engineering Calculations, Princeton University Press, 1996.
29. P. J. Roache, Verification and Validation in Computational Science and Engineering, Hermosa Publishers, 1998.
30. D. E. Post and L. G. Votta, Physics Today 58, 35 (2005).
31. W. G. Hiscock (ed.), David Gregory, Isaac Newton, and Their Circle (Oxford: printed for the editor, 1937), p. 25.
32. A. B. Migdal, Qualitative Methods in Quantum Theory, Addison-Wesley, 1977.
33. G. Polya, How To Solve It: A New Aspect of Mathematical Method, 2nd edn, Princeton University Press, 1957.
34. E. B. Wilson, Jr., An Introduction to Scientific Research, Dover, 1990.
35. S. McConnell, Code Complete, 2nd edn, Microsoft Press, 2004.
36. See, for example, P. M. Morse and H. Feshbach, Methods of Theoretical Physics, McGraw-Hill, 1953.
37. S. Hassani, Mathematical Methods Using Mathematica: for Students of Physics and Related Fields, Springer, 2003.
38. G. L. Squires, Problems in Quantum Mechanics: With Solutions, Cambridge University Press, 1995. (This book also discusses other examples of where classical calculations have yielded the same result as the quantum mechanical ones, in regimes where one might expect a classical approach to be invalid.)
39. N. W. Ashcroft and N. D. Mermin, Solid State Physics, Saunders College, 1976.
40. M. Ludvigsen, General Relativity: A Geometric Approach, Cambridge University Press, 1999.
41. B. Cipra, Misteaks and How to Find Them Before the Teacher Does, 3rd edn, A. K. Peters, 2000.
42. M. P. Marder, Condensed Matter Physics, Wiley-Interscience, 2000.
43. R. P. Feynman, Surely You're Joking, Mr. Feynman!, Vintage, 1992, pp. 77–78.
44. G. L. Squires, Practical Physics, 3rd edn, Cambridge University Press, 1985.
45. See Ref. [32]. Note that in the discussion on estimating the integral of a function by expanding in a power series and integrating term by term (p. 7), the first term in the resulting series has a sign error.
46. H. L. Anderson (editor-in-chief), A Physicist's Desk Reference: The Second Edition of Physics Vade Mecum, American Institute of Physics, 1989.
47. G. Woan, The Cambridge Handbook of Physics Formulas, Cambridge University Press, 2000.
48. L. F. Richardson, Phil. Mag. 15, 237 (1908).
49. H. Poritsky, AIEE Trans. 57, 727 (1938).
50. J. Mathews and R. L. Walker, Mathematical Methods of Physics, 2nd edn, Benjamin/Cummings, 1970.
51. L. Wilkinson, in Computational Statistics: Papers Collected on the Occasion of the 25th Conference on Statistical Computing at Schloss Reisensburg, P. Dirschedl and R. Ostermann (eds.), Physica-Verlag, 1994.
3 Basic issues concerning hardware systems
3.1 Introduction

This chapter discusses a number of important issues that affect many classes of experimental hardware. For instance, vibrations can cause problems with optical, cryogenic, and other apparatus, and overheating may afflict electronic instruments, computers, and numerous other items.
3.2 Stress derating The practice of derating (otherwise known as “providing a margin of safety”) is a means of improving the reliability of devices exposed to physical stresses (including power, temperature, electric current, pressure, etc.). If devices are operated continuously at their rated maximum stress levels, they are not unlikely to undergo failure in a relatively short time.1 Derating consists of operating them at reduced stress levels, at which the probability of failure is comparatively small. The derating factor of a device is usually expressed as a ratio of the stress needed to achieve good reliability to the rated maximum stress. While the term “derating” is normally used in conjunction with electrical and electronic components [1], the principle is a general one. Generally, systems comprising a collection of components (e.g. a power supply or a power amplifier) are not derated as a whole, because derating has (or should have been) already been done for each of the constituent components. That is, derating is usually done at the component level, rather than the system (collection of components) level. (An exception is the temperature derating of power supplies – see below.) The amount of derating that is appropriate in a given case depends in part on the level of reliability which is needed. So, for example, the power dissipated by a resistor may be derated to 0.8 of its maximum value in order to achieve normal levels of reliability, and to 0.5 of the maximum in order to achieve exceptionally high levels [1]. (The latter would be selected if reliability is critical, as in the case of devices used in circumstances where failure could adversely affect human safety, or if they serve a vital function in some apparatus.) 1
Note 1: If mechanical stresses are of concern (e.g. pressure in a gas storage vessel), fatigue is often the primary mode of failure. In such cases, it is the number of stress cycles, rather than the operating time, that determines whether failure is likely to occur. (See the discussion on page 101.)
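The cycle-count point in the note above can be made quantitative with Basquin's relation for high-cycle fatigue, which links stress amplitude to cycles-to-failure. The sketch below is purely illustrative: the function name, coefficient, and exponent values are hypothetical, and real designs must use measured S–N data for the specific material and part.

```python
def cycles_to_failure(stress_amplitude, strength_coeff, basquin_exponent):
    """Estimate the number of stress cycles to fatigue failure, N, from
    Basquin's relation: stress_amplitude = strength_coeff * N**basquin_exponent
    (basquin_exponent is negative, typically around -0.05 to -0.12)."""
    return (stress_amplitude / strength_coeff) ** (1.0 / basquin_exponent)

# Hypothetical steel-like parameters: coefficient 900 MPa, exponent -0.1.
# A 300 MPa stress amplitude then survives (900/300)**10 ~ 59 000 cycles.
```

This is why, for fatigue-limited parts, the accumulated number of stress cycles matters more than hours of operation.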
The use of wide safety margins is particularly important in laboratory work, because of the time and effort involved in repairing faulty apparatus, and the labor-intensive nature of research [3].

Operating temperatures may also need to be derated. Temperature derating is usually expressed as a difference rather than a ratio. In the case of resistors, for instance, this is the maximum temperature minus 20 °C for normal reliability levels, and the maximum temperature minus 40 °C for high levels. Derating factors can be interlinked, as in the case of power and temperature, so that operating a device at a higher temperature would reduce the derating factor for the power.

Changes in altitude may also affect derating factors. For example, in the case of power derating, heat is less easily dissipated by convection at high altitudes (e.g. at a mountaintop observatory), so that derating factors would have to be reduced. In the case of high-voltage devices, increases in altitude will lower breakdown voltages.

Guidelines for derating electrical and electronic components are provided in Refs. [1] and [2]. Derating data should be available from manufacturers. In the absence of specific information about a given device, general derating factors (e.g. 0.5 [3]) are sometimes suggested. However, these may not be appropriate in certain situations. For instance, in the case of current in electric motors, derating factors of 0.3 and 0.2 have been recommended for achieving “normal” and “high” reliability levels, respectively [1]. For the current in electric filaments, the respective quantities are 0.2 and 0.1. It is important to keep in mind that published derating factors are guidelines, not rules. Hence, it may in some instances be reasonable to exceed them, depending on the particulars of the application.
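As a minimal illustration of the ratio definition above, a derated operating limit can be computed and checked as follows (the helper names and example values are our own, not taken from any standard):

```python
def derated_limit(rated_max, derating_factor):
    """Maximum operating stress after derating: the rated maximum
    multiplied by the derating factor (a ratio between 0 and 1)."""
    return rated_max * derating_factor

def within_derating(operating_level, rated_max, derating_factor):
    """True if the operating stress respects the derating guideline."""
    return operating_level <= derated_limit(rated_max, derating_factor)

# Example: a 2 W resistor derated to 0.5 of its rating for high
# reliability may dissipate at most 1 W.
```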
The derating of power electronic equipment, such as power supplies, usually takes the form of reducing the amount of power that is drawn from the device (from its maximum rated value) above a certain temperature. This is done according to a power vs. temperature derating curve.

One has to be careful about how stresses are defined. For example, stated maximum a.c. current or voltage levels can refer to amplitude, peak-to-peak amplitude, or root-mean-square (RMS) amplitude. Alternatively, for instance, a “maximum power” rating may refer to a particular duty cycle (the percentage of time for which the power is applied). If a device is being used to drive a load, there may be some dependencies on the type of load that is being driven. In the case of relay and switch contacts, for example, the current derating factor would depend on whether the load was resistive, capacitive, or inductive.

Sometimes derating conditions are not given explicitly, but are implied by the way in which a particular device is normally used. This is a particularly important consideration if the part or equipment has been adapted for research use from another field in which it finds its most widespread application (a fairly common situation). For example, if an audio amplifier that is normally used to operate loudspeakers is employed to drive an electromagnet with a large inductance, special attention will have to be given to properly derating it for this unusual application. (It may also be desirable to modify the amplifier, by providing it with protective devices – see page 396.)

In some cases, derating (or excessive derating) will not lead to an improvement in reliability. Malfunction or even damage is a possibility. For instance, a mains-powered electrical or electronic device that is intended to operate at a particular line voltage will
malfunction if the voltage that is provided is very much less than this. Induction motors can actually burn out if they are operated at a voltage that is too low. To give another example, switch and relay contacts that are plated with anything other than gold generally require that a certain minimum current flows through them in order for them to work properly. Hence, one must ensure that a device is kept inside its correct operating range when a parameter is derated.
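The power vs. temperature derating curves mentioned above commonly hold the full rating up to a “knee” temperature and then fall linearly. The sketch below assumes that generic linear shape; the actual curve, knee, and cutoff for any given supply must be taken from its datasheet, and the numbers in the example are hypothetical.

```python
def allowed_power(ambient_c, rated_power_w, knee_c, cutoff_c):
    """Allowed power draw versus ambient temperature for a linear
    derating curve: full rated power up to knee_c, decreasing linearly
    to zero at cutoff_c."""
    if ambient_c <= knee_c:
        return rated_power_w
    if ambient_c >= cutoff_c:
        return 0.0
    return rated_power_w * (cutoff_c - ambient_c) / (cutoff_c - knee_c)

# Example: a supply rated 500 W up to 40 degC, derated to zero at
# 70 degC, may deliver only 250 W in a 55 degC rack.
```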
3.3 Intermittent failures

3.3.1 Introduction

The occurrence of intermittent failures (i.e. those that manifest themselves only a fraction of the time, at irregular intervals) is perhaps the worst type of reliability problem. This is because they are often exceedingly difficult to diagnose and fix. (In this sense, non-intermittent failures are much preferable.) Intermittent failures show up very commonly in electronic devices. In the world at large, one often finds that more than 50% of reported failures of electronic systems are diagnosed upon investigation as being free of faults, and that most of these are the result of intermittent failures [1]. In the discussion that follows, sporadic noise in an experimental measurement is considered an intermittent failure.
3.3.2 Some causes and characteristics

In electronic devices and systems, intermittent failures are caused by:

(a) poor electrical contacts (resulting in intermittent high resistances or open circuits) – in solder joints, connectors, switches, potentiometers and trimmers, ground connections, etc. (usually the most common cause of intermittent failures in electronic hardware),
(b) damaged or improperly made cables and cable assemblies, wires, and other conductors (intermittent high resistances or open circuits – also common),
(c) electromagnetic interference – caused by ground loops, sporadic sources of electromagnetic radiation (e.g. arc welders, faulty thermostats, radio transmitters, etc.), a.c. power-line transients and radiofrequency noise, etc.,
(d) electric arcing and corona in medium- and high-voltage circuits,
(e) power-line blackouts or brownouts,
(f) low-quality, overloaded, or faulty power supplies (e.g. in computers),
(g) overheating of equipment,
(h) moisture, dust, or other contamination on insulators (especially in very high-impedance, high-voltage, or high-frequency circuits),
(i) errors in software,
(j) short circuits (these occur occasionally, but not nearly as frequently as intermittent open circuits due to poor contacts, damaged conductors, etc.),
(k) burst (or “popcorn”) noise in electronic components, such as op amps and electrolytic capacitors.

Vibrational disturbances can also cause intermittent problems. This can occur, for example, in the case of electronic systems (e.g. motion of cables in high-impedance circuits), and optical measurements (e.g. noise created during activities involving interferometry). Partly or completely loose parts, objects, or debris in apparatus are sometimes responsible for intermittent failures. For example, loose bits of wire or solder in an electronic device can cause sporadic short circuits. Intermittent failures can also occur when a system is on the borderline of functioning, so that small changes in temperature, supply voltage, signal level, etc., shift it into a non-working state. This shift to a borderline working condition can occur as components (e.g. electronic devices) age.

Burst (or “popcorn”) noise takes the form of temporary voltage jumps (offset voltage shifts in op amps, for example) that occur at random intervals [4, 5]. In such cases, the device producing the noise may be quiet at one moment, only to become noisy again minutes or hours later [18]. Burst noise is usually not a problem with modern op amps obtained from reputable suppliers. Electrolytic capacitors also exhibit this phenomenon [6].

In optical systems, incidental etalon fringes can lead to intermittent problems. Also, diode lasers that are exposed to even tiny amounts of back-reflected light can exhibit intermittent mode-hopping behavior. Intermittent failures can also occur in the form of leaks in vacuum systems, such as in ultrahigh-vacuum systems where very small leaks are important, and in cryogenic systems where one can have temperature-dependent leak problems (cold leaks or superleaks). Fluctuating air temperature or humidity levels can cause strange problems in some apparatus, perhaps in the form of irregular disturbances during measurements (see pages 63 and 66).
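Burst noise behaves like a random telegraph signal: the offset jumps between two levels at unpredictable times, with long quiet stretches in between. A toy simulation (all parameter names and values are hypothetical, chosen only to convey the character of the phenomenon) is:

```python
import random

def burst_noise(n_samples, jump_uv=100.0, toggle_prob=0.01, seed=1):
    """Toy random-telegraph ('popcorn') noise: the offset toggles
    between 0 and jump_uv with probability toggle_prob per sample,
    so it dwells at each level for a random length of time."""
    rng = random.Random(seed)
    level = 0.0
    trace = []
    for _ in range(n_samples):
        if rng.random() < toggle_prob:
            level = jump_uv - level  # toggle between the two states
        trace.append(level)
    return trace
```

Because the dwell times are random and can be very long, a device exhibiting burst noise may pass a short bench test and still misbehave hours later, which is exactly what makes such faults so hard to diagnose.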
In systems that are sensitive to the density of the air (e.g. certain experiments involving optical interferometry), variations in barometric pressure can also cause difficulties [7].

Services (such as electricity, the Internet, cooling water, etc.) that are supplied by external agents and not under one’s direct control can exhibit intermittent reliability problems. For example:

(a) in the case of the electric supply, one can have blackouts, brownouts, and voltage spikes,
(b) the Internet can be a prolific source of problems – the network may slow down, or the connection may be lost, or a “denial-of-service attack” may occur,
(c) a communal cooling-water supply can be subject to unforeseen changes, causing variations in the water flow-rate, and consequent drift in the operational parameters of a cooled device such as a laser; or the temperature of the water could drop below the dew point, causing condensation problems in equipment.

As mentioned above, an important characteristic of intermittent failures is that they are often very difficult to locate, owing to their transient nature. This can result in the loss of huge amounts of time, resources, and morale. In fact, these problems are often never solved. People may just learn to live with the difficulty. Sometimes, the piece of apparatus that
is causing the problem is simply abandoned (which may well be a good idea in certain cases). Sometimes, intermittent failures eventually become permanent. For example, an intermittent open circuit may become a lasting one, as can an intermittent vacuum leak. This is usually fortunate, since it normally means that the defect responsible can be located and corrected with relative ease. If a problem does not reappear after a while, people will often say (or imply) that it has “fixed itself.” But, of course, this is virtually never true, and unless somebody has done something about it, the problem will almost certainly reappear sometime in the future.
3.3.3 Preventing and solving intermittent problems

Since intermittent failures can be such a headache to deal with, it is usually a good idea to take all reasonable precautions to prevent their occurrence. This includes, for example:

(A) (a) employing methods for creating electrical connections (especially solder joints) with proven reliability, and (b) paying careful attention to the quality of ground contacts, and avoiding accidental or incidental grounds (see Chapter 12),
(B) (a) using high-quality connectors, cables, and switches, in reasonably new and good condition, in electronic equipment, (b) protecting connectors and cables from abuse, and (c) regularly inspecting vulnerable contacts (e.g. in patch-cord cable connectors) for cleanliness, corrosion, and damage (see Chapters 11 and 12),
(C) avoiding the creation of ground loops in low-level analog circuits operating in the audio-frequency range, using a.c. line filters and surge arrestors on all equipment, providing equipment with shields and RF filters where necessary, etc. (see page 83 in this chapter for information about a.c. power-line transients and radiofrequency noise, and Chapter 11 concerning interference issues in general),
(D) providing sensitive mains-powered equipment with line-voltage conditioners or (preferably) uninterruptible power supplies (see page 87),
(E) using high-quality power supplies, which are well protected against anomalous loads and adequately derated (see page 396),
(F) keeping equipment cool and clean, and ensuring that environmental humidity levels are stable and not excessive (see pages 63 and 67),
(G) in high-voltage circuits, taking steps to reduce the possibility of unwanted discharges (see page 382),
(H) avoiding the use of very high-impedance electronic circuits, if possible, or (if they cannot be avoided) taking suitable precautions (see page 389),
(I) (a) selecting high-quality computer hardware and software, and (b) using suitable methods to design, construct, and debug computer programs (see Chapter 13),
(J) setting up optical systems in such a way as to prevent the formation of incidental etalon fringes (see Chapter 10 for information about this and other optical-system problems),
(K) for vacuum systems that may produce intermittent leaks, using seals and joining techniques with proven reliability, and protecting these from abuse, vibration, harmful contaminants, and unnecessary mechanical and thermal shock (see Chapter 6 and pages 233–260).
Guidelines for troubleshooting defective apparatus and software are provided on page 25 (see also Ref. [8]).
3.4 Effects of environmental conditions

3.4.1 Excessive laboratory temperatures and the cooling of equipment

3.4.1.1 Introduction

High ambient temperatures in a laboratory (and excessive temperatures due to other factors) can cause a multitude of problems, including, for instance:

(a) instrument measurement errors, frequency shifts of lasers, etc.,
(b) unreliable (perhaps intermittent) operation of apparatus (possibly caused by the activation of thermal-protection devices, thermal drifts in electronic component values, or the tripping of circuit breakers below their rated current),
(c) poor performance of apparatus (e.g. reduced computer-processor operating speeds, and a reduction in the capacity of power supplies),
(d) poor power-supply voltage regulation,
(e) degradation and early failure of electronic components (especially batteries and electrolytic capacitors),
(f) failure of computer hard drives and video adapters, and
(g) (last, but not least) uncomfortable conditions for laboratory workers.

Apparatus can also be affected indirectly by excessive environmental temperatures because of human intervention. For example, equipment may be switched off in order to reduce heat production that is causing personal discomfort in a room. Alternatively, items may be switched off because the temperature of the room exceeds the written specifications for some of the equipment.
3.4.1.2 Reduction and regulation of room temperatures

Heating by the sun is probably the most common cause of excessive laboratory temperatures. Minimizing this can be a simple and effective way of reducing such problems. If one is able to choose the room in which a laboratory is to be set up, there are several ways to reduce solar heating. For example, in the northern hemisphere, north- or east-facing rooms are preferable to south- or west-facing ones, and it is desirable not to locate the laboratory on the upper floors of a multistory building. The direct entry of sunlight into a room can make it very difficult to control the temperature [9]. Hence, rooms with windows that face the sun should be avoided in situations when temperature regulation is important.

The installation of window-blinds (or sun-blinds) is a good way of reducing solar heating. Internal window-blinds are not very useful for this purpose, since they absorb solar radiation and release heat into the room. On the other hand, types that are designed to be mounted on
the outside of a building are effective. These external blinds can greatly reduce, or possibly even eliminate, the need for air conditioning. It is also possible to obtain reflective films that are made to be attached to windows.

One can also place powerful air-cooled equipment, such as pumps and compressors, in a separate room from that containing the people and the more susceptible experimental equipment. (This could also have the advantage of reducing noise and vibrations.) If high-vacuum pumping equipment is to be used, the avoidance of air-cooled diffusion pumps in favor of other types that produce less heat, such as turbopumps, can be helpful.

Of course, there is also the possibility of installing air conditioning. As discussed on page 16, this approach has several benefits that make it very attractive. Not only does air conditioning keep the people and equipment cool, but it can also lower humidity and dust levels, and thereby reduce two other major contributors to equipment failure. Furthermore, some air conditioners can be used in reverse (pumping heat into a room), and thereby provide relatively economical heating when this is needed.

Selecting the size of an air conditioner for a given room is important, and should be done only after careful calculation and consultation. A very large unit will not necessarily be the best one, since an air conditioner that is too big will “short cycle,” and will not run long enough to effectively reduce the humidity. The choice of size is discussed in Ref. [10]. Like many mechanical devices, air conditioners require periodic (usually annual) maintenance.

Temperature fluctuations in a room caused by the moving air from air conditioners can be a problem with some equipment (e.g. electron microscopes). In such cases, a more stable environment can be achieved by cooling the room mostly by means of radiators, through which chilled water is passed [11]. Such arrangements do not have to be expensive.
It is possible to regulate room temperatures to within 0.1 °C by this means.
3.4.1.3 Measures for preventing the overheating of equipment

While working with individual items of equipment, it is important to remember not to block side vents, not to stack objects on top of a vented chassis, to keep air filters clean, and to regularly check that cooling fans are operating. One often finds that multiple cables passing behind rack-mounted equipment get in the way of fan openings, thereby blocking the flow of air, and making it difficult to access filters for the purpose of cleaning or replacement. In the case of equipment with passive cooling, it is possible that the effective removal of heat will take place only if the apparatus has a particular orientation [12].

Excessive amounts of dust on components inside electronic apparatus can act as insulating layers that prevent the release of heat [13]. Hence, the regular cleaning of the inside of such equipment may be called for. (This is a good idea for other reasons as well – dust can cause leakage currents in some electronic devices, and other problems.) Particular attention should be given to removing dust buildups on the internal grills of power supplies, since these can prevent the movement of air. Heat sinks are especially affected by dust, which hinders the free passage of air between their fins, and also acts as thermal insulation. Cooling fans can slow down or stall if they have large accumulations of dust [14]. Procedures for cleaning small computers are described in Ref. [13].
The failure of cooling fans is a common nuisance. Cooling fan reliability issues are considered on page 400. A more detailed discussion on the cooling of electronics can be found in Ref. [5].
3.4.1.4 Active cooling of individual items of equipment

The ambient temperature inside an electronics rack full of heat-producing equipment can reach 50 °C [5]. Sometimes, the resulting stress on instruments, even those with fans, can be excessive. The use of room air conditioning is one possible way of dealing with this problem, and generally this would be the preferred approach. Another possibility, if such air conditioning is unfeasible or inadequate, is to use some form of cooling arrangement that acts just within an enclosed rack or equipment cabinet. Several methods are available for doing this.

One approach is to use a special air conditioner that is designed to be mounted on a rack or other enclosure, called an “enclosure air conditioner.” These require the usual periodic maintenance that is associated with air conditioners, but have the virtue of being self-contained. Since these devices are mounted very close to the electronics, electromagnetic interference could be a problem in some cases. Vibrations and acoustic noise can also be troublesome.

If a compressed-air supply is available, a second approach is to use a special cooling device based on the expansion of compressed air, called a “vortex cooler.” These devices are relatively inexpensive compared with air conditioners, and take up comparatively little space on an enclosure. Also, electromagnetic interference is not an issue. Furthermore, since they have no moving parts, vortex coolers are maintenance-free. Compressed-air supplies are often contaminated with water and oil (see page 96), and hence the air provided to vortex coolers must normally be filtered. Periodic replacement of the filters is necessary. Although vortex coolers are generally very noisy, relatively quiet types are available. The main disadvantage of vortex coolers is that they require huge amounts of compressed air. Various methods for cooling equipment enclosures are discussed in Ref. [15].
With any active cooling system, one must always ensure that there is no possibility of condensation taking place on the electronics.
3.4.2 Moisture

3.4.2.1 Definitions

The presence of water, either as a liquid or a vapor, generally has an adverse effect on reliability. The amount of moisture in the air is normally measured by the “relative humidity” (or “RH”), in percent. This is the ratio of the water-vapor pressure to the maximum possible water-vapor pressure at the given temperature [16]. Hence, a relative humidity of 50% indicates that the air is holding half of the moisture that it is capable of holding at that temperature. Complete saturation of the air (the point at which the air can hold no more moisture) corresponds to 100% RH. The temperature at which this occurs is called the “dew point,” and at this value moisture starts to condense
onto surfaces. For a given moisture content, the relative humidity therefore rises as the temperature falls toward the dew point [1].
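The dew point can be estimated from temperature and relative humidity using the widely used Magnus approximation. The coefficients below are one common parameterization, valid roughly between 0 and 60 °C; this is an estimate, not a substitute for a calibrated hygrometer.

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Approximate dew-point temperature via the Magnus formula."""
    a, b = 17.62, 243.12  # Magnus coefficients (one common choice)
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# At 100% RH the dew point equals the air temperature; at 25 degC and
# 50% RH it is near 14 degC, so any surface colder than that (e.g. a
# water-cooled baseplate) is at risk of condensation.
```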
3.4.2.2 Harmful effects of moisture

High humidity levels can lead to detrimental effects. Many common materials will readily adsorb moisture from the air, to form thin films of water on their surfaces. When ionic contamination is also present (perhaps owing to the presence of atmospheric pollutants, skin secretions, or solder flux), the problems are often aggravated owing to the creation of an electrically conductive, and possibly corrosive, electrolyte. Such effects can also be caused by atmospheric moisture that has been absorbed by surface dust. (Subsequent faults in electronic devices are sometimes referred to as “hygroscopic dust failures” [17].) Water resulting from condensation can be particularly harmful. Some of the problems are as follows.

(a) Unprotected steel parts and devices rust in humid atmospheres. Steel antifriction bearings (e.g. ball bearings) are particularly vulnerable to harmful corrosion resulting from high humidity levels (see also page 218).
(b) “Galvanic corrosion” can occur if metals with different electrochemical potentials are in contact in the presence of an electrolyte (see page 99). This can cause problems with, for example, electric contacts. Such electrochemical activity can also lead to the generation of noise voltages in low-level d.c. circuits (see page 388).
(c) Moisture can cause current leakage (perhaps leading to severe drift and 1/f noise) and even short circuits in electrical and electronic systems. Such effects can be especially troublesome in very high-impedance (>10⁷–10⁸ Ω) circuits (see page 388), or those that contain closely spaced conductors. Corona and arcing may take place in high-voltage circuits (see page 384). High-frequency circuits are also affected by moisture that has been adsorbed onto or absorbed into insulators, because of the high dielectric constant of water (about 79 at room temperature).
(d) Plastic insulation, such as that in printed circuit boards, can be degraded (e.g. by way of changes in electrical properties) after absorbing moisture from the air [5]. (Plastics are generally hygroscopic [1].)
(e) Moisture in the air can cause staining and the formation of deposits on optical components, such as lenses [18], and damage to certain hygroscopic infrared optical materials.
(f) Mold or fungal growth may take place, particularly when the environment is humid and warm. This can result in the production of corrosive agents, and may produce a low-resistance path in electrical devices [19]. These organisms can also damage optical components (see page 335). Mold growth starts to become a major problem above a relative-humidity threshold of 75%.
(g) Moisture can trap grit and dust on surfaces that would otherwise escape [19].

In general, the exposure of laboratory equipment to conditions that might lead to condensation should be strictly avoided.
3.4.2.3 Some locations and conditions where moisture might be a problem

Moisture can cause difficulties in the following places:

(a) basements and unheated storage buildings, or other cold areas (high humidity and possible condensation),
(b) poorly ventilated spaces (high humidity and possible condensation),
(c) areas near the sea (moisture in the air consisting of spray laden with salt) [19],
(d) in or near apparatus using water cooling (condensation and dripping),
(e) the vicinity of windows in cold weather (moisture condenses in such places, particularly if the windows are only single glazed [20]),
(f) in or near apparatus containing liquid cryogens, or underneath pipes carrying cold gases from cryostats (condensation and dripping), and
(g) in cooled devices such as certain optical detectors (condensation).

Humidity levels are considerably higher in the summertime than during the winter months, and can also rise significantly during rainstorms. A troublesome problem that may accompany high humidity levels in some places is the presence of salt in the air. Difficulties due to this (such as the corrosion of electrical contacts or damage to optics) can occur in coastal areas. Salt spray can be carried by winds from the ocean up to 15–30 km inland, or possibly even further [21].
3.4.2.4 Avoiding moisture problems

A relative humidity range that is usually comfortable for people and suitable for most types of equipment is 45–65% [22]. For particularly sensitive equipment, such as delicate electronic measurement or computing systems, a range of 45–50% is better, and provides a time buffer in case the environmental control system breaks down. Generally, steel does not rust at relative humidity levels of less than about 60%, but rusts quickly when they are greater than 80% [23]. As a rule of thumb (which can vary depending on the character of the items at risk), one should be concerned about placing many types of laboratory equipment (designed for indoor use) in environments where the relative humidity exceeds about 75% [24].

It turns out that rapid humidity changes are even more important than a high relative humidity in inducing corrosion. If possible, conditions should be arranged so that the RH changes by no more than 5% per hour [19]. Locations near windows can have problems with large humidity swings, with condensation in the vicinity of the window when it is cold outside, and lower moisture levels when the area is warmed up by the sun. This cycling can be daily and seasonal. Using double- or triple-glazed windows can help to reduce such variations [20].

The most common approach for controlling humidity in a general laboratory environment is through the use of air conditioners or (at somewhat lower cost) dehumidifiers. In some cases, such as when high-power laser optics are involved, the maintenance of a low humidity level, and the removal of airborne contaminants associated with high humidity levels, through some form of dehumidification can be essential [25].
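The 5%-per-hour guideline above is easy to check against logged humidity data. A minimal screen (the helper below is our own, not taken from the cited reference) might look like:

```python
def rh_rate_violations(rh_readings, interval_h, max_change_per_h=5.0):
    """Return the indices of RH readings (in percent) whose change from
    the previous reading exceeds the suggested rate limit, given the
    logging interval in hours (default limit: 5% RH per hour)."""
    limit = max_change_per_h * interval_h
    return [i for i in range(1, len(rh_readings))
            if abs(rh_readings[i] - rh_readings[i - 1]) > limit]
```

Applied to hourly readings of 50, 52, 60, 61% RH, only the 52 → 60 step breaks the guideline.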
Note that air-conditioning systems often do not have the ability to control humidity independently of the temperature. This is because average conditions in temperate climates make this unnecessary. Furthermore, in hot and humid climates, straightforward cooling of the air leads to condensation of water at the air conditioner, which can actually raise room humidity levels.

Sensitive items that must be exposed to humid atmospheres should be protected from condensation. One way of doing this is by heating them above the dew point with electric heaters. This is a method used to protect hygroscopic optical elements and electronic devices in damp environments. Special “enclosure heaters” are available commercially that are intended for the prevention of condensation and frost in electronic enclosures. The application of suitable conformal coating or potting materials to the items might also be an effective method of protecting them in some cases (e.g. with electronic circuits). Methods of protecting cooled optical instrumentation from condensation are discussed in Ref. [18].

Condensation can occur when apparatus consuming substantial amounts of power (such as large computing equipment) is switched off and the room temperature drops. Such condensation is most probable in environments that are normally air conditioned, or in lightweight buildings, and can generally be prevented by providing background heaters [10]. Another condensation problem can take place if an air-conditioning system is not operated continuously, possibly for reasons of cost. When the air conditioning is switched off, so that the dehumidification that it normally provides is no longer present, condensation can form on cold surfaces when warm humid air enters the environment.

While a high humidity is normally the most common kind of humidity problem, an excessively low humidity can also be an issue.
Probably the most serious consequence of low humidity is the increased probability of electrostatic discharge (ESD) damage to electronic devices. This is discussed in more detail on page 390. Some plastics can warp under low-humidity conditions [19]. The use of humidifiers may be beneficial when humidity levels are too low. One should not use devices that raise humidity levels by producing a mist, since these produce uneven levels of humidity in a room, and nearby objects will get soaked [20]. Humidifiers that create water vapor by evaporation are preferable.
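Putting the dew-point idea from the definitions above to practical use: a vulnerable surface (or an anti-condensation heater setpoint) should be kept some margin above the ambient dew point. A minimal check, with a hypothetical 2 °C safety margin (the margin value and helper names are our own assumptions):

```python
def condensation_risk(surface_temp_c, ambient_dew_point_c, margin_c=2.0):
    """True if the surface is at or below the dew point plus a safety
    margin, i.e. moisture may condense on it."""
    return surface_temp_c <= ambient_dew_point_c + margin_c

def heater_setpoint_c(ambient_dew_point_c, margin_c=2.0):
    """Minimum temperature to which an anti-condensation (enclosure)
    heater should keep a protected item."""
    return ambient_dew_point_c + margin_c
```

For example, a 15 °C cooling-water line in a room with an 18 °C dew point will sweat, while the same line in air with a 10 °C dew point will stay dry.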
3.5 Problems caused by vibrations

3.5.1 Introduction

Vibrations are an important source of reliability problems. For the present purpose, it is convenient to divide vibration difficulties into two categories: large-amplitude vibration problems, and disturbances to measurements. The first of these often takes the form of damage to equipment, devices, and parts, frequently caused by adjacent vibrating machinery.
Also, because large vibrations are often accompanied by acoustic noise, personal discomfort can be an issue. Large vibrations can, of course, also lead to the second problem.

While the control of vibrations is a firmly established discipline and is amenable to quantitative analysis, the solving of vibration problems by research workers is in practice generally done by using empirical methods. One important reason for this is that the calculation of the vibration behavior of real structures is generally very difficult. Moreover, the information that would be needed to carry out a quantitative analysis, such as the vibrational properties of a sensitive instrument, is often not available. However, if particularly difficult vibration problems should arise, and/or if large capital expenditures are involved, it might be worthwhile to employ the services of a qualified vibration consultant. Such people would typically have an engineering background in structural dynamics, and should have knowledge of the issues associated with very sensitive equipment. For example, they may have experience in solving vibration problems arising in the use of electron microscopes.
3.5.2 Large-amplitude vibration issues

If the amplitude of vibrations is sufficiently great, damage or degradation may be caused owing to, for instance:
(A) fatigue failure in, for example: (a) solder joints, used for vacuum sealing or electrical contact; (b) wires and cables; (c) bellows (e.g. metal bellows in vacuum systems); and (d) electric filaments (e.g. in certain types of vacuum gauge) [1, 26],
(B) (a) chafing of electrical insulation, hoses, and seals; and (b) fretting of electrical contacts and machinery parts,
(C) loosening of parts, such as threaded fasteners, electrical connectors, and vacuum fittings [1],
(D) shifting of parts and structures [19],
(E) dislodgment of foreign particles into filters, pumps, bearings, electronic apparatus [19], and valves,
(F) in stationary (non-rotating) ball bearings: rubbing of the rolling element against the raceway, leading to early failure [27] (see page 225).

The large-amplitude vibrations that cause these problems can be caused by:
(A) nearby large items of machinery, such as pumps and compressors,
(B) fluid or gas flow (e.g. cooling water flowing through bellows),
(C) cooling fans,
(D) electromagnetic forces (e.g. 50 Hz magnetic fields acting on conductors, such as metal bellows).
Fatigue failure is discussed on page 101. Large vibrations are generally harmful to equipment, and should be reduced as much as is reasonably practical. While most well-designed and properly maintained machinery should generate little vibration, a certain amount may be unavoidable. Furthermore, in time, rotating machines that are initially adequately balanced can become unbalanced, owing to the uneven buildup
of dust or rust on rotating components, or because parts have loosened. Worn bearings, misaligned drive shafts or damaged drive belts can also cause excessive vibrations. In such cases, it may be possible to eliminate or reduce the problems just by having the offending items serviced (and especially rebalanced). Moreover, if a machine is vibrating excessively, it is liable to cause further damage to itself. Some machines, such as cylinder-type compressors, are intrinsically prone to generating large vibrations, even when they are in good condition. Most vibration problems in the large-amplitude category are caused by conditions of resonance [26]. That is, the natural frequency of some structure corresponds to the frequency of a forcing vibration. For example, a cantilevered pressure gauge attached to a compressor by a piece of copper tube may vibrate in resonance with the compressor, and cause failure of the tube as a result of fatigue. Large vibrations can frequently be detected just by the noise that they create. (In fact, as indicated, such noise may be a major problem in its own right.) If the vibrations are large enough, it may also be possible to see them. One can often sense even relatively small vibrations by using the fingertips. Resonances are removed by detuning the vibrating item. For instance, this can be done by reducing the stiffness of support members, in order to make the vibrating structure less rigid [28]. In some situations (as in the case of bellows) the vibrating object can be made less flexible, by securing it to a rigid structure. Detuning may also be accomplished by adding or subtracting mass to or from the vibrating object. If stiffening members are to be added, then this must not be done in such a way as to increase cyclic stresses on items that are susceptible to fatigue failure.
For instance, in the aforementioned example involving the cantilevered pressure gauge, it would probably not be a good idea to reduce vibration by anchoring the gauge to a wall, which could merely increase the cyclic stresses in the copper tube and thereby accelerate fatigue. Instead, a better approach in this case might be to provide additional mechanical support between the tube and the compressor. Another method of reducing vibrations is to increase the internal damping of the vibrating item. Several methods of doing this are available. For example, a structure that is bolted together may exhibit considerably greater vibrational energy dissipation than one which is of monolithic, or even welded, construction [26]. This is because of the friction that takes place between surfaces in a bolted structure. An additional approach is to sandwich a viscous or viscoelastic material (such as rubber) between two non-viscous members (such as steel or aluminum plates) that are undergoing vibrational flexure. In this arrangement, vibrational energy is dissipated by the shear deformation of the viscous substance [26]. Special materials that are intended for use as the dissipative medium in such vibration damping arrangements are available commercially. It is also possible to get vibration-damping tiles that are intended to be epoxied onto the surface of a vibrating member. For thin structures, such as metal sheets, coatings are also available for the purpose of providing damping. Keep in mind that certain types of rubber degrade with time, with an accompanying loss of damping properties. More information about vibration damping techniques can be found in Ref. [26].
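As a rough illustration of detuning, a structure modeled as a simple spring–mass oscillator has natural frequency f0 = (1/2π)√(k/m), so adding mass or reducing stiffness lowers the resonance. The following sketch (the stiffness and mass values, and the 60 Hz forcing frequency, are entirely hypothetical) shows how a clamp-on mass might move a resonance away from a nearby forcing vibration:

```python
import math

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    """Natural frequency f0 = (1/(2*pi)) * sqrt(k/m) of a spring-mass oscillator."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

# Hypothetical cantilevered gauge: effective stiffness 4.0e4 N/m, mass 0.25 kg.
# Its resonance (~64 Hz) is uncomfortably close to a 60 Hz forcing vibration.
f0 = natural_frequency_hz(4.0e4, 0.25)

# Detuning by adding a 0.15 kg clamp-on mass lowers the resonance to ~50 Hz,
# well away from the forcing frequency.
f0_detuned = natural_frequency_hz(4.0e4, 0.25 + 0.15)

print(f"{f0:.1f} Hz -> {f0_detuned:.1f} Hz")
```

The same relation shows why stiffening (increasing k) raises the resonant frequency instead; either direction works, provided the shifted resonance no longer coincides with the forcing frequency.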
Metal bellows are often used to join tubes that undergo relative movement (e.g. from a pipe on a vibrating compressor to one mounted on a fixed structure). However, the bellows are particularly susceptible to damage in the form of fatigue failure if they are allowed to resonate. This problem can be greatly reduced by using bellows that are enclosed in metal braid. Such items are available commercially. More information on this topic is provided on page 168. The prevention of large-amplitude vibration problems in cables is discussed on pages 455 and 464. Methods of avoiding the damage to bearings that occurs in the presence of vibration are discussed on pages 225–226.
3.5.3 Interference with measurements

3.5.3.1 Measurement difficulties and sources of vibration

The degradation of measurements by vibrations is a problem that is encountered with, for example:
(a) optical systems (e.g. noise in interferometric measurements, and frequency noise in lasers),
(b) microscopes in general, including electron microscopes, scanning tip-type devices, such as STMs, and optical microscopes (image blurring),
(c) electronic instruments and setups, in the form of noise due to movements of wires in magnetic fields, cable flexure in high impedance circuits, and vibration sensitivity of some electronic components and detectors (these are said to be “microphonic”),
(d) microwave resonators (frequency variations),
(e) ultralow temperature apparatus (heating),
(f) analytical balances, such as semi-micro- and microbalances (inaccurate weight measurements),
(g) very precise positional measurements (destabilization of dimensional chains), and
(h) magnetic susceptibility measurements in magnetic fields (noise).

Many of these vibration problems are of an intermittent nature. The sources of vibrations include, for instance:
(a) mechanical pumps and compressors (especially low-speed cylinder, or reciprocating, types) [9],
(b) building heating, ventilating and air-conditioning (HVAC) systems (especially their fans – very common vibration sources) [9],
(c) lathes, grinding machines, and other workshop equipment [9],
(d) standby generators,
(e) cooling water chillers,
(f) human activities, such as footfalls on non-rigid floors supporting sensitive equipment, and the slamming of doors [9],
(g) activities in loading bays and storerooms (movement of large and heavy objects, such as gas cylinders) [9],
(h) nearby traffic on roads, and particularly highways and busy intersections (in the frequency range 15–70 Hz) [9], or on raised, ground-level or underground railways (in the frequency range 10–40 Hz),
(i) building vibrations due to sonic booms, low-flying aircraft [29], wind, and thunder,
(j) movement of elevators and gantries,
(k) building maintenance work,
(l) earth tremors due to natural causes – for instance stormy weather (random seismic noise) [30], waves breaking (near the coast), lightning strikes, and earthquakes,
(m) earth tremors due to artificial causes – for instance explosions created during construction work and military exercises [29], pile-driving activities [9], new building construction (e.g. heavy earth-moving equipment), and road construction (e.g. soil compactors),
(n) flow of gases or liquids (e.g. cooling water) through pipes, and vibrations from distant sources (e.g. water pumps) carried along pipes [9,31],
(o) heavy power transformers (vibrations at the second harmonic of the mains frequency) [9],
(p) in low-temperature apparatus, the sporadic boiling of cryogenic liquids – especially liquid nitrogen (called “geysering” [32]), operation of 1 K pots, Taconis oscillations, and cracking ice on cryostats, and
(q) sound waves (e.g. from HVAC systems, or footfalls on hard floors) – generally the dominant source of vibrations above 50 Hz [30].
The most serious vibration problems can be caused by large, slow, rotating machinery such as cylinder-type (reciprocating) compressors. The low-frequency (e.g. 5–10 Hz) disturbances produced by these machines can travel great distances. In one reported situation, vibrations were detected by observing waves set up on the surface of liquid mercury, and interference occurred with the operation of an electron microscope, some 400 m away from such a compressor [9]. Low-frequency vibrations in general are much more difficult to deal with than high-frequency ones, since the effectiveness of passive vibration isolators is greatly reduced at low frequencies. Furthermore, vibration isolators that are capable of functioning well at low frequencies tend to be very expensive. Some types of machine are liable to produce vibrations just by the nature of their operation. For example, pumps and compressors that move gases or liquids in pulses, rather than continuously, fall into this category. Building heating, ventilating and air-conditioning (HVAC) systems are another potentially troublesome source of vibrations. The large fans used in these systems are often the main source of the disturbances. Large air-conditioning compressors (especially reciprocating types) can be another major source. Fans in HVAC systems are the most common source of periodic (as opposed to random) floor vibrations in buildings [30]. Unfortunately, HVAC machinery is often located on the roof or an upper-floor of a building (rather than on the ground floor), which generally makes the disturbances much worse than they would otherwise be. The problems can be especially severe when the vibrations produced by the machinery correspond with natural modes of the building. Such resonant behavior is often
noted when the systems are started up or shut down, and vibration frequencies are swept over a range. Rotating workshop machinery is often prone to generating large very low-frequency vibrations [9]. For example, the workpieces that are mounted in lathes are often not symmetric, and hence may not be dynamically balanced. Traffic on nearby roads and railways can be a very problematic source of low-frequency vibrations [9]. The main problems arise because of the movement of heavy objects, such as long-distance trucks, locomotives, and rolling stock. The frequencies of the vibrations generated by these objects are often at the resonant frequencies of surface strata (soil and rock). (Electric railways and the like are also a potential source of electromagnetic interference.) The heavy earth-moving equipment used in new building and road construction work can generate very large low-frequency vibrations. The path by which vibrations enter apparatus is an important consideration when trying to combat them. Entry from the ground by way of structural supports is an obvious and often-considered route. However, one should always keep in mind the possibility that vibrations will travel along other paths, such as service lines for vacuum, cooling water, and gas. For example, the coupling of vacuum-pump vibrations into apparatus by way of interconnecting tubes is fairly common. Electrical leads can propagate vibrations from cooling fans, located in electronic equipment, into vulnerable apparatus. (Vibrations transmitted along pumping and gas-handling lines, as well as perhaps electrical leads, can easily be of greater significance than floor vibrations.) Yet another route of entry is via sound waves impinging on the apparatus enclosure (the external walls of a cryostat, for example). One should not neglect internal sources of vibration, which may in some cases be the dominant ones. 
These include cooling fans, motor-driven devices such as optical choppers, mechanical cryocoolers, the turbulent motion of water through cooling passages, and the boiling of cryogens such as liquid nitrogen and helium.
3.5.3.2 Preventing measurement vibration problems

Selecting a suitable site

In many cases the most serious source of vibrations is ground or floor motion. If in such instances one is in a position to select the location for the susceptible apparatus, the most important measure that one can take is to find a suitable site. This should be as far as possible from major sources of vibration.2 Upper-level floors in buildings are often prone to vibrations, so ground-level or basement sites are usually the best ones.3 If a good ground-level site can be found, it may not be necessary to take any further measures to eliminate ground-borne vibrations. (Or perhaps relatively simple measures may be sufficient.) In many cases one may not have any options about where apparatus is to be located, and so floor vibration isolators of some type may have
2 NB: Vibrations can travel through a building via very circuitous routes, and hence vibration levels may not be a strictly monotonically decreasing function of the distance from the source.
3 That is, those sites where the floor is in direct contact with the underlying ground.
to be used. Nevertheless, it must be emphasized that vibration isolators should generally not be used as a substitute for the selection of a good site. Even passive air-spring isolators can be expensive, must be set up correctly and may need maintenance, whereas there is little that can go wrong with a reinforced concrete floor in a good location. Moreover, if an experimental setup that does not use isolators is found to be inadequate, such devices can probably be added without too much difficulty. On the other hand, if one is hoping from the outset that a set of vibration isolators will solve all problems, it should be kept in mind that even high-performance isolators do not provide perfect isolation. (Vertical and horizontal vibration reductions of about 40 dB and 26 dB respectively at 10 Hz can be expected from a top-quality passive vibration isolator. These decrease to about 28 dB and 17 dB respectively at 5 Hz.) Finding out that one’s isolators are insufficient, and then having to relocate to a better site after a laboratory has been set up, can be very difficult and time consuming (or perhaps not even practical). A good way of finding a suitable location is to carry out a vibration survey with the aid of a “seismic-grade” accelerometer and a spectrum analyzer.4 These items can be rented, if necessary. Other, qualitative, methods of detecting vibration, such as looking at ripples on liquid mercury, have been used in the past. However, these are not very objective, and with mercury there is of course a potential health risk. With an accelerometer and a spectrum analyzer, one also has the option of recording the data for later analysis, and objectively comparing it with other data. The measurement of the vibration velocity is often the best way of quantifying the severity of vibration level, although displacement and acceleration can also be used [33]. 
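The decibel figures quoted above can be related to plain amplitude ratios, and displacement to vibration velocity, using the standard conversions. A minimal sketch (the 1 μm peak displacement in the example is purely illustrative):

```python
import math

def attenuation_ratio(db):
    """Transmitted/input amplitude ratio for an isolation figure given in dB."""
    return 10.0 ** (-db / 20.0)

def rms_velocity(peak_displacement_m, frequency_hz):
    """RMS velocity of a sinusoidal vibration with the given peak displacement."""
    return 2.0 * math.pi * frequency_hz * peak_displacement_m / math.sqrt(2.0)

# 40 dB of vertical isolation at 10 Hz means only 1% of the floor's
# vibration amplitude reaches the apparatus:
print(attenuation_ratio(40.0))       # 0.01

# A 1 micrometre peak floor displacement at 10 Hz corresponds to an
# RMS velocity of about 4.4e-5 m/s:
print(rms_velocity(1.0e-6, 10.0))
```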
Vibration analysis is not a trivial undertaking, and it is easy to get meaningless results if the spectrum analyzer is not set up correctly [34]. Coherent noise (produced by devices such as HVAC fans, compressors, etc.) and random noise (generated by footfalls, wind, etc.) must be analyzed differently. The amplitude spectrum is meaningful in the former case, whereas the amplitude spectral density is relevant in the latter. A spectrum analyzer should be able to determine both these types of spectra. Also, floor vibration spectra tend to be highly non-stationary – containing a large number of spectral components with amplitudes that vary in an erratic way [29]. One useful way of taming the data is to make use of the ability of some spectrum analyzers (and spectral analysis software) to provide “centile spectra” (or “percentile spectra”). The centile spectrum Ln of a given vibration time-series is the envelope of vibration amplitudes (as a function of frequency) that are exceeded for n% of the time. Thus, a spectrum analyzer may provide, for instance: L1, L10, L50 and L90 centile spectra (see Fig. 3.1). The L1 spectrum would represent vibrations that are relatively large, but also rare. The L50 or the L90 spectra are often suitable for making vibration surveys. The analysis of vibrations is discussed in Refs. [29] and [34].

The most problematic areas are likely to be those near heavy, slow-moving machinery, such as large cylinder-type compressors, rotary piston pumps, and machine-shop equipment. Some other areas to avoid are those close to main roads, and near loading bays and
4 Vibration-sensitive instruments of commercial origin, such as electron microscopes, are frequently provided with floor-vibration criteria that must be met. Their manufacturers will often do a survey of a customer’s premises (measuring floor vibration and acoustic levels) prior to installation.
Fig. 3.1 Hypothetical statistical distribution of vibration amplitudes, illustrating the centile spectrum concept: L1, L10, L50 and L90 curves of RMS velocity amplitude (in units of 10^-8 m/s) versus 1/3 octave band center frequency (Hz). Actual vibration data are often not nearly as good as such plots might suggest (see Ref. [29]).
storerooms. A major difficulty here is that if the latter areas produce problematic vibrations, essentially nothing can be done to reduce them at the source. The best locations may well be those that already have research laboratories with vibration-sensitive equipment. This is because the workers in these areas are presumably already able to carry out their sensitive work, and also because they may be able to provide assistance of a political nature in dealing with large vibration sources if they should appear. The possibility of intermittent sources being present in some area (such as large pumps that are used only occasionally) should be considered when searching for a site, since the accelerometer measurements may not uncover these. It may be a good idea to look around a given area for rooms containing large machines. Such devices are sometimes located in very inconspicuous and unlikely places. The spaces above false (or “dropped”) ceilings can sometimes contain vibration-producing items – including elevator motors, and even complete heating, ventilating and air-conditioning (HVAC) systems. It can be very helpful to arrange to run potentially problematic machines when vibration measurements are made. Another potentially useful step is to make vibration measurements at those times of the day when building HVAC equipment is turned on and off, since when such equipment is changing speed, building resonances may be excited. Floors that are above ground level are especially susceptible to vibrations caused by footfalls. In situations where sensitive equipment must be located on these, it is best to choose locations that are distant from heavily traveled parts of the building [35].
Even within a single room, some spots may be preferable to others. This is especially true of above ground-level floors, where parts of the floor near a wall or a support pillar are likely to have smaller vibration levels than those near the center of a room [35]. Floors made of rigid materials, such as reinforced concrete, are superior to those that are more flexible, such as wooden types. Suspended wooden floors in particular can be very problematic [9].
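The centile-spectrum computation described earlier (and illustrated in Fig. 3.1) can be sketched as follows. This is a minimal NumPy illustration, not a substitute for a proper spectrum analyzer: the record is split into windows, an amplitude spectrum is computed for each, and for every frequency bin the level exceeded n% of the time is taken as the Ln value.

```python
import numpy as np

def centile_spectra(signal, fs, window_s=1.0, percentiles=(1, 10, 50, 90)):
    """Ln centile spectra: for each frequency bin, the amplitude that is
    exceeded for n% of the time across successive windows of the record."""
    n = int(window_s * fs)
    n_windows = len(signal) // n
    segments = signal[: n_windows * n].reshape(n_windows, n)
    # One-sided amplitude spectrum of each (Hann-windowed) segment
    amps = np.abs(np.fft.rfft(segments * np.hanning(n), axis=1)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # "Exceeded for n% of the time" corresponds to the (100 - n)th percentile
    return freqs, {f"L{p}": np.percentile(amps, 100 - p, axis=0) for p in percentiles}
```

For instance, `centile_spectra(record, fs=1000.0)` applied to a 60 s record sampled at 1 kHz uses 60 one-second windows, and the resulting L1 curve lies at or above the L90 curve at every frequency, as in Fig. 3.1.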
Isolating sensitive apparatus from floor vibrations

Often it is necessary to isolate the sensitive apparatus from floor vibrations by means of vibration isolation devices. What one normally tries to implement is a mechanical low-pass filter, which prevents ground vibrations from passing up through it to the sensitive apparatus. In essence, this means mounting the apparatus on springs, so that the combination of spring and the mass of the device attached to it acts as a resonant system. Although in principle springs of any kind can be used, pneumatic devices (“air springs”) are preferred over metal or solid rubber ones, since they make it possible to achieve a lower cutoff frequency more easily. Typically, one tries to arrange things so that attenuation of the vibrations begins at frequencies of no more than a few Hz. This is because in general most of the energy of floor vibrations falls in the frequency range from 5 to 30 Hz, and can even be as low as 2–4 Hz [36]. In fact, the use of rubber or composite pads as isolators is generally avoided because the characteristic resonance frequencies obtainable with these are in the range where floor vibrations are highest. Such an approach would lead to the amplification, not reduction, of vibrational disturbances. It is difficult to obtain characteristic frequencies of much lower than about 1 Hz using practical passive vibration isolators of any kind. The sophistication of air springs ranges from simple homemade air-bladder arrangements (using, e.g., tire inner-tubes), through to much more elaborate commercial types intended for the most sensitive applications. The latter devices (called “pneumatic vibration isolators” or “pneumatic isolators”) may incorporate specially designed pneumatic damping arrangements that reduce the amplification of vibrations at the resonant frequency, while keeping this frequency low.
(In the case of simple air springs, one usually relies on the incidental damping provided by material that comprises the device.) A basic pneumatic vibration isolator is shown in Fig. 3.2. Air-bladder isolation systems are inexpensive, and sufficiently effective for many purposes. If inner tubes are used as vibration isolators, their performance may be very poor unless they are set up correctly. A discussion of the best ways of using such items can be found in Ref. [37]. Good commercial pneumatic isolators offer advantages such as: very low cutoff frequencies, good isolation for vertical and horizontal vibrations, and effective damping arrangements. (Horizontal vibrations, which are mainly a problem on elevated floor levels in buildings, are not so effectively dealt with by the simple systems.) The sophisticated commercial pneumatic isolators also offer automatic leveling, in which changes in the mass of the load that would otherwise result in a change in height, are automatically compensated for.
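The behavior of such an isolator can be illustrated with the standard transmissibility formula for a damped spring–mass system, T = sqrt[(1 + (2ζr)^2) / ((1 − r^2)^2 + (2ζr)^2)], where r = f/f0. A sketch (the 1.5 Hz natural frequency and damping ratio ζ = 0.1 are hypothetical but representative values):

```python
import math

def transmissibility(f_hz, f0_hz, zeta):
    """Transmitted/input amplitude ratio of a damped spring-mass isolator."""
    r = f_hz / f0_hz
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

# Hypothetical pneumatic isolator: f0 = 1.5 Hz, damping ratio 0.1.
# Note the amplification at resonance and the strong isolation well above it.
for f in (1.5, 5.0, 10.0, 30.0):
    t = transmissibility(f, 1.5, 0.1)
    print(f"{f:5.1f} Hz: T = {t:.3f} ({20.0 * math.log10(t):+.1f} dB)")
```

This also makes clear why low-frequency sources are so troublesome: below about 1.4 times f0 the isolator transmits (or amplifies) the vibration rather than attenuating it.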
Fig. 3.2 Schematic diagram of a simple pneumatic vibration isolator (see Ref. [36]), showing the isolated equipment support, rolling diaphragm, piston, spring chamber, damping orifice, and damping chamber. Damping is provided by a flow restrictor (the damping orifice), which impedes the flow of air between the spring chamber and the damping chamber. The chambers are typically pressurized to about 10^5–10^6 Pa.
Simple air springs, sometimes called “air mounts,” are also available commercially. These devices are sold as general-purpose isolators, and are not specifically intended for the support of vibration-sensitive apparatus. For instance, they are often used to prevent the transmission of disturbances from vibrating machinery to the floor. However, they may be suitable for the support of sensitive apparatus when modest levels of isolation in the vertical direction (but very little or none in the horizontal) are adequate. A step up in sophistication from pneumatic isolators (which are passive devices) are “active isolators,” which are sometimes called “active vibration isolation systems” or “active vibration cancellation systems.” In these devices, vibrational movements are detected by accelerometers, and electronic circuits use this information to operate actuators in order to oppose the motions. The main advantage of active vibration isolators over passive ones is the ability to provide isolation effectively at very low frequencies that are beneath the range of the passive devices. Against this, one must consider the extra cost of the active isolators, which can be very expensive. Furthermore, in some situations, active isolators can be unreliable when compared with passive ones [29]. Specifically, when they are subjected to vibrations outside of their specified frequency range, active isolators will amplify these [38]. Almost all vibrational problems encountered in practice can be dealt with very effectively with passive isolators. Active devices should be considered only when passive methods for eliminating vibrations have been thoroughly explored. Vibrations can readily travel through a rigid floor from one end of a building to the other, possibly resulting in problems even from relatively distant vibration sources within
the building. Hence, a useful (and often perfectly practical) method of reducing these is to place sensitive apparatus on their own foundation, which is physically separate from that of the rest of the building [39]. (In doing this it is possible to be careless and “short-circuit” the gap between the separate foundations – by, for example, laying a floor over the top of these.5 ) The space between the foundations should be filled with a flexible material, such as rubber. This use of a separate foundation may not be effective, and can even be counterproductive, if the major source of vibrations is external to the building, as in the case of those caused by road traffic.
Isolating vibrations in pumping lines, electrical cables, and pipes

The isolation of vibrations in a small-diameter vacuum pumping line is usually effected by making some of the line from flexible metal tubing (or bellows), which is hung in a long loose loop. For this purpose, rolled bellows (as opposed to edge-welded ones) are normally used. A section of the bellows is anchored to a large mass. The resulting spring–mass combination acts as a mechanical filter, which decouples the vibrations [40]. One way of doing this is to embed a straight section of pumping line (in series with the bellows) in a large piece of cast concrete (an “inertia block”) – see Fig. 3.3. Such a block may either be floated on air springs, in order to isolate it from ground vibrations, or supported directly on the ground. It may, for example, be part of a wall that is positioned between a room containing the pumping equipment and the one holding the sensitive apparatus. Sometimes (when the lowest vibration levels are desired, and the vacuum conditions permit it), a combination of bellows and rubber tubing in series is used. The bellows decouple low-frequency vibrations, and the rubber tubing does so for the high-frequency ones. Such an arrangement is described in Ref. [41]. Another approach is to place a section of the bellows in a large box filled with sand. This method has the advantage that the friction between the grains of sand and the convolutions in the bellows acts to damp vibrations. For this purpose, the sand must be dry. Obtaining such material and keeping it dry can be a problem. The presence of a large quantity of sand in a laboratory may sometimes be objectionable, since the sand can migrate. In the case of very large-diameter pumping lines (much above about 4 cm), the use of long bellows in this way may be impractical, because of forces generated by atmospheric pressure. In such situations, pressure-compensated devices such as “cross-bellows isolators” are sometimes used [40].
So-called “double-gimbal isolators” have been proposed as a more effective alternative to the cross-bellows design [42]. However, the former employ pivots as part of the isolation mechanism. It should be noted that vibration isolation systems in general that make use of bearings involving contact sliding or rolling friction (such as ordinary sleeve bearings, or even pivots or ball bearings) will inevitably have at least a small level of static friction to overcome before they can operate. As a result, for extremely small levels of vibration, isolators that make use of such bearings (such as double-gimbal isolators) may not be very effective [36]. The use of flexural pivots is a possible way of avoiding this problem.
5 Very generally, the inadvertent short-circuiting of vibration-isolation arrangements is a common mistake.
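The spring–mass filter formed by the bellows and the anchoring mass can be sized with the same f0 = (1/2π)√(k/m) relation that governs any resonant isolator. A sketch with hypothetical values (a soft bellows and a heavy concrete block give a sub-hertz corner frequency, above which transmitted vibrations fall off rapidly):

```python
import math

def corner_frequency_hz(bellows_stiffness_n_per_m, anchor_mass_kg):
    """Resonant (corner) frequency of the bellows-plus-anchor-mass filter."""
    return math.sqrt(bellows_stiffness_n_per_m / anchor_mass_kg) / (2.0 * math.pi)

# Hypothetical values: bellows axial stiffness 2.0e3 N/m, 500 kg inertia block.
print(corner_frequency_hz(2.0e3, 500.0))  # ~0.32 Hz
```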
Fig. 3.3 In order to decouple vibrations, pumping and gas-handling lines are passed through a concrete inertia block before entering a cryostat. The pumps are located on the other side of the wall seen in the background. (Courtesy of I. Bradley, Lancaster University.)
The isolation of vibrations in electrical cables can be dealt with in a way that is similar to that employed with small-diameter pumping lines. The cable should be made highly flexible, and clamped firmly to a massive rigid object, such as an optical table. The cable should be secured to the object at two or more separate points [43]. The reason for having more than one fixing point is that a single clamp has limited torsional rigidity about axes perpendicular to that of the cable. Hence, using just a single clamp can allow the cable to vibrate laterally. If a multiconductor cable is involved, it may be desirable to clamp each conductor separately, in order to more completely subdue the vibrations. In particularly troublesome cases, cables can be embedded in sand. Vibrations traveling through liquids (e.g. cooling water) or gases in pipes can be troublesome. These may originate from noisy pumps, or perhaps (in the case of water lines) as a result of water hammer. Commercially available “surge suppressors” or “pulsation dampers” are effective at reducing these [29]. As an alternative, such vibrations can be damped by making part of the line from flexible rubber hose. In some cases, vibrations are caused by the turbulent flow of a liquid (such as cooling water) inside the vulnerable apparatus itself. In such cases, the only thing that can usually be done is to minimize the flow rate.
Controlling vibrations at their source

The judicious selection or modification of equipment and machinery, in order to minimize the vibrations that they create, is a possible way of solving vibration problems. This approach is often the best one, if it is practical to use it. Vibrations from a single source can affect many devices and experiments, and hence controlling vibrations at the source is generally preferable to removing them at the receiving end. However, since noisy items can be far away from the sensitive apparatus, and are very often not under one’s control, such a strategy is impractical in many cases. This situation is similar to that encountered when dealing with a.c. power problems and electromagnetic interference issues. Cases in which it may be very worthwhile to control vibrations at source include those in which heavy machinery such as compressors or pumps are responsible. Keep in mind that troublesome vibrations are often produced because machines have become unbalanced, or otherwise defective, and are in need of servicing (see page 69). When a large machine is to be selected, high-speed devices are preferable to low-speed ones, since high-frequency vibrations are attenuated more easily. For example, rotary vane compressors running at 1500 RPM will produce vibrations at a fundamental frequency of about 100 Hz, since there are four pulses per revolution [9]. Normally it should be fairly easy to isolate vibrations of this frequency. For example, considerable attenuation could be achieved by the installation of air mounts underneath such compressors. On the other hand, cylinder compressors operating at a speed of 300–600 RPM produce vibrations with fundamental frequencies ranging from 5 to 10 Hz, which are much more difficult to isolate. In cases in which vibrations produced by a machine are exciting building structural resonances, it may be possible to greatly reduce the problem by changing the operating speed of the device.
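The frequencies quoted above follow directly from the shaft speed and the number of pressure pulses per revolution; a one-line sketch:

```python
def fundamental_hz(rpm, pulses_per_rev):
    """Fundamental vibration frequency of a machine that produces a fixed
    number of pulses per shaft revolution."""
    return rpm / 60.0 * pulses_per_rev

# Rotary vane compressor at 1500 RPM with four pulses per revolution:
print(fundamental_hz(1500, 4))  # 100.0 Hz
# Slow cylinder compressors (here assumed to give one pulse per revolution):
print(fundamental_hz(300, 1), fundamental_hz(600, 1))  # 5.0 and 10.0 Hz
```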
Large machinery should be located on the ground floor, if possible, and far away from areas with sensitive equipment. Ideally, such machinery should have its own foundations, or even be placed in a separate utility building. Note that ad hoc modifications of the supports of heavy machinery, made without a complete understanding of the vibrational characteristics and stresses in these, can easily make matters worse. The larger vibrations and stresses may also damage the machines. Hence, such changes should normally be carried out by professionals, or under their guidance [9].

In the case of small laboratory machines, which may even be part of the sensitive apparatus, the task of vibration reduction can be made much easier by the appropriate selection of such devices. Vacuum pumps are probably the most common culprits in this category. For example, cryopumps create very high levels of vibration, and should generally be avoided if this is a problem. Turbomolecular pumps (turbopumps) can also be troublesome. The use of magnetically levitated turbopumps, rather than the normal types that employ ball bearings, can considerably reduce both vibration and noise.6 Other methods of reducing turbopump vibrations are discussed in Ref. [44]. Ion pumps produce no vibrations, and are often a good choice for pumping sensitive high- or ultrahigh-vacuum equipment. Vacuum pumps are discussed in Chapter 7.
6 NB: Magnetically levitated turbopumps are relatively expensive.
3.5 Problems caused by vibrations
Special ball-bearing turbomolecular and primary (scroll and rotary-vane) vacuum pumps are available that produce very little vibration [45]. These devices were originally designed for use with electron microscopes. In the case of the turbopumps, the combination of a low-vibration design and a special vibration damper results in unusually low vibration levels, even in comparison with magnetically levitated devices. The electronic drive systems of the scroll and rotary-vane pumps make it possible to vary their rotation speeds, in order to minimize vibration levels and avoid structural resonance frequencies.

Vibrations caused by foot traffic in hallways or rooms outside a laboratory containing sensitive apparatus can be reduced by installing resilient underlay (or “acoustical floor underlayment”) on the floors in these areas [9].
Vibrations in instrument support structures

The occurrence of resonant vibrations in instrument support structures can be a problem during measurements. For optical work, support tables for the optical elements are designed to be very stiff and light, and are usually provided with damping arrangements, in order to reduce such effects. Special steel honeycomb structures are often employed in these to provide the desired stiffness and weight. Such “optical tables” are available from commercial sources. If it is necessary to build one’s own optical table, some designs discussed in Ref. [37] may be useful.

For apparatus that is supported by passive isolators, the damping of internal resonances is especially important. This is because vibrational energy that has entered the apparatus by an acoustic path, or from vibration sources within the apparatus, will otherwise not be able to escape. As a consequence, the apparatus may start to ring at its resonant frequencies. Some aspects of vibration damping are discussed on page 70.
Rigidity and stability of optical mounts

In optical experiments, it is the optical mounts (rather than the table that supports them) that determine the vibration sensitivity of the optical setup [37]. This is generally the case unless the table is insufficiently rigid or only lightly damped. In situations where vibrations are important, every effort must be made to ensure that the mounts are rigid, and that the couplings between the optical table and the mount, and between the mount and the optical component that it supports, are firm and stable. The heights of the optical components and mounts should be as low as possible. High-quality commercial mounts are to be preferred over homemade types. Otherwise, the number of separate parts in the mount should be minimized, and mount designs that maximize rigidity should be chosen. These issues, and others concerning vibration problems in optics, are discussed in Ref. [37]. Devices for adjusting the position of optical components can be a serious source of instabilities – see page 223. (NB: When optical systems are being set up, it is best to avoid placing sensitive components near the table corners, since vibration amplitudes are largest at these places.)
Vibrations caused by sound waves

Acoustic noise can be an important cause of vibrations in some situations, especially at frequencies above about 50 Hz [30]. Such problems can often be dealt with by enclosing sensitive apparatus within an acoustic isolation chamber (or acoustic enclosure). If the sound-attenuation requirements are modest, these are relatively easy to build, using compressed fiber panels [43]. If greater noise reduction is needed, the use of a more complex sandwich structure may be necessary. These comprise an outer skin of a high-density material such as mild steel, an inner one of a sound-absorbing material such as thick cloth or rubber, and an absorbing filler between the outer and inner skins consisting of compressed fiber or polystyrene.

Sound waves can diffract through small holes or slots in an enclosure, so having an airtight seal between the various parts of the structure is highly desirable. At permanent junctions (along edges, etc.), this can be done by caulking small gaps with an acoustical sealant [46]. Two beads of sealant, applied on the inside and outside edges, are normally used. On enclosure doors, such sealing is typically carried out with two well-separated bands of some resilient sealing material, such as rubber weatherstripping (see the discussions in Refs. [43] and [46]). Ready-made acoustic enclosures can be obtained from commercial sources, although they tend to be very expensive.

The installation of an instrument within an acoustic enclosure may not always be practical or convenient. In such cases, it may be necessary to soundproof the entire room. This can be a complicated undertaking, since many paths exist for the sound waves. For instance, heating, ventilating, and air-conditioning (HVAC) systems are often a major source of acoustic disturbances.7 Designing or retrofitting these with sound-reducing devices may be essential for the reduction of noise levels.
For such reasons, the task of soundproofing a room is normally best left to professional noise-control specialists. The subject of acoustical noise control is discussed in detail in Ref. [46].
Other measures

One potentially effective and simple method of avoiding measurement difficulties due to the vibrations of large pieces of machinery, or those caused by human activities, is to arrange for such measurements to be done at night or on the weekend. Similarly, experiments can be scheduled to avoid disturbances due to adverse weather conditions.

For some types of measurements that are susceptible to interference due to vibrations, the problems can often be ameliorated by using some form of cancellation scheme. For example, in the case of magnetic susceptibility measurements in magnetic fields, one can use two adjacent coils that are wound in opposition. One of the coils contains the sample, while the other senses only the background field. With the use of suitable circuits, one can combine the signals from these coils in such a way that spurious signals due to the vibration of the coils in the magnetic field are cancelled out [47]. In a similar way, in optical interferometry, one can force the reference beam to travel the same path as the measuring beam [48]. This arrangement, called a “common path interferometer,” ensures that changes
7 NB: Apparatus can also be disturbed by fluctuating forces generated by air currents from nearby HVAC vents.
in the optical path length caused by vibrations have no effect on the output signal, since the two beams are affected equally.

One of the best references on vibration problems in general is Ref. [26]. Very useful information on sources of vibration and the correct location of vibration-sensitive equipment can be found in Ref. [9]. An extensive discussion of passive vibration isolation is provided in Ref. [29]. Useful information on vibration control can also be found in the catalogues and websites of manufacturers of pneumatic isolators and optical tables.
3.6 Electricity supply problems

3.6.1 Definitions and causes of power disturbances

Anomalies in the mains power, such as blackouts, brownouts, transient overvoltages, and electrical noise, are almost inevitable events on all public electricity supply systems. They pose considerable risks to electronic devices and systems: interference with sensitive measurements, loss of data, temporary or permanent malfunctions of equipment, and the gradual deterioration of electronic components. These anomalies are often associated with adverse weather conditions, in particular lightning storms; but they can also be caused by other random events, such as accidents, animals coming into contact with power-line equipment, flooding of mains electrical equipment, and the failure of power utility equipment. They are also associated with the switching on and off of electrical equipment, and with correlations in the demand for power by electricity users. Some of these power anomalies are defined as follows. (The nomenclature that one generally finds being used for them is not completely consistent.)
Brownouts and sags

Brownouts may be defined as temporary reductions in the line voltage of 5% to 35%, which occur when the electricity supplier is unable to meet the demand for power [49]. They last for periods of minutes or hours. A sag can be defined as a reduction in voltage lasting from half an a.c. cycle to a few seconds [50]. These disturbances are problematic because most electronic equipment is designed to cope with voltage variations of only ±10% [49]. Linear power supplies in particular, which are often used with low-power and/or noise-sensitive equipment, are intolerant of low mains voltages. Switching power supplies, which tend to be used with high-power equipment and computers because of their efficiency, are more tolerant of these anomalies [50]. Very large voltage reductions can damage some equipment, such as induction motors.

Brownouts often occur near 6 p.m. on weekdays in the winter or summer, as people return home from work and switch on appliances such as electric heaters, air conditioners, and ovens. They are also common on particularly cold days, when people turn on electric heaters to supplement the normal heating provided by electricity, gas, or oil [50]. In offices,
laser printers and copiers, with their periodically activated internal heaters, are a common cause of line voltage reductions.
Blackouts and drops

“Blackouts” or “outages” have been defined as the complete absence of useful power. Very short losses of power are called “drops.” These brief outages are very common, and may not even be noticeable (in the form of flickering lights, etc.). Yet they frequently cause lockups of computers and other digital equipment. (See also the discussion of computer-related power quality issues on page 495.) Even very brief blackouts can often cause considerable inconvenience, owing to the time that may be needed to set up apparatus again after the loss of power.

Aside from the obvious deleterious effects of a blackout, there is one that is perhaps not very widely recognized. When power is restored after a blackout, there can be surges, dips, brownouts, and multiple on–off cycles before the supply stabilizes. This can cause unprotected power supplies to fail if equipment is still plugged in and switched on when power returns.
Swells and transients

A “swell” has been defined as an abnormal increase in the RMS line voltage by a factor of at least 1.08, with a duration of at least half a cycle [50]. Note that the term “surge” is sometimes used as a substitute for “swell,” as well as for the events that are defined below as “transient overvoltages.”

“Transient overvoltages” (“transients,” or “surges”) may be defined as increases in the mains voltage in the form of short spikes. Such spikes can exceed the normal mains voltage levels by factors of 10 or more [49]. In some places, these can occur at rates of about 10 000 a year. Spikes of 1 kV to 5 kV occur occasionally on all a.c. power lines [5]. However, there is a great deal of variation in the frequency of occurrence of transients from place to place, so it is hard to give a meaningful value for a “typical” rate. The duration of transients can range from 1 µs (possibly with a rise time of a few nanoseconds) up to about 10 ms, or half of the a.c. power cycle.

The main problem caused by transients (and swells) is the damage that they can do to equipment. Very large transients may result in immediate failure. However, a series of smaller ones can cause incremental and cumulative damage, which will ultimately result in failure [51]. Transients can also cause temporary lockups of computers and other digital devices.

Transients often occur when electrical equipment, such as motors or heaters, is turned on or off [49]. Laser printers and copiers are a common cause of transients [52]. Transients are also troublesome in places such as buildings with triac-controlled lights or heaters, and near elevators [5]. They frequently occur during severe weather conditions, such as lightning storms, high winds, heavy snowfalls, etc. [53]. Lightning strikes in particular can create very large, and potentially devastating, voltage spikes.
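As a rough illustration of these working definitions, a disturbance can be classified from its RMS voltage (expressed as a fraction of nominal) and its duration. The thresholds below are the ones quoted in the text; real power-quality standards such as IEEE 1159 differ in detail, so this is a sketch rather than a reference implementation.

```python
# Rough classifier for mains disturbances, using the working definitions
# given in the text (thresholds illustrative, assuming a 50 Hz supply).

HALF_CYCLE_S = 0.5 / 50.0  # half of one 50 Hz a.c. cycle (10 ms)

def classify(v_fraction: float, duration_s: float) -> str:
    """Classify a disturbance from RMS voltage (fraction of nominal) and duration."""
    if v_fraction == 0.0:
        # Complete absence of useful power
        return "drop" if duration_s < 1.0 else "blackout"
    if v_fraction >= 1.08 and duration_s >= HALF_CYCLE_S:
        return "swell"        # RMS increase of at least 8%, half a cycle or more
    if v_fraction > 1.0 and duration_s <= 0.01:
        return "transient"    # short spike, up to ~10 ms
    if 0.65 <= v_fraction <= 0.95:
        # 5-35% reduction: brownouts last minutes to hours, sags much less
        return "brownout" if duration_s >= 60.0 else "sag"
    return "normal/unclassified"

print(classify(0.8, 1800))   # brownout: 20% down for 30 minutes
print(classify(0.8, 0.1))    # sag: same depth, but only 100 ms
print(classify(1.10, 0.5))   # swell
print(classify(10.0, 1e-4))  # transient: a 10x spike lasting 100 us
```

Note that depth alone does not distinguish a sag from a brownout here; the duration is what separates them, mirroring the definitions above.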
Transients are capable of passing with little hindrance through d.c. power supplies and into the circuits they operate, in spite of the presence of filter capacitors and voltage regulators in the supplies [54]. They can, however, be removed with surge suppressors, as discussed below.
RF electrical noise

Electrical noise takes the form of a non-transient broadband (radio-frequency, or RF) disturbance that rides on the 50 or 60 Hz a.c. waveform. It is frequently caused by sparking or arcing somewhere in the distribution network. More specifically, such disturbances can be created by switching power supplies, arc welders, variable-speed motor drives, and other noisy items connected to the power line. It is also possible for building mains power wiring to act as an antenna and transmission line – picking up radiated RF energy in one area and propagating it to another. Conversely, RF disturbances already present on the mains wiring may be radiated by it. (Radio-frequency interference issues are discussed on page 370.)
3.6.2 Investigating power disturbances

Instruments are available commercially that are specifically designed to analyze mains power anomalies. These devices, called “power-line monitors,” “power quality analyzers,” or “power disturbance analyzers,” are able to detect and characterize disturbances. Usually they have the ability to record the nature of any disturbance, and to apply a “time stamp” to it. This makes it possible to correlate such events with any problems that might be caused by them. A procedure for determining whether equipment is being disturbed by mains power anomalies is discussed in Ref. [55].

Making systematic measurements of power disturbances in the hope of identifying a trouble source is not always the best strategy, however. Mains power anomalies are often intermittent events, and a great deal of effort may be needed to firmly establish whether they are the cause of any difficulties. Hence, in general the easiest and most reliable approach is to take all reasonable precautions if there is any suspicion that mains power anomalies are causing problems, even in the absence of hard evidence for such a conclusion. This would involve a blanket installation of surge suppressors, a.c. line filters, and/or uninterruptible power supplies in affected apparatus, as discussed below. Such an approach does not have to be expensive, and the use of these devices (especially the first two) can be considered good standard practice in any event.

Although small irregularities in the a.c. mains voltage are sometimes blamed for noise and drift in experimental measurements, it is doubtful whether this is often the case. The d.c. power supplies in experimental instruments have filter capacitors and voltage regulators, which (if the supplies are of high quality, and are working properly) should deal with these. Very large swings in the mains voltage (greater than roughly ±10%, depending on the duration), radio-frequency noise, blackouts, etc.
are another matter.
3.6.3 Measures for preventing a.c. power problems

3.6.3.1 Surge suppression

A variety of devices have been devised to cope with shortcomings in the electricity supply. The most important of these, and also one of the least expensive, is the “transient voltage surge suppressor” (TVSS), which is sometimes just called a “surge suppressor” or “transient suppressor.” As the name implies, these devices are used to keep surge voltages at safe levels, by diverting surge currents from the live conductor to ground. The suppression is normally accomplished by components called “metal oxide varistors,” or “MOVs.” Special extension cords and power strips can be obtained with these devices built in. Such units are also often supplied with filters to remove RF electrical noise.

Since repeated surges will eventually destroy a TVSS, high-quality surge suppression units come with some indication, such as a special lamp or an audible alarm, that the suppressor is no longer effective. The best ones cut off the power just before the TVSS fails [53]. Surge suppressors can be rated at energy absorption levels of thousands of joules. A suitable one for use with a small computer should be able to absorb at least 750 J. The clamping response time of the TVSS should be 1 ns or less [53].

It is recommended that all systems be protected by surge suppressors [50]. Mains-powered devices that are connected together should all be protected by surge suppressors, since transients can travel from one device to another along the interconnecting cable [50]. For example, a computer and a measuring instrument that are connected by an interface bus should both be protected. The best way of removing transients caused by the switching of reactive loads such as motors, solenoids, and fluorescent lamps is undoubtedly to place surge suppressors at or near the sources, so that the transients are not allowed to pollute the mains [50].
However, in a university laboratory environment, where coordination between workers in separate areas tends to be fairly loose, it is generally not a good idea to count on such measures. The most effective strategy is always to install suppressors near the susceptible equipment, which is normally under the user’s control. This will also take care of transients that are not caused by the switching of reactive loads. In order to minimize the effects on sensitive equipment of transients caused by high-current devices (e.g. pumps, compressors, air conditioners, laser printers, and copiers), these two types of equipment should preferably be installed on separate electricity supply circuits [53]. The general issue of protecting electronic devices from overvoltages is discussed on page 394.
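To get a feel for the joule ratings mentioned above, one can estimate the energy a suppressor absorbs during a single spike. The sketch below uses a crude rectangular-pulse approximation (E = V × I × t); the clamp voltage, surge current, and duration are purely illustrative values, not figures for any particular device.

```python
def surge_energy_j(v_clamp: float, i_peak: float, duration_s: float) -> float:
    """Crude upper-bound estimate of energy absorbed by an MOV during a surge
    (rectangular-pulse approximation: E = V * I * t)."""
    return v_clamp * i_peak * duration_s

# A 500 V clamp carrying 1000 A for 100 us absorbs roughly 50 J:
e = surge_energy_j(500.0, 1000.0, 100e-6)
print(round(e, 3))          # 50.0 J

# Against a 750 J rating, surges of this size add up quickly, which is
# why MOV-based suppressors degrade cumulatively and must be replaced:
print(round(750.0 / e, 1))  # about 15 such hits would nominally exhaust it
```

Real surge waveforms (e.g. the standard 8/20 µs test pulse) are not rectangular, so this overestimates the energy somewhat; the point is simply that a modest number of large spikes can consume a suppressor's rating.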
3.6.3.2 Reduction of RF electrical noise

The removal of radio-frequency electrical noise from the power line can be accomplished by a.c. line filters (or power-line filters). These devices are often built into the power supplies of electronic equipment that either creates such noise or is sensitive to it. As mentioned above, stand-alone a.c. line filters can be obtained commercially as packaged units that are
incorporated (usually along with surge suppressors) into power strips and extension cords. (In using these arrangements, one should ensure that the input cable is well separated from the output one, in order to minimize the capacitive coupling of RF energy between them. Because of this possibility, a.c. line filters that are fixed in the wall of the shielded enclosure of an electronic device can be more effective than these stand-alone units.) The a.c. line filters should be able to deal with both differential-mode (DM) and common-mode (CM) interference.

These a.c. line filters are not expensive, and can be considered practically essential in most electronic equipment. Particular attention should be given to ensuring that devices which contain switching power supplies are provided with filters. Strategies for using such filters are discussed in Ref. [55]. Sometimes, when very low-level voltages are being measured, batteries are used to isolate the sensitive equipment (e.g. preamplifiers) from residual (perhaps indirectly coupled) RF and 50–60 Hz mains interference.
3.6.3.3 Line-voltage conditioners

The reduction of effects caused by changes in line voltage, such as brownouts, sags, and swells, can be achieved by using a line-voltage conditioner. Frequently, such devices accomplish this by automatically changing taps on a transformer. This can be done either by using mechanical relays, or (with greater reliability) by employing solid-state devices. A further advantage of line-voltage conditioners is their ability (because of the transformer, and when properly designed) to attenuate differential-mode noise below 0.5 MHz to a much greater degree than is possible using the inductor–capacitor filter modules normally employed for the removal of line noise [50]. Line-voltage conditioners also provide isolation between the input and output terminals that, in combination with the capacitance of the load, reduces common-mode transients. They often also have some sort of built-in transient overvoltage suppression, and additional differential- and common-mode noise filtering.

If one is using devices that are sensitive to incorrect line voltages, such as linear power supplies, the use of line-voltage conditioners may be beneficial. However, it can often be argued that as long as the effort is being made to buy and install such a device, it may be better to just get a suitable uninterruptible power supply (see below).
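The tap-changing scheme described above can be sketched in a few lines. The nominal voltage and the set of tap ratios below are illustrative assumptions (not figures from any particular product): the conditioner measures the incoming RMS voltage and selects the tap whose ratio brings the output closest to nominal.

```python
# Tap-changing line-voltage conditioner (illustrative sketch).

NOMINAL_V = 230.0
TAP_RATIOS = (0.87, 0.93, 1.00, 1.07, 1.15)  # hypothetical tap set

def select_tap(v_in: float) -> float:
    """Return the tap ratio that minimizes the deviation from nominal."""
    return min(TAP_RATIOS, key=lambda r: abs(v_in * r - NOMINAL_V))

print(select_tap(230.0))  # 1.0  - normal line voltage passes straight through
print(select_tap(210.0))  # 1.07 - a ~9% sag is boosted to about 225 V
```

Because the correction is quantized to the available taps, the output is only held within a band around nominal rather than regulated exactly; this is one reason linear supplies with tight input requirements may still prefer an on-line UPS.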
3.6.3.4 Uninterruptible power supplies

The most capable device for mitigating the effects of mains power anomalies is the “uninterruptible power supply” or “UPS.” Uninterruptible power supplies have the ability to supply a.c. power even in the event of a blackout. They are also capable of dealing with brownouts, swells, transients, and electrical noise. UPSs are generally stand-alone, self-contained devices that do not replace internal equipment power supplies, but complement them. Three kinds are commonly available.
Passive-standby UPSs

One of these, the “passive-standby” or “off-line” type, provides mains power essentially directly from the a.c. line when the latter is working properly. (That is, the output of the device is connected to the input.) However, when an outage is sensed, the device takes d.c. power from batteries, and creates a.c. power with the correct mains frequency and voltage level (using an inverter). After the mains power is lost, there is an interval of a few milliseconds during which no power is provided, while an internal relay switches the load over to the battery-operated power source. (Hence, strictly speaking, passive-standby UPSs are not really uninterruptible power supplies.) When mains power is present, the UPS recharges its battery.

This temporary loss of power during switchover is not normally an issue if the device being run by the UPS contains a high-quality power supply (which, in the case of computer supplies, may be able to function in the absence of input power for 20 ms or more at nominal mains voltage [13]). However, in other situations it can be a problem. Inexpensive power supplies (such as “a.c. adapters” or “power bricks”) may not have enough inertia to carry them through even very brief losses of power.

The better types of passive-standby UPS cope with brownouts and sags with the aid of a kind of built-in line-voltage conditioner. These devices, which are called “line-boost” UPSs, compensate for reductions in line voltage by changing between taps on a transformer. Lower-cost passive-standby devices, on the other hand, deal with such events by switching over to battery mode. This results in greater use, and hence more rapid deterioration, of the battery [13]. As discussed below, battery failure is the most common problem with uninterruptible power supplies.
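Whether a given piece of equipment will ride through the switchover gap can be checked by comparing the power supply's hold-up time against the UPS transfer time. The helper and the safety margin below are illustrative assumptions; the actual figures should come from the relevant datasheets.

```python
def rides_through(holdup_ms: float, transfer_ms: float,
                  margin: float = 2.0) -> bool:
    """True if the power supply's hold-up time covers the UPS transfer gap
    with a safety margin (hold-up shrinks at low line voltage and high load)."""
    return holdup_ms >= margin * transfer_ms

# A good computer supply (~20 ms hold-up) vs. a 4 ms relay transfer:
print(rides_through(20.0, 4.0))  # True - should ride through comfortably
# A cheap "power brick" with very little stored energy:
print(rides_through(2.0, 4.0))   # False - likely to drop out at switchover
```

A margin of 2 is chosen here because quoted hold-up times typically assume nominal input voltage, and an outage is often preceded by a sag that partially discharges the supply's filter capacitors.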
Line-interactive types

Another, more advanced, type of UPS, called a “line-interactive UPS,” essentially combines the features of a passive-standby UPS with those of a sophisticated line-voltage conditioner. When the mains voltage drops, the inverter (using battery power) is employed to boost the voltage. During a rise in mains voltage, the inverter is used to buck it. In this way, a single device is able to cope with brownouts, sags, and swells, as well as outages. Moreover, unlike the passive-standby UPS, the line-interactive type does not exhibit an interruption in the supply of power when an outage starts. Also, because the inverter interactively shares the load with the electricity mains at all times, the power supplied by a line-interactive UPS is relatively clean and steady compared to that provided by passive-standby devices.

One potential source of confusion is that line-boost passive-standby UPSs are sometimes referred to by their manufacturers as “line-interactive” ones [13]. However, the line-interactive devices described above (also known as “single-conversion online UPSs”) use a different and more effective technique to control the line voltage.
Double-conversion types

A third type of UPS, which is even better (with regard to providing reliable high-quality power) than the previous kinds, is called a “double-conversion UPS,” “true UPS,”
or “on-line UPS.” It takes a.c. power from the mains supply, converts it to d.c., and uses this to charge the battery. An inverter continuously converts power from the battery back to 50–60 Hz a.c., which supplies the load. Unlike the passive-standby UPS (but like the line-interactive type), there is no interruption in the provision of power when the mains power is lost.

In this double-conversion scheme, the load is effectively isolated from the mains supply. Since the conversion of a.c. power to d.c. allows the power to be very effectively filtered of disturbances, and since the inverter produces an a.c. waveform from scratch, independently of the incoming a.c. mains waveform, on-line UPS devices are able to supply almost perfect a.c. power on a continuous basis. Because the inverter must operate continuously to supply power (which may be between 200 and 800 W for a small UPS) with high reliability, a double-conversion UPS is more expensive than a passive-standby or line-interactive version of comparable power capacity. A disadvantage of on-line uninterruptible power supplies is that the battery is always in use, and hence typically requires more frequent replacement than is the case with passive-standby devices [51].
Selection and use of UPSs

The better varieties of UPS use inverters that produce a low-distortion sinusoidal output voltage waveform, whereas some cheaper kinds generate square waves. Others generate a sawtooth waveform, or some other crude approximation to a sine wave. Mains waveforms with a high harmonic content, of which a square wave is an extreme example, can be harmful to many types of equipment. For example, computer power supplies may eventually be damaged if they are operated from square-wave power for long periods [13]. Other types of electronic equipment are rapidly damaged if they are run from such power. Hence, one should try to get a UPS with an output voltage that most closely approximates a sine wave. All types of UPS provide the benefit, when battery power is being used, of providing a relatively clean (spike-free) constant-voltage waveform to the load. Inverters in the better uninterruptible power supplies generally create 50–60 Hz sinusoidal waveforms by producing a series of pulses of variable width, at frequencies of several tens of kilohertz, which are then filtered to produce a smooth sine wave. If such inverters are not sufficiently filtered, they can actually be sources of high-frequency power-line electrical noise [55].

Typically, uninterruptible power supplies can provide power from the battery for about 10 minutes. Longer run times are possible with the addition of extra batteries. However, the great majority of outages last less than about one second, and hence even UPSs with small batteries can be useful [13].

The most common type of reliability problem with uninterruptible power supplies is battery failure, and the batteries must generally be replaced every few years. In some locations, where power failure is commonplace and outages are long (so that the battery is often deeply discharged), it may be necessary to replace the battery every year, or possibly even more frequently [51].
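The 10-minute figure quoted above can be related to battery capacity with a back-of-the-envelope estimate. The derating factors below (inverter efficiency and usable depth of discharge) are illustrative assumptions; real runtime also falls off sharply at high load and as the battery ages.

```python
def runtime_minutes(battery_wh: float, load_w: float,
                    inverter_efficiency: float = 0.85,
                    usable_fraction: float = 0.8) -> float:
    """Rough backup time in minutes for a UPS battery feeding a given load,
    derated for inverter losses and usable depth of discharge."""
    return battery_wh * usable_fraction * inverter_efficiency / load_w * 60.0

# Small UPS with a 12 V, 7 Ah sealed lead-acid battery (84 Wh) and a 300 W load:
print(round(runtime_minutes(84.0, 300.0), 1))  # about 11.4 minutes
```

The result is consistent with the typical 10-minute figure for small units, and it makes clear why a UPS alone is a poor solution for long outages: an hour of backup at this load would need roughly six times the battery capacity.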
Keeping the temperature of their environment as close as possible to normal room temperature (22 °C) is very important in maximizing battery lifespan. When buying a UPS, it is very important to select one that allows the battery to be replaced by the user. (Inexpensive devices often have to be sent back to the factory if the
battery fails.) The desirability of having a UPS with user-replaceable batteries is hard to exaggerate [51]. A very useful feature that is provided by some UPSs is the ability to test the battery automatically. This makes it possible to have confidence that the battery is in good condition (i.e. not worn out) without having to make a special effort to do a manual test.

The question of whether to buy a passive-standby, line-interactive, or double-conversion UPS usually hinges on the issue of cost. The quality of the power provided by the double-conversion types is generally better than that produced by the line-interactive ones, which in turn is superior to that of the passive-standby devices. One pays a significantly higher price for improved performance. Furthermore, batteries in the more sophisticated units must be replaced more frequently. Passive-standby UPSs (especially line-boost types) are adequate for many applications.

Although UPSs generally contain an internal surge suppressor, it is best to supplement this with an external stand-alone device, placed between the UPS and the wall receptacle [13]. This external surge suppressor provides additional protection for the (comparatively expensive) UPS and the equipment connected to it. Furthermore, an external surge suppressor can be replaced with little cost and effort if it should become damaged.
3.6.3.5 Standby generators

Because of the high cost of storing large amounts of energy in a battery, uninterruptible power supplies are not, by themselves, appropriate for supplying large amounts of power for long periods in the event of a blackout. If power comparable to the maximum capacity of the UPS is to be supplied to loads for more than 30 minutes, a better solution is to use a combination of a UPS and an engine-generator set. The UPS is still needed because the generator will take some time to start up and stabilize after the start of a blackout.

In the presence of varying loads, the voltage and frequency of the output power produced by generators can vary. If a varying a.c. voltage is a problem, then a normal off-line UPS will not be satisfactory, and a line-interactive or an on-line UPS will be needed. If a varying frequency is problematic, then an on-line UPS will be the only appropriate kind. If a UPS is to be used in conjunction with a standby generator, one should ensure that it is suited for this purpose.

Standby generators are often powered by diesel engines, which have possible problems associated with pollution, maintenance, and fuel storage. Diesel fuel has a limited shelf life, and consequently storage tanks have to be drained and refilled every 6–12 months [56]. However, clean-burning and relatively low-maintenance sets running from the natural gas supply are also available. (Natural gas can be much more reliable than the mains electricity supply, and gas can be stored on-site if necessary.)

An in-depth discussion of many issues related to power quality is provided in Ref. [50]. Useful reviews of uninterruptible power supplies and their selection (particularly if they are intended for use with computers) can be found in Refs. [13] and [51]. See also the discussion on page 495.
3.7 Damage and deterioration caused by transport

3.7.1 Common difficulties

Gross damage or even destruction of laboratory instruments and other equipment during transport (shipping, etc.) is not unusual. Some of the more subtle forms of harm caused by transport are:

(a) loss of calibration in measuring devices (e.g. electronic instruments and sensors) due to shocks, vibrations, or temperature cycling,
(b) degradation of optical devices (e.g. misalignment, due to shocks or vibrations, or contamination of optical surfaces),
(c) loosening of parts (such as vacuum seals, screws and other threaded fasteners, and electrical connectors inside equipment) by shocks or vibrations,
(d) degradation of electrical connectors (e.g. deformation of protruding panel-mounted connectors on electronic instruments due to impacts, or deterioration of contacts due to corrosion, contamination, or vibration),
(e) fatigue failure of wires and solder joints (e.g. crack formation and permanent or intermittent open circuits) due to vibrations,
(f) degradation of mechanical devices due to corrosion (e.g. rusting), or subtle mechanical damage (e.g. brinelling of bearings, caused by impact),
(g) bending of electrodes in vacuum-electronic devices (e.g. vacuum tubes) due to shocks,
(h) leaks in apparatus (including especially vacuum devices) utilizing seals (e.g. leaks at joints caused by vibration-induced fatigue damage or corrosion, or because of damage to unprotected sealing surfaces), and
(i) leaks in metal bellows (due to vibration-induced fatigue, or corrosion).

If rodents are able to enter a transport container, they are liable to chew on electrical insulation, rubber hoses, and the like, or cause harm in other ways.

A very serious problem can arise if water is left in the cooling channels of water-cooled equipment during transport. If the water is allowed to freeze, it will expand and may damage or destroy the equipment.
This risk can be eliminated by blowing any water out of the cooling passages with compressed air prior to transport. Another cause of damage that is reported from time to time is the examination of unusual, delicate, and puzzling items, such as diffraction gratings, by customs inspectors.
3.7.2 Conditions encountered during transport

The level of severity of mechanical abuse during transport varies with the mode of transport, as well, of course, as with the degree of protection provided by the packaging. During truck and rail transport, vibration levels tend to be highest between about 3 Hz and 100 Hz, with typical values being roughly 0.5 g for the former and 0.2 g for the latter [57]. Vibration levels in these cases drop off very rapidly above 100 Hz. The range from 3 Hz to 30 Hz
Basic issues concerning hardware systems
corresponds roughly to the resonance frequencies of many kinds of packaging [58]. Aircraft vibrations may be at levels of roughly 1 g in the range from 100 Hz to 2000 Hz, and tend to diminish significantly below about 100 Hz [57]. The potential for damage from these is lower than that from truck or railcar vibrations. Ship vibrations can be at levels of roughly 0.1 g in the range from 1 Hz to 100 Hz. Shock acceleration extremes have been measured at 10 g on trucks and 300 g on railroad cars [19]. The very large (horizontal) shocks observed in the last example take place during railcar coupling. These probably represent the most destructive impacts that can happen during transportation which are not the result of handling [58].

Generally, the greatest hazard faced by packages during transport is rough treatment in mechanical handling systems, or during manual handling [58]. Shocks resulting from large drops or lateral collisions are often the main problem. In comparison, for instance, the most severe shocks encountered during transportation by road correspond to drops from a height of only about 15 cm [57]. Nevertheless, repetitive shocks during transport, or bouncing of the package within the vehicle, can still be very damaging. About 5% of all small parcel shipments (<68 kg) are subjected to at least one impact that is equivalent to a drop height of greater than 76 cm [59]. For uncushioned items inside cardboard containers, a drop from this height would result in an acceleration of about 100 g. Items that are sent as passenger-plane freight receive considerably more rough handling than those sent in containers on freighter-planes [58].

Other physical conditions that may cause problems during transport include extremes of temperature and humidity, and the reduction of air pressure. In the contiguous USA, it is possible that temperatures may vary from –51 °C to 60 °C, depending on the place and time of year [58].
(The latter temperature is the upper limit of what might be expected in a carrier vehicle or a warehouse.) For worldwide transport, extreme values of –62 °C to 71 °C are not out of the question. In controlled environments, such as large air-conditioned cargo aircraft, temperature ranges of 20 °C to 23 °C could be expected [59]. Note that plastic packaging materials, as well as the items being transported, may be affected by temperature extremes.

Humidity levels in the general transport environment may reach 100% [59]. High humidity and swings in temperature can lead to condensation. Paper-based packaging materials may be weakened by high humidity levels [58].

On small non-pressurized feeder aircraft traveling to remote areas, pressures corresponding to an altitude of 6100 m could be reached, whereas on pressurized cargo jets, an equivalent altitude of 2400 m can be expected [59]. For road transport, pressures corresponding to altitudes of about 3700 m would represent an extreme case.

The most attractive feature of shipping items by sea is the low costs involved, especially when large objects are being transported long distances. However, the large concentrations of airborne salt spray that are encountered in the shipboard environment can be very damaging to many types of equipment. Hence, it is usually much safer to use airfreight for this purpose [60].

Static compression of transport packages resulting from stacking these is yet another possible source of damage. This is particularly relevant for items in cardboard packages, as opposed to those in wooden or metal ones [58].
Information on the conditions commonly encountered during transport (particularly shock and vibration data) is provided in Ref. [57].
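The connection between drop height and peak acceleration quoted above can be checked with a simple constant-deceleration model. This is an idealization (real impacts are closer to half-sine pulses), intended only to make the arithmetic explicit:

```python
def peak_deceleration_g(drop_height_m: float, stop_distance_m: float) -> float:
    """Peak deceleration, in units of g, for a drop from height h that is
    arrested at constant deceleration over a stopping distance d.

    From v**2 = 2*g*h and a = v**2 / (2*d), the ratio a/g reduces to h/d.
    """
    return drop_height_m / stop_distance_m

# The figure of about 100 g for a 76 cm drop of an uncushioned item in a
# cardboard box corresponds to a stopping distance of under a centimetre:
print(peak_deceleration_g(0.76, 0.0076))  # → 100.0
```

This makes clear why even modest drop heights are so damaging to rigid, uncushioned items: the stopping distance is tiny, so the deceleration is enormous.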
3.7.3 Packaging for transport

It has been found that labels such as “fragile” and “handle with care” have only a minor effect on the way that packages are treated [57]. Similarly, and in particular for small-parcel shipments, labels indicating a preferred parcel orientation are not always effective [59]. This implies that one must rely solely on the design of the package⁸ and the choice of the mode of transport to ensure the safety of a packaged item. It does not mean, however, that one should neglect the proper labeling of packages. For instance, the presence of such labels may be necessary for insurance purposes.

Usually the process of packaging an item for transport is based on empirical considerations, with little guidance from engineering methods making use of calculations. While engineering treatments of packaging design are available (see Refs. [61] and [62]), they tend to require information that is not normally available to those who are performing one-off packaging tasks. Such information might include maximum acceptable acceleration levels of the item being packaged, vibrational resonance frequencies and damping characteristics of the item, and quantitative values for the disturbances that the item will be exposed to during transport. An engineering procedure might indicate that, if the first three of the above quantities cannot be calculated from first principles, then they should be determined on the basis of tests. Again, this is not usually practical for one-off packaging tasks that might be encountered in a laboratory.

Sometimes, equipment manufacturers can provide information on maximum allowable shock and vibration levels. For example, typical values for large computer systems are as follows [10].

(1) Maximum shock: 5 g for 10 ms, with 5 s between shocks, to 30 g for 11 ms, in the form of a half sine wave from three axes.
(2) Maximum vibration: 38 µm maximum displacement at 10–31 Hz, and 0.15 g at 31–100 Hz; or: 508 µm maximum displacement at 5–10 Hz, and 0.25 g at 10–100 Hz; or: 305 µm peak-to-peak displacement at 10–55 Hz, and 1 g at 44 Hz.

In the absence of such information, one possible way forward would be to guess what the values might be, on the basis of the values for similar items of equipment, and to allow suitable margins of safety. The amount and type of cushioning material used to protect the item can then be determined without too much difficulty.

One very important aspect of packaging for transport is to ensure that there can be no relative motion between different sections of the packaged item. This is done by blocking and bracing protruding and moving parts, making sure that the loads are distributed so that stress concentrations are minimized. Particular attention should be paid to the protection of vulnerable areas, such as vacuum joints, for example.

(Footnote 8: This includes cushioning materials, moisture barriers, and the exterior “transport container.”)

The shafts of large rotating machines
should be blocked so that these cannot vibrate and cause damage to the bearings. Items should be secured within their transport container (using padding, etc.) so that they are not free to move around inside it. A useful package design involves “floating” an interior container containing the supported item, within an exterior container that is separated from the interior one by the cushioning material. This arrangement has the advantage of simplifying any shock and vibration calculations. A typical configuration might involve the use of relatively rigid plastic foam in the interior container, which is cut to fit and support the item. The space between the two containers is partially filled with softer foam, which serves to isolate the inner container from shocks and vibrations. An example of this approach is provided in Ref. [61]. For high-value items, costing more than a few thousand dollars, transport containers should be rigid wood or metal, not cardboard. Transport containers are sometimes punctured by, for instance, other containers (perhaps during mechanized sorting operations), or by forks on lift trucks [58]. The strength of the container walls should reflect this possibility. In the case of large and heavy items, the maintenance of a particular orientation during transport can be made more certain by mounting the item on a base that has been provided with skids. This arrangement permits the use of a forklift during handling, and ensures that the package will always be oriented with the base down [61]. If the item being transported is of commercial origin, the manufacturer should be able to supply information on appropriate packaging. If it is necessary to return the item to the manufacturer, it may or may not be possible to reuse the original packaging, depending on its condition, since packaging is generally intended to be used only once, not repeatedly. 
Furthermore, most boxes for small items of equipment are intended to be grouped in clusters on a pallet, not sent through the parcel delivery network individually. In the case of electronic equipment, an appropriate way of reusing the original packaging is by double boxing it. The original package is used as the inner container in a “floating” arrangement, as discussed above. It may be possible to obtain new packaging materials from the manufacturer of an item, at little or no cost.

One should be wary of the use of loose-fill materials, such as expanded polystyrene peanuts or crumpled paper, as cushioning materials. These substances can shift and settle during transportation, and allow flat, narrow, or dense items to migrate within a package, thereby exposing them to possible damage. Another potential problem with polystyrene peanuts is that they can present a serious risk of electrostatic discharge (ESD) damage to vulnerable electronic components [63]. Sometimes optical equipment is degraded by fragments and dust that are released by cushioning materials during handling [64]. The jostling that accompanies transport causes these particles to work their way into remote areas inside the equipment. These may cause damage in some cases, and significant effort may also be required to remove the contamination.

The websites of small-parcel carriers (such as the United Parcel Service of America, UPS [65]) can provide useful information on packaging methods.

In general, the greater the dimensions of a package are, the lower is the height that it is likely to be dropped from [58]. Increasing the weight of the package also reduces the likely height of any drops. (The latter point is not so important if packages are handled by mechanized sorting systems, as when they are sent via small-parcel carriers.) If manual
handling is involved, the provision of handholds on a package will also reduce the probable height of any drops. The placement of a sensitive item within a completely sealed moisture barrier is usually very important. Bags used for this purpose are often made of a single layer of polyethylene or nylon. For extra moisture resistance, laminates involving multiple layers of plastic and metal (“moisture barrier bags”) are frequently employed. The sealing of such bags is generally most effectively carried out using electrical heat-sealing devices. The inclusion of some desiccating material (such as silica gel) within the barrier will also be necessary. However, if desiccants are placed in environments that are not completely sealed (so that moisture can get in from the outside), they are likely to quickly become saturated, and will actually turn into sources of moisture [58]. Desiccants in general can be problematic sources of dust [18].
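When a maximum allowable acceleration (fragility) is known, or guessed with a margin of safety as suggested above, a minimum cushion deflection follows from a simple energy argument. The sketch below assumes an ideal linear (undamped-spring) cushion; real designs should use the foam manufacturer's cushion curves:

```python
def min_cushion_deflection_m(drop_height_m: float, fragility_g: float) -> float:
    """Minimum deflection d needed so that an ideal linear cushion keeps
    the peak acceleration below fragility_g for a drop from drop_height_m.

    Energy balance m*g*h = 0.5*k*d**2, with peak acceleration a_max = k*d/m,
    gives d = 2*h/G (G = fragility in units of g).
    """
    return 2.0 * drop_height_m / fragility_g

# e.g. an instrument with an assumed 30 g fragility, protected against a
# 76 cm drop, needs roughly 5 cm of working deflection:
print(round(min_cushion_deflection_m(0.76, 30.0), 3))  # → 0.051
```

Since real foams bottom out well before being fully compressed, the actual cushion thickness must be several times the computed working deflection.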
3.7.4 Specialist companies for packaging and transporting delicate equipment

For the packaging of very expensive or delicate equipment, it is often worthwhile to use the services of firms that specialize in industrial or military packaging. These companies have experience in the processing of such equipment, and may even have government training and certification for the packing of delicate hardware. In designing packages for delicate apparatus, they are usually able to estimate the maximum levels of shock and vibration that such equipment can tolerate, based on experience with similar items. The cost of hiring these firms need not be excessive. If equipment must be transported repeatedly, another option is to have “shipping cases” made that are designed for this purpose.

Similarly, the transport of costly and delicate equipment may be done most effectively by specialist “high-value equipment” movers. These sometimes exist as subdivisions of general-purpose moving firms. Such companies routinely move very expensive and delicate apparatus, such as large computers and medical instruments (e.g. CT scanners). In order to do this with minimum risk they can make use of specialized equipment, such as climate-controlled vehicles with air suspensions. Most importantly, the handling and transport is carried out by trained personnel.
3.7.5 Insurance

The arrangement of adequate insurance cover for the transportation of equipment is a very important, but often neglected, issue. The receipt of an insurance payout in the case of damage or loss is clearly a fallback position – one would really prefer that the apparatus arrive in good condition. Nevertheless, at least in the case of high-value items that could not easily be replaced using normal funds, obtaining proper insurance is essential. While the freight carrier often provides some form of basic insurance for items that it transports, it is usually a good idea to obtain separate insurance cover from a third-party insurer. This makes it possible to ensure that the coverage is relevant to a particular situation, and that the limits of liability are sufficient. (For instance, the basic insurance provided by the freight carrier may not adequately cover delicate items, such as objects made of glass or ceramic.) It may also be worthwhile to obtain insurance with an “all risks” clause, especially if for
some reason the item must be sent via sea freight. For the purpose of making claims in the event of damage, photographs should be taken of equipment prior to packaging.
3.7.6 Inspection of received items

For the purpose of making successful insurance claims in the event of damage during transport, it is normally essential to inspect transported items immediately after delivery. If any damage is found, the freight carrier should be notified without delay.
3.7.7 Local transport of delicate items

Delicate apparatus, such as computer equipment, electronic instruments, leak detectors, etc., is often moved within a building on wheeled carts. This can involve exposing the apparatus to potentially harmful shocks as the carts are rolled over steps or gaps in the floor. A solution to this problem may involve fitting the carts with special “shock-absorbing casters,” which are available commercially.
3.8 Some contaminants in the laboratory

3.8.1 Corrosive atmospheres in chemical laboratories

Owing to the reactivity of the vapors released by many substances used and stored in chemical laboratories (e.g. volatile mineral acids), their atmospheres tend to be corrosive. It can be hard to find commercial equipment that will not deteriorate under such conditions [12,66]. Also, the organic vapors present in these environments can form surface films on the exposed components of optical instruments, owing to photopolymerization by ultraviolet light (see page 323). Similar conditions can be found in other areas, such as the “chemical rooms” used in the etching, chemical polishing, etc., of material samples.
3.8.2 Oil and water in compressed air supplies

Laboratories are often supplied with air from a compressor, which is used to actuate devices such as valves, and to allow debris (such as swarf in a workshop) to be blown from objects. However, the as-supplied compressed air usually contains oil, water, and dirt that may contaminate and damage pneumatic devices and sensitive items. The water generally forms as a result of condensation – i.e. it is originally present as moisture within the air itself.

Normally, it is a good idea to pass compressed air through suitable filters before using it. General-purpose commercial filtration units, designed for removing liquid and solid contaminants from a compressed air supply at a coarse level, are suitable for many purposes. These can, if necessary, be followed in the air line by more specialized types that are intended to remove small quantities of specific contaminants. It is also possible to obtain oil-less air compressors, which will eliminate problems due to this form of contamination.
For contamination-sensitive applications, such as the cleaning of optics, it is essential to avoid using ordinary compressed air. An alternative clean source of gas, such as dry nitrogen from a gas cylinder, should be employed instead. Even with such a source, one should normally use a filter to completely remove any particulates.
3.8.3 Silicones

Substances based on “polyorganosiloxanes,” usually referred to as “silicones,” are very commonplace, and can pose a serious surface contamination risk. Products containing silicone oil, such as silicone diffusion pump fluid and silicone “high-vacuum grease,” are often found in the laboratory. Silicones are also employed in some lubricating aerosol sprays, certain heat sink compounds used in electronic devices, bonding agents for high-temperature adhesive tapes, and other substances used for technical work. They are commonly found in cosmetics, hand creams, antiperspirants, hair-care products, and some eyeglass cleaning tissues [67]. Silicones are also applied as mould release agents in the manufacture of plastic items, such as storage containers. One sometimes finds them in household cleaning products as well. Silicone rubbers (including those prepared and cured by the user) are generally not nearly as troublesome as silicones taking the form of liquids or gels.

Because of their inertness, silicones are very useful for many purposes. However, they are problematic in a number of ways. Silicones will wet almost all materials. Moreover, owing to their inertness, silicones can be very difficult to remove from surfaces with most organic or aqueous solvents. Furthermore, they tend to be very prone to creep (migration across surfaces). It is also easy for silicone contamination to be spread from one place to another on a worker’s hands, and perhaps retransferred via work surfaces, door handles, tools, etc. The use of aerosol sprays containing silicones can be a serious menace, since the mist produced by them can drift around a room and land in unexpected and sensitive places. Some silicone products have sufficiently high vapor pressures that evaporation, and subsequent condensation of the silicone vapor in another location, is a significant contamination route in certain cases. Even trace amounts of silicone can create problems in many situations.
With regard to causing troublesome surface contamination, a little silicone goes a long way. Silicone diffusion pump oils have a strong tendency to migrate into vacuum chambers, which makes them unsuitable for vacuum systems that must be kept very clean [68]. If a surface that is being coated in a vacuum system is contaminated by silicone diffusion pump oil, it may be impossible to get coatings to adhere to it afterwards. Silicones in general (including in particular silicone vacuum grease) can cause serious and often unrecoverable contamination problems in vacuum systems for which cleanliness is very important.

Silicones can also interfere with soldering operations [69,70]. For instance, the soldering of electrical leads that have been contaminated results in poor-quality joints. Silicones can also prevent adhesion between contaminated surfaces by glues or epoxies. Painting or otherwise coating surfaces that have silicone contamination can be difficult or impossible.

The spoiling of glass by silicones is also a problem. For example, it is known that silicone oil adheres to glass [71]. The result may be that, for example, affected areas of laboratory
glassware (such as glass dewars) can no longer be reworked, and optical elements may become permanently contaminated.

Silicones can degrade in the presence of sparks produced at electrical contacts (in such things as switches, relays, and motors with commutator brushes) to form an insulating deposit that can cause electrical noise, contact failure, and abrasion of contacts [72]. In vacuum systems, in the presence of an electron beam, silicones can degrade and contaminate surfaces that should be conductive by forming insulating deposits of silica [73]. (For example, this can happen to the electrodes of ionization gauges.) Also, silicone grease and oil attack silicone rubber [74]. Silicone oils and greases are generally not even particularly good lubricants. (An exception to this is the lubrication of dynamic O-ring seals, where silicones are superior to hydrocarbon lubricants [73].) Contamination by silicones is a general and very serious problem in the industrial world.

Alternative electric contact lubricants, diffusion pump fluids, and vacuum greases, such as types based on perfluoropolyether (PFPE), are available if one requires: a high degree of inertness, high-temperature stability, and no tendency to form insulating deposits in the presence of electric discharges or electron beams. However, perfluoropolyethers, and particularly greases containing these, can also be difficult to remove from surfaces. Like silicones, PFPEs are also more prone to creep than hydrocarbon-based substances. If one does not require a high level of inertness and high-temperature stability, hydrocarbon-based oils and greases may be preferable, since they are generally much easier to remove than either silicones or perfluoropolyethers, have a much lower tendency to migrate, and (if properly selected) have no tendency to form insulating deposits.
Silicones are such a problem in industry that one often finds “silicone-free” substitutes for the standard products that normally contain silicone (one example is heat sink compound).

The best way of reducing the possibility of silicone contamination is to prevent silicone-based substances from coming near sensitive surfaces. It is a good idea to use gloves when handling silicones (especially silicone grease) and their containers. If silicone does get on the hands, it may be necessary to use one of the slightly abrasive heavy-duty industrial hand cleaners to remove it. Aerosol sprays containing silicone (such as lubricants) should not be used in rooms where contamination could be a problem. In some areas, it may be desirable to ban the use or presence of silicones and silicone-containing products entirely.

If contamination has occurred, it may be necessary to use mechanical methods, at least in part, to remove it. This is especially true in the case of silicone grease, which is thickened with a filler of highly inert fumed silica. Regarding the use of chemicals, it has been found that most silicones are soluble in heptane, hexane, and toluene, but cannot be removed with isopropyl alcohol, unless the contaminated item is immersed in a bath of it for periods of up to 20 minutes, or at a slightly elevated temperature [75]. Volatile methyl siloxanes (VMS), and especially linear methyl siloxanes, have been used successfully to remove silicones in those cases where only mild solvents can be used, because of concern about possible damage to surfaces caused by more aggressive types [67]. Special commercial cleaning products intended for removing silicone residues are available. Cleaning agents for the removal of perfluoropolyether-based oils and greases are also made. More information about the removal of silicone contamination can be found in Ref. [76].
Table 3.1 The galvanic series (from Ref. [4]). The anodic end of the series is most susceptible to corrosion, while the cathodic end is least susceptible. The “passive” nickel and stainless steel (items 20 and 21) have an oxide surface layer produced by immersion in a strongly oxidizing acidic solution.

Anodic end
Group I: 1. Magnesium
Group II: 2. Zinc; 3. Galvanized steel; 4. Aluminum 2S; 5. Cadmium; 6. Aluminum 17ST
Group III: 7. Steel; 8. Iron; 9. Stainless steel (active); 10. Tin–lead solder; 11. Lead; 12. Tin
Group IV: 13. Nickel (active); 14. Brass; 15. Copper; 16. Bronze; 17. Copper–nickel alloy; 18. Monel; 19. Silver solder; 20. Nickel (passive); 21. Stainless steel (passive)
Group V: 22. Silver; 23. Graphite; 24. Gold; 25. Platinum
Cathodic end
Fig. 3.4 Galvanic corrosion produced when two dissimilar metals are in contact, with moisture on the surface of both. [Figure: two joined dissimilar metals, labeled “Anode” and “Cathode,” with “Moisture” on the surfaces and “Corrosion” occurring at the anode.]
3.9 Galvanic and electrolytic corrosion

When two dissimilar metals are placed in contact in the presence of impure water, a chemical wet cell is produced which leads to a type of dissolution called “galvanic corrosion” [23]. The rate of this corrosion depends on the moisture content of the environment and the relationship of the two metals in the “galvanic series” (see Table 3.1). The farther apart the metals are in this series, the faster will be the ion transfer in the wet cell, and therefore the corrosion rate. It is the more anodic of the two materials that will corrode (see Fig. 3.4). A particularly severe, but common, problem can arise owing to the combination of aluminum and copper. In such a case, it is the aluminum that corrodes.

In experimental work in physics, this type of corrosion probably assumes its greatest importance in water-cooling systems and water-cooled equipment. Here, one has a number
of different metals that could be in intentional contact in the presence of water (in the pipes, valves, and fittings in the water flow path), or in unintentional contact in its presence (because of leaks or condensation onto external metal surfaces). This issue is discussed in more detail in Section 8.4. Galvanic corrosion can also be an important problem for electrical connectors and contacts in damp environments.

An important measure to prevent galvanic corrosion is to eliminate moisture. This may be done by dehumidifying the environment, by heating the materials, or by covering them with a non-conducting moisture-resistive coating. When this is impractical, one may try to choose metal combinations that are as close together as possible in the galvanic series. Normally, one tries to use metals from the same group in the series.

The use of a plating material to provide protection is yet another method for preventing corrosion. In the case of the aluminum–copper combination discussed previously, coating the copper with tin–lead solder slows down the corrosion considerably, since such solder is closer to aluminum in the series. Yet another method is to attach a “sacrificial anode” to the more anodic material. This anode is a piece of metal (often made of zinc or magnesium) that is even closer to the anodic end of the series than the object being protected. The sacrificial anode, which must also be in contact with the water, is destroyed by corrosion eventually, and must be periodically replaced. A further method is to prevent electrical contact between the two metals by placing an insulator between them. Finally, when one has control over the nature of the water, such as in cooling water systems, galvanic corrosion can be reduced by increasing its purity.

Another, similar, type of corrosion process is called “electrolytic corrosion” [4].
This occurs when a direct current is passed between two metals (which may be identical) with an electrolyte such as impure water between them. The rate of corrosion depends on the conductivity of the electrolyte and the size of the current. This type of corrosion can be problematic in cooling water systems used for electrical apparatus, and in electric circuits in damp environments. Measures to eliminate this effect involve removing the water, coating the problematic items with a non-conducting material, or, in the case of cooling water systems, decreasing the conductivity of the water by purifying it.
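The rule of thumb that metals from the same group in the galvanic series can be paired safely, while widely separated groups are risky, can be encoded as a simple lookup. The group boundaries below follow the five-group arrangement of Table 3.1; the qualitative risk labels are an illustrative simplification, not a quantitative corrosion model:

```python
# Group numbers for the galvanic series of Table 3.1 (anodic = group 1).
GALVANIC_GROUP = {
    "magnesium": 1,
    "zinc": 2, "galvanized steel": 2, "aluminum 2S": 2, "cadmium": 2,
    "aluminum 17ST": 2,
    "steel": 3, "iron": 3, "stainless steel (active)": 3,
    "tin-lead solder": 3, "lead": 3, "tin": 3,
    "nickel (active)": 4, "brass": 4, "copper": 4, "bronze": 4,
    "copper-nickel alloy": 4, "monel": 4, "silver solder": 4,
    "nickel (passive)": 4, "stainless steel (passive)": 4,
    "silver": 5, "graphite": 5, "gold": 5, "platinum": 5,
}

def galvanic_risk(metal_a: str, metal_b: str) -> str:
    """Rough qualitative galvanic-corrosion risk for a metal pair, based
    on how many groups apart the two metals lie in the series."""
    gap = abs(GALVANIC_GROUP[metal_a] - GALVANIC_GROUP[metal_b])
    if gap == 0:
        return "low"
    if gap == 1:
        return "moderate"
    return "high"

# The aluminum-copper combination singled out in the text spans two
# groups, so it is flagged as high risk (the more anodic aluminum corrodes):
print(galvanic_risk("aluminum 2S", "copper"))  # → high
```

Such a lookup could be used, for example, to screen the metal pairings in a planned water-cooling circuit before construction.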
3.10 Enhanced forms of materials degradation related to corrosion

Under certain conditions, the rate of corrosion produced by a chemical agent may be greatly increased over what would normally be the case. In other circumstances, corrosion may combine with other forms of material degradation to intensify these, so that the combined effects are greater than would be the case if each one acted independently.

For example, if a conducting aqueous medium (e.g. impure water) is trapped in a crevice formed by two metal surfaces in close proximity, or between metal and non-metal surfaces, a situation can be set up in which corrosion takes place much more readily than if the medium were out in the open. Examples of such crevices include spaces between flanges, threaded joints, the space underneath washers, etc. [23]. This effect is called “crevice corrosion.” It can be a source of trouble in water-cooling systems (see page 270).
Corrosion can also combine with other forms of material degradation to increase their severity. For instance, "stress corrosion cracking" is an effect whereby mechanical stress on a piece of material and the action of corrosive agents on it combine to cause the formation of a crack, which may then propagate through the material [23]. Another effect, called "corrosion fatigue," is a related phenomenon whereby the failure of materials by the repeated application of stresses (i.e. "fatigue failure") occurs sooner (after a smaller number of stress cycles) and at lower stresses than would otherwise be the case, again because of the presence of corrosion. Stress corrosion cracking and corrosion fatigue can be a problem for stainless-steel objects (e.g. bellows) in vacuum systems. Even seemingly innocuous chemicals, such as chlorinated solvents and chlorine-containing cleaning agents, can cause difficulties.
3.11 Fatigue of materials

3.11.1 Introduction

When metals are subjected to cyclic or fluctuating tensile stresses,9 they are often subject to a form of degradation known as "fatigue." This can ultimately lead to fracture or rupture of the material. The source of tensile stress in a material could be a purely tensile load, or it could result from bending forces. The important parameter in the fatigue process is not the total time under stress, but the total number of times the stress is changed within a given range.

The maximum tensile stress that will lead to fracture after a given number of applications of this stress is always less (and often substantially less) than the static yield strength or ultimate tensile strength of the material. This characteristic of fatigue is often a source of confusion following a failure, since objects can fracture when subjected to changing stresses that could easily be withstood if they were purely static. For instance, parts that are subjected to a static load may fail because there is a small superimposed time-dependent load (perhaps caused by vibration) that has gone unnoticed. A cantilevered copper tube, connected to a pressure gauge at one end and pipes leading to a vibrating compressor at the other, can fail in this way.

Other materials besides metals can also undergo fatigue – plastics and (rarely) ceramics are but two examples. However, the rest of this section is primarily of relevance to metals.
3.11.2 Prevalence and examples of fatigue

Fatigue is a very common cause of failure in the world at large. In general engineering practice, it is responsible for about 50% of all failures [77]. In physics laboratories, items that can undergo fatigue failure include, for example:

9 Stress is measured as force per unit area.
Basic issues concerning hardware systems
102
(a) solder joints used in electrical connections (very common),
(b) wires and cables (also common),
(c) metal bellows (e.g. in vacuum systems),
(d) vacuum joints (especially in cryogenic equipment),
(e) electric filaments (e.g. in vacuum gauges),
(f) diaphragms in diaphragm pumps,
(g) some types of rupture disc (used for pressure relief),
(h) mechanisms that contain springs.
Soft solder joints in many applications (e.g. electrical connections, vacuum seals, cooling water joints) are particularly vulnerable to fatigue. A special problem with solder joints is that their fatigue properties degrade over the years, especially at elevated temperatures. Items that are subjected to repeated flexure during manual handling, such as wires, cables, and flexible metal hose (and particularly their terminations), are prone to fatigue failure. Thermal expansion and contraction due to temperature cycling can cause fatigue failure in cryogenic devices and equipment. For example, solder joints used as vacuum seals are vulnerable to such faults, as are the transfer lines used for moving cryogenic liquids. Failure can also occur owing to changing pressures in gases or liquids. For example, pressure pulses in gas supplied by a compressor can cause fatigue failure of the sensing element in a nearby pressure gauge [78].
3.11.3 Characteristics and causes

Fatigue is a cumulative effect. The degradation begins with the formation of a small crack, usually at the surface of the material, which continues to grow with repeated applications of the stress (subject to certain conditions) until the material eventually fractures or ruptures. Fatigue failure is a probabilistic, not a deterministic, phenomenon. There is no way of predicting the exact number of stress cycles needed to cause failure. Hence, fatigue is a relatively unpredictable mode of failure, unlike that which takes place in tough materials under a sufficiently high static load (at the yield strength or the ultimate tensile strength).

In many steels, and some titanium-based alloys, the maximum stress versus the number of stress cycles has the form shown in the upper curve in Fig. 3.5. It can be seen that if the stress is lowered sufficiently, a point is reached at which the number of stress cycles that can be withstood diverges – in principle, becoming infinite. This stress is known as the "endurance limit" or "fatigue limit." In the majority of other metals, including aluminum and copper alloys, the fatigue curve has the form illustrated in the lower curve in Fig. 3.5. In such cases, the material will always fail eventually, no matter how low the applied stress is.

In most real-world situations, interest in the fatigue properties of materials concentrates on situations involving very large numbers of stress cycles – usually greater than about 10^4, and often up to 10^7 or more. Such large numbers of stress cycles are encountered in machinery, and where vibrations (often produced by machinery) are present. Fatigue that
[Figure: log–log plot of stress (arbitrary units) versus cycles to failure (10^3 to 10^9). The upper curve, labeled "Some Fe- and Ti-based alloys," flattens out at the fatigue limit; the lower curve, labeled "Other alloys (e.g. Al, Cu)," continues to fall.]

Fig. 3.5 Qualitative behavior of stress versus number of cycles to failure for two different classes of metals – those that have a fatigue limit (upper curve), and those that will fail eventually for a sufficient number of cycles no matter how low the stress (lower curve).
takes place under such large numbers of stress cycles is called "high-cycle fatigue." Most of the information on the fatigue properties of materials, and most discussions of fatigue, refer (often implicitly) to high-cycle fatigue. In this regime, fatigue life is generally highest in metals that have limited ductility. (The ductility generally must not be too small, however, since some toughness is usually also needed.) Examples of such metals are maraging steels and beryllium copper.

However, there are many situations in which materials are subjected to varying stresses that are sufficiently large to cause plastic deformation of metals. Under such conditions, reaching the numbers of stress cycles achievable in high-cycle fatigue situations (which involve almost entirely elastic deformations) is out of the question. Fatigue occurring in this regime, which extends up to roughly 10^4 stress cycles, is called "low-cycle fatigue." Large movements leading to inelastic deformation are often encountered in situations in which temperature variations induce dimensional changes in materials. For example, they occur in solder joints in high-power electronic devices, and in vacuum joints in cryogenic equipment. During handling, flexible items such as metal hose are also routinely subjected to the repetitive application of stresses sufficient to cause plastic deformation.

Under conditions of low-cycle fatigue, maximum strain10 is a more predictable parameter than maximum stress. It is found that under such conditions, the greatest fatigue life (maximum number of cycles at a given strain) is achieved by using materials that are highly ductile. Examples of these are OFHC copper, and carbon and carbon–molybdenum steels.
10 Strain = ΔL/L, where ΔL is the change in dimension of the object and L is the original dimension.
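The high- and low-cycle regimes described above are often modeled quantitatively by the standard Basquin and Coffin–Manson relations of fatigue engineering (these names and the parameter values below are introduced here for illustration, not taken from this chapter). The sketch shows how steeply high-cycle life depends on stress amplitude.

```python
# Illustrative fatigue-life estimates. The coefficients and exponents below are
# typical orders of magnitude only, not data for any particular alloy.

def basquin_life(stress_amplitude, sigma_f=900.0, b=-0.09):
    """High-cycle fatigue: sigma_a = sigma_f * (2N)^b, solved for N (cycles).
    stress_amplitude and sigma_f are in the same units (e.g. MPa)."""
    return 0.5 * (stress_amplitude / sigma_f) ** (1.0 / b)

def coffin_manson_life(plastic_strain_amplitude, eps_f=0.5, c=-0.6):
    """Low-cycle fatigue: (plastic strain amplitude) = eps_f * (2N)^c,
    solved for N (cycles)."""
    return 0.5 * (plastic_strain_amplitude / eps_f) ** (1.0 / c)

# Halving the stress amplitude multiplies the high-cycle life by 2^(1/0.09),
# i.e. a factor of order 10^3 -- which illustrates why a small superimposed
# vibratory stress, easily overlooked, matters so much.
print(basquin_life(300.0))
print(basquin_life(150.0) / basquin_life(300.0))
```

The extreme sensitivity of life to stress amplitude is also why fatigue failure appears so unpredictable in practice: small, unnoticed variations in load translate into orders-of-magnitude variations in life.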
3.11.4 Preventive measures

A number of approaches can be used to maximize the fatigue life of an item subjected to varying stresses and strains. These include the following.

(a) If possible, eliminate sources of stress concentration in the design of an item, such as notches, grooves, or discontinuities in section. Fabrication defects, such as bad welds and poorly radiused inside corners, also often act as stress concentration sites, as can damage (e.g. nicks or isolated scratches) caused by sloppy practices during use. In practice, most fatigue failures are directly the result of the presence of such stress concentrators [79].
(b) Avoid subjecting parts and equipment to large-amplitude vibrations. In practice this usually means avoiding conditions of mechanical resonance (see page 70).
(c) Avoid overbending items that are susceptible to fatigue due to handling (e.g. flexible metal tubing, and electrical cables). Also consider installing strain-relief devices at the terminations of such items.
(d) Minimize temperature changes, nonlinear temperature gradients, and rates of temperature change in situations in which thermal expansion and contraction are a potential source of low-cycle fatigue (e.g. in cryogenic apparatus). Avoid introducing stresses into fatigue-susceptible items: allow thermal expansion and contraction to occur freely, without restrictions produced by external constraints [77]. In structures containing multiple connected parts, match thermal expansion coefficients in order to reduce stresses caused by relative differences in thermal expansion or contraction.
(e) Employ flexural elements to separate members undergoing relative motion that may lead to high cyclic stresses. For example, copper pipes carrying high-pressure gas from compressors resting on antivibration mounts (used to prevent transmission of vibrations to the floor) are liable to undergo high-cycle fatigue failure. It is recommended that flexible high-pressure hoses designed for this purpose be used instead [78].
3.12 Damage caused by ultrasound

The use of ultrasonic cleaning to remove contamination from small items is commonplace in the laboratory. However, the method poses risks to delicate objects that are perhaps not very well known. The ultrasonic vibrations used in this method (with frequencies of a few tens of kilohertz) can set up resonant vibrations in small parts, which can then fail by fatigue [69]. Examples of this include electronic components (especially delicate ones such as diodes, transistors, and integrated circuits) [22], small metal bellows [76], and the filaments of ionization vacuum gauges [80]. Optical assemblies might also be placed at risk by this process.
Ultrasonic cleaning achieves its result by the formation and violent collapse of small bubbles in the cleaning liquid, in a process referred to as "cavitation." This can be an aggressive process, which may result in a form of surface damage known as "cavitation erosion" or "cavitation burn." Objects that are at risk from this effect include those with polished surfaces, and particularly items made of soft metals such as copper or aluminum. Soft surface coatings, such as paint, can be removed from objects by cavitation.

While the use of ultrasonic cleaning is perfectly acceptable for many items, caution should be exercised when it is being considered for use on small brittle parts, those with very delicate structures (such as lead wires inside integrated circuits), or parts with delicate surfaces (such as certain optical components). One technical standard, concerning the fabrication of electronic equipment for space flight, recommends that ultrasonic cleaning should not be used on electronic parts generally, and requires that it not be used on electronic assemblies containing delicate components [22]. In those areas of research that involve the study of material samples, such samples may also in some cases be at risk of damage from this process.

Variations on the usual ultrasonic cleaning technique have been developed in order to address the above problems. One example is an arrangement in which the frequency of the ultrasound is swept back and forth in order to avoid exciting resonances. Another design (called "megasonic cleaning") avoids problems associated with resonances and cavitation by operating at an extremely high frequency. Neither of these methods is likely to be employed in the ultrasonic cleaners ordinarily found in the laboratory, however. See Ref. [76] for more information on them.

Ultrasonic vibrations are sometimes used to facilitate the soldering of certain materials, and damage can also result from this process (see page 426).
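As a rough illustration of the resonance risk to small parts, the following sketch estimates whether a small cantilevered wire (such as a gauge filament) has a fundamental bending resonance inside the working band of a typical cleaner. The clamped-free Euler–Bernoulli beam model and the nominal tungsten material constants are assumptions made for this example, not figures from this chapter.

```python
import math

# Rough check (assumption: ideal clamped-free Euler-Bernoulli beam) of whether
# a small cantilevered round wire resonates within the band of a typical
# ultrasonic cleaner. Material figures below are nominal values for tungsten.

def cantilever_fundamental_hz(length_m, diameter_m, youngs_modulus_pa, density_kg_m3):
    """Fundamental bending frequency of a clamped-free round wire:
    f1 = (beta1*L)^2 / (2*pi*L^2) * sqrt(E*I / (rho*A)),
    with (beta1*L) = 1.875 and, for a circular section, sqrt(I/A) = d/4."""
    beta1_l_sq = 1.875 ** 2
    return (beta1_l_sq / (2.0 * math.pi * length_m ** 2)) \
        * (diameter_m / 4.0) * math.sqrt(youngs_modulus_pa / density_kg_m3)

E_TUNGSTEN = 411e9      # Pa (nominal)
RHO_TUNGSTEN = 19300.0  # kg/m^3 (nominal)

# A 1.5 mm long, 0.1 mm diameter tungsten cantilever:
f1 = cantilever_fundamental_hz(1.5e-3, 0.1e-3, E_TUNGSTEN, RHO_TUNGSTEN)
in_cleaner_band = 20e3 <= f1 <= 40e3   # typical cleaner frequencies
print(f"f1 = {f1 / 1e3:.1f} kHz, within cleaner band: {in_cleaner_band}")
```

Filament dimensions of this order give resonances in the tens of kilohertz, which is why such parts are vulnerable; real filaments have supports and geometries that shift the frequency, so this is only an order-of-magnitude screen.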
Summary of some important points

3.2 Stress derating

(a) Derating (i.e. providing safety margins) is a way of improving the reliability of devices exposed to stresses (i.e. voltage, electric current, temperature, pressure, etc.). This is done by reducing stress levels in the items of concern to some fraction of their maximum rated values.
(b) The use of wide safety margins is particularly important in laboratory work, because of the time and effort involved in making repairs, and the labor-intensive nature of research.
(c) Derating factors can be interlinked. For instance, derated power levels are dependent on the operating temperature of a device.
(d) Some devices can malfunction, or may even be damaged, as a result of excessive derating. It is necessary to ensure that a parameter to be derated (e.g. voltage) is kept within the correct range for a particular device.
3.3 Intermittent failures

(a) Intermittent problems are common, and can be extremely difficult to solve.
(b) In electronic devices, the most frequent causes of intermittent failures are poor electrical contacts (in solder joints, connectors, switches, etc.), and open circuits in damaged or improperly made conductors (wires, cables, etc.).
(c) Conducted or radiated electromagnetic interference (including ground-loop effects) is another potential source of intermittent trouble in electronics.
(d) With potential faults of this kind, the most useful approach is to take all reasonable precautions to prevent their occurrence, rather than assuming that they can be identified and corrected after they appear.
(e) Solving very difficult intermittent problems is often more straightforward if the apparatus is modular. If it is possible to localize the cause of the intermittent fault to a particular module, the fault may be corrected by exchanging the module for a working one.
3.4.1 Excessive laboratory temperatures and the cooling of equipment

(a) High room temperatures can cause a multitude of problems, including measurement errors in temperature-sensitive instruments, and malfunction or failure of equipment.
(b) Solar heating is probably the most common cause of this difficulty. It can be reduced, at relatively low cost, by carefully choosing the site of a laboratory, avoiding rooms with windows that face the sun (especially if temperature regulation is important), and by fitting windows with external blinds.
(c) Air conditioning has several important benefits, in addition to cooling. These include reducing humidity and dust levels in a room, and (in the case of the more modern devices) providing heating in cold-weather conditions.
(d) Some elementary, but sometimes forgotten, measures for preventing the overheating of individual items of equipment are: ensuring that side cooling vents are not blocked, not stacking objects on top of vented chassis, keeping air filters clean, and regularly checking for the proper operation of cooling fans.
(e) Inside equipment, buildups of dust on electronic components, power supply grills, heat sinks, and cooling fans can also cause overheating, and hence (for this and other reasons) periodic cleaning may be desirable.
3.4.2 Moisture

(a) High humidity levels and (especially) condensation can cause a number of serious reliability problems.
(b) These include chemical corrosion (e.g. rusting of bearings), galvanic corrosion (e.g. in electric contacts involving different metals), current leakage in very high-impedance circuits, corona and arcing in high-voltage circuits, and staining and the formation of deposits on optical components.
(c) Generally, the exposure of laboratory equipment to conditions that might lead to condensation should be strictly avoided.
(d) Laboratory equipment should generally not be exposed to relative humidity levels of more than about 75%, and preferably no more than 65%.
(e) With regard to corrosion problems, rapid changes in humidity are especially harmful.
3.5 Problems caused by vibration

3.5.2 Large-amplitude vibration issues

(a) Large vibrations are capable of damaging or degrading laboratory equipment and components owing to fatigue failure, chafing and fretting, loosening of parts, etc.
(b) Such vibrations can be caused by large items of machinery (e.g. compressors and pumps), fluid or gas flow, cooling fans, and electromagnetic forces.
(c) Generally, large-amplitude vibrations are harmful to equipment, and should be reduced as much as is reasonably practical.
(d) Usually, large-amplitude vibration problems are caused by conditions of resonance.
(e) Vibrations in this category can be subdued by detuning the vibrating item, so that its resonant frequency no longer corresponds to the driving frequency.
(f) It is also possible to reduce vibrations by increasing damping in vibrating items.
3.5.3 Interference with measurements

(a) Some important sources of floor vibrations in a laboratory environment are mechanical pumps and compressors; heating, ventilating, and air conditioning systems (especially their fans); workshop equipment; human activities (e.g. footfalls); and traffic on nearby roads and railways.
(b) In dealing with vibration problems in this category, the most important measure that one can take is to find a suitable site for the sensitive apparatus. Vibration isolators should not be considered a substitute for this.
(c) Upper-level floors in buildings are often vibration prone, and so ground-level sites (away from troublesome areas, such as rooms containing large machinery, workshops, loading bays, storerooms, etc.) are usually best.
(d) Isolation of floor vibrations can usually be accomplished very effectively using pneumatic vibration isolators.
(e) The resonant frequency of the mass–spring system comprising the apparatus and its vibration isolators should be as low as possible (preferably no more than about 3 Hz, in both the vertical and horizontal directions). If the resonant frequency is much higher than this, the isolators will amplify floor vibrations.
(f) Rubber or composite pads usually make unsatisfactory vibration isolators, because the resulting high resonant frequencies correspond to the frequencies at which floor vibrations are greatest.
(g) Vibrations can also enter apparatus through vacuum pumping lines, electrical cables, and cooling water and gas lines. These can be much more serious transmission routes than the floor.
(h) Vibrations traveling along such paths can be isolated and damped with the aid of bellows and cross-bellows isolators, inertia blocks, sand boxes, and pulsation dampers.
(i) While most large machines (pumps, building ventilation fans, etc.) are designed to produce little vibration, they may become troublesome if they become unbalanced or otherwise defective. Hence, significant improvements may be achieved just by having such items serviced.
(j) When large machines must be installed in a laboratory building, high-speed devices are preferable to low-speed ones, since the high-frequency vibrations from the former are attenuated with relative ease.
(k) In the case of optical instrumentation that has been set up on an optical table, unless the table lacks rigidity or is lightly damped, it is the optical mounts (not the table) that determine the vibration sensitivity of the apparatus.
(l) Sound waves are generally the dominant source of vibrations in apparatus at frequencies above 50 Hz. When these are a problem, substantially different techniques from those used to control low-frequency vibrations are needed.
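Points (e) and (f) of Section 3.5.3 can be made quantitative with the standard single-degree-of-freedom transmissibility formula: isolation only begins above √2 times the isolator's resonant frequency, and vibrations near resonance are amplified. The damping ratio in the sketch below is an assumed value, chosen to be small as is typical of pneumatic isolators.

```python
import math

# Transmissibility of a single-degree-of-freedom isolator (standard textbook
# formula). zeta is the damping ratio; r = f / f_n is the frequency ratio.

def transmissibility(f_hz, f_n_hz, zeta=0.05):
    r = f_hz / f_n_hz
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

# A 3 Hz isolator attenuates a 10 Hz floor vibration strongly ...
print(transmissibility(10.0, 3.0))   # well below 1: strong attenuation
# ... whereas a stiff rubber pad with f_n ~ 15 Hz amplifies the same vibration.
print(transmissibility(10.0, 15.0))  # above 1: worse than no isolator
```

This is why rubber pads, whose mass–spring resonant frequencies typically fall in the band where floor vibrations are strongest, can make matters worse rather than better.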
3.6 Electricity supply problems

(a) Power line disturbances, such as surges, blackouts, brownouts, etc., pose a significant risk of malfunctioning or damage to equipment, corruption of computer files, noise in experimental measurements, etc.
(b) Large transient overvoltages (often known as "surges") can degrade equipment over time (ultimately resulting in failure), or may cause failure outright.
(c) Very brief blackouts (called "drops") may not be noticeable (in the form of flickering lights, etc.), but frequently cause lock-ups of computers and other digital equipment.
(d) When power is restored after a blackout, there can be damaging surges, dips, brownouts, and multiple on–off cycles before the supply stabilizes.
(e) Brownouts that involve very large reductions in voltage can damage some equipment, such as induction motors.
(f) Radio-frequency disturbances on the power line (due to noisy devices such as arc welders and variable-speed motor drives) can cause electromagnetic interference in sensitive apparatus.
(g) Power-line disturbances are often intermittent. Hence, the best way of dealing with them is to take precautions to ensure that, if they do occur, they will be removed.
(h) All electronic systems should be protected by surge suppressors and a.c. line filters.
(i) Uninterruptible power supplies (UPSs) are capable of providing high-quality power to equipment even in the event of blackouts, brownouts, and other power line disturbances.
(j) UPSs that produce a square-wave output can harm many types of equipment over long periods, and can even cause rapid damage in certain cases. It is generally desirable to use UPSs with output waveforms that most closely resemble a sine wave.
(k) Battery failure is the most common type of reliability problem with UPSs. When making a purchase, it is very important to select one that allows the user to replace the battery.
(l) Passive-standby UPSs (especially line-boost types) are adequate for many applications, and do not require such frequent battery replacement as other kinds.
(m) If the inverter in a UPS is not adequately filtered, it can be a source of radio-frequency power line noise.
3.7 Damage and deterioration caused by transport

(a) Gross damage or subtle degradation of laboratory instruments and other equipment during transport is not unusual.
(b) Although packages can suffer mechanical damage as a result of shocks or vibrations while moving (e.g. in a truck), the greatest hazards usually arise from rough treatment during handling operations.
(c) In the contiguous USA, extreme temperatures ranging from −51 °C to 60 °C may sometimes be encountered during transport (depending on the place and time of year).
(d) It is possible for relative humidity levels to reach 100% during transport. Condensation is a strong possibility.
(e) Items that are sent as passenger-plane freight receive considerably more rough handling than those sent in containers on freighter planes.
(f) Owing to the harmful effects of airborne salt spray on many types of equipment, air transport is much preferable to sea transport when long distances are involved.
(g) Labels such as "fragile" and "handle with care" have only a minor effect on the way that packages are treated during transport. Hence, one must rely solely on the design of the package and the choice of the mode of transport to ensure the safety of a packaged item.
(h) One very important aspect of packaging for transport is to ensure that there can be no relative motion between different sections of the packaged item.
(i) For items costing more than a few thousand dollars, transport containers should be rigid wood or metal, not cardboard.
(j) The likely height of any drops during handling can be reduced by making a package larger, and (in the case of manual handling) by making it heavier and providing it with handholds.
(k) It is generally important to place sensitive items within a completely sealed moisture barrier, and to include a desiccating material inside the barrier.
(l) It is often worthwhile to hire specialist packaging companies to package very expensive and/or delicate equipment.
(m) Similarly, the transport of such equipment may be done most safely by specialist "high-value equipment" movers.
(n) Especially for delicate and costly items, it is very desirable to obtain separate insurance cover from a third-party insurer (i.e. distinct from the basic insurance that may be provided by the freight carrier).
3.8 Some contaminants in the laboratory

3.8.1 Corrosive atmospheres in chemical laboratories

The corrosive atmospheres that are generally present in chemical laboratories, chemical rooms used for sample preparation, etc., can be very harmful to equipment.
3.8.2 Compressed air

(a) The compressed air that is often distributed through a laboratory for the purpose of operating pneumatic devices, and cleaning parts and machines in workshops, generally contains water and oil.
(b) If it is necessary to use compressed gas to clean sensitive items and equipment (such as optics), filtered dry nitrogen from a gas cylinder should be employed.
3.8.3 Silicones

(a) Silicones, which are very commonplace substances, pose a serious surface contamination risk.
(b) They are found in some diffusion pump fluids, high-vacuum grease, heat sink compounds for electronic devices, some lubricating aerosol sprays, and in many other products.
(c) Silicones will wet almost all materials, can easily migrate over surfaces, may vaporize at one place and condense at another, and are extremely difficult to remove.
(d) Operations that involve adhesion (e.g. vacuum coating, painting, epoxying, and soldering) are at risk from silicone contamination.
(e) Silicones can degrade in the presence of sparks produced at electrical contacts (e.g. in switches and relays), and in electron beams in vacuum devices, to form harmful insulating deposits.
3.9 Galvanic and electrolytic corrosion

(a) When two dissimilar metals are placed in contact in the presence of impure water, a chemical wet cell is produced, which leads to a type of dissolution called "galvanic corrosion."
(b) A combination of metals that frequently has this problem is aluminum and copper.
(c) Galvanic corrosion can be a source of trouble in water-cooling systems and water-cooled equipment, and with electrical connectors and contacts in damp environments.
(d) A similar type of degradation, called "electrolytic corrosion," can take place if a direct current is passed between two (possibly identical) metals with impure water between them.
3.10 Enhanced forms of materials degradation related to corrosion

(a) Crevice corrosion is a kind of enhanced corrosion that can be troublesome in water-cooling systems.
(b) Stress corrosion cracking and corrosion fatigue are phenomena in which the formation and propagation of cracks in a material is promoted by the presence of a corrosive agent. They can be a problem for stainless-steel parts (e.g. bellows) in vacuum systems.
3.11 Fatigue of materials

(a) Fatigue failure occurs when materials are exposed to cyclic or fluctuating tensile stresses, which result in the formation and propagation of cracks. It is a very common cause of failure in many different types of apparatus and systems – both mechanical and non-mechanical.
(b) The fatigue process depends on the amplitudes of the stress (or strain) variations, and the total number of stress or strain cycles.
(c) The word "fatigue" normally refers to degradation that follows a very large number of stress cycles (more than roughly 10^4) – often resulting from vibrations. This is called "high-cycle fatigue." Low-ductility materials tend to survive best under such conditions.
(d) Less frequently, "fatigue" is used to describe degradation resulting from a small number of stress cycles (less than about 10^4), in which high strains (often resulting from large temperature changes, and causing plastic deformations) are the important factor. Highly ductile metals are the most satisfactory ones under these "low-cycle fatigue" conditions.
(e) Fatigue failure can be prevented or delayed by avoiding situations in which stress concentration sites (such as notches, grooves, or discontinuities) are present in the material under stress, and by taking steps to minimize stress or strain levels, vibrations, and temperature excursions.
3.12 Damage caused by ultrasound

(a) Small parts (such as delicate electronic components, small bellows, and filaments) are at risk of damage from ultrasonic cleaning, due to fatigue resulting from resonant vibrations.
(b) Damage to objects with soft polished surfaces can arise due to cavitation erosion.
(c) Material samples that are the objects of research can also be damaged by these phenomena.
References

1. P. D. T. O'Connor, Practical Reliability Engineering, 4th edn, John Wiley & Sons, 2002.
2. A. M. Cruise, J. A. Bowles, T. J. Patrick, and C. V. Goodall, Principles of Space Instrument Design, Cambridge University Press, 1998.
3. J. Moore, C. Davis, M. Coplan, and S. Greer, Building Scientific Apparatus, 3rd edn, Westview Press, 2002.
4. H. W. Ott, Noise Reduction Techniques in Electronic Systems, 2nd edn, John Wiley & Sons, Inc., 1988.
5. P. Horowitz and W. Hill, The Art of Electronics, 2nd edn, Cambridge University Press, 1989.
6. P. C. D. Hobbs, Building Electro-optical Systems: Making It All Work, John Wiley and Sons, 2000.
7. R. V. Jones, Instruments and Experiences: Papers on Measurement and Instrument Design, John Wiley & Sons Ltd, 1988.
8. D. J. Agans, Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems, AMACOM, 2002.
9. R. H. Alderson, Design of the Electron Microscope Laboratory, North-Holland, 1975.
10. R. Longbottom, Computer System Reliability, John Wiley & Sons Ltd, 1980.
11. D. A. Muller, E. J. Kirkland, M. G. Thomas, J. L. Grazul, L. Fitting, and M. Weyland, Ultramic. 106, 1033 (2006).
12. E. Bright Wilson, Jr., An Introduction to Scientific Research, Dover, 1990.
13. R. B. Thompson and B. F. Thompson, Repairing and Upgrading Your PC, O'Reilly, 2006.
14. K. J. Chase, PC Disaster and Recovery, SYBEX, 2003.
15. J. B. Slater, J. M. Tedesco, R. C. Fairchild, and I. R. Lewis, in Handbook of Raman Spectroscopy: From the Research Laboratory to the Process Line, I. R. Lewis and H. G. M. Edwards (eds.), CRC Press, 2001.
16. CRC Handbook of Chemistry and Physics, 59th edn, Robert C. Weast (ed.), CRC Press, Inc., 1978.
17. H. C. Shields and C. J. Weschler, HPAC Engineering, May 1998, p. 46.
18. P. C. D. Hobbs, Building Electro-optical Systems: Making It All Work, John Wiley and Sons, 2000; Chapter 20: Thermal Control (see the author's web site at http://users.bestweb.net/~hobbs/).
19. J. Pecht and M. Pecht, Long-Term Non-Operating Reliability of Electronic Products, CRC Press, 1995.
20. M. H. Ellis, The Care of Prints and Drawings, Altamira Press, 1995.
21. E. Kuffel, W. S. Zaengl, and J. Kuffel, High Voltage Engineering: Fundamentals, 2nd edn, Newnes, 2000.
22. ECSS Secretariat, ESA-ESTEC Requirements & Standards Division, Space Product Assurance: The Manual Soldering of High-Reliability Electrical Connections (ECSS-Q70-08A), ESA Publications Division, 1999.
23. A. Cottrell, An Introduction to Metallurgy, 2nd edn, Edward Arnold, 1975.
24. R. Tricker and S. Tricker, Environmental Requirements for Electromechanical and Electrical Equipment, Newnes, 1999.
25. James Doty, Photonics Spectra 34, 113 (2000).
26. Shock and Vibration Handbook, 4th edn, Cyril M. Harris (ed.), McGraw-Hill, 1996.
27. A. H. Slocum, Precision Machine Design, Prentice Hall, Inc., 1992.
28. G. Osterstrom, in Methods of Experimental Physics – Volume 14: Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, 1979.
29. E. I. Rivin, Passive Vibration Isolation, ASME Press, 2003.
30. From Technical Manufacturing Corporation (TMC). www.techmfg.com/techbkgd/techbkgd_1.html
31. R. W. Assmann, W. Coosemans, S. Redaelli, and W. Schnell, Status of the CLIC Studies on Water Induced Quadrupole Vibrations, Proceedings NANOBEAM 2002: 26th Advanced ICFA Beam Dynamics Workshop on Nanometre Size Colliding Beams, CERN, 2002, pp. 117–123.
32. B. A. Hands, in Cryogenic Engineering, B. A. Hands (ed.), Academic Press, 1986.
33. Measuring Vibration, booklet printed by: Brüel & Kjaer, DK-2850 Naerum, Denmark. http://www.bksv.com
34. P. G. Nelson, Understanding and Measuring Noise Sources in Vibration Isolation Systems, available from Technical Manufacturing Corporation (TMC). www.techmfg.com
35. D. D. Watch, Building Type Basics for Research Laboratories, John Wiley and Sons, 2001.
36. Newport 1983–84 Catalog, Newport Corporation, 1791 Deere Ave., Irvine, CA 92606, USA. www.newport.com
37. I. Filinski and R. A. Gordon, Rev. Sci. Instrum. 65, 575 (1994).
38. A. Preumont, Vibration Control of Active Structures: An Introduction, 2nd edn, Springer, 2002.
39. J. H. Ferris et al., Rev. Sci. Instrum. 69, 2691 (1998).
40. R. Movshovich, in Experimental Techniques in Condensed Matter Physics at Low Temperatures, R. C. Richardson and E. N. Smith (eds.), Addison-Wesley, 1988.
41. G. R. Pickett, Rep. Prog. Phys. 51, 1295 (1988).
42. W. P. Kirk and M. Twerdochlib, Rev. Sci. Instrum. 49, 765 (1978).
43. S. T. Smith and D. G. Chetwynd, Foundations of Ultraprecision Mechanism Design, Gordon and Breach, 1992.
44. G. Osterstrom, in Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, Inc., 1979.
45. R. Cerruti, Microsc. Microanal. 13, 1488 (2007).
46. D. Jones, in Handbook for Sound Engineers, 3rd edn, G. M. Ballou (ed.), Butterworth-Heinemann, 2002.
47. D. Shoenberg, Magnetic Oscillations in Metals, Cambridge University Press, 1984.
48. W. H. Steel, Interferometry, 2nd edn, Cambridge University Press, 1983.
49. K. J. McGowan, Power 130, No. 7, July 1986, pp. 55–60.
50. R. B. Standler, Protection of Electronic Circuits from Overvoltages, John Wiley & Sons, Inc., 1989.
51. R. B. Thompson and B. F. Thompson, PC Hardware in a Nutshell, 3rd edn, O'Reilly, 2003.
52. Electrical Noise and Transients, Fluke Application Note, Fluke Corporation, PO Box 9090, Everett, WA, USA. http://www.fluke.com
53. P. Bugge, ASHI Reporter 18, No. 9, 2001.
54. H. C. Cooper and R. Mundsinger, Power Protection: Reduce Electronic Downtime, Power Quality '89, Official Proceedings of the First International Conference, Intertec Commun., 1989, pp. 251–69.
55. M. Mardiguian, EMI Troubleshooting Techniques, McGraw-Hill Professional, 1999.
56. G. M. Jones, in Pumping Station Design, 2nd edn, R. L. Sanks (ed.), Butterworth-Heinemann, 2001.
57. F. E. Ostrem and W. D. Godshall, An Assessment of the Common Carrier Shipping Environment, General Technical Report FPL 22, Forest Products Laboratory. www.treesearch.fs.fed.us/pubs/5841
58. A. H. McKinlay, Transport Packaging, Institute of Packaging Professionals, 1998.
59. Transport Packaging Committee, Guide to Packaging for Small Parcel Shipments, Institute of Packaging Professionals. www.iopp.org
60. M. A. Levin and T. T. Kalal, Improving Product Reliability: Strategies and Implementation, John Wiley & Sons, 2003.
61. M. T. Hatae, Packaging Design, in Shock and Vibration Handbook, 4th edn, Cyril M. Harris (ed.), McGraw-Hill, 1996.
62. G. S. Mustin, Theory and Practice of Cushion Design, The Shock and Vibration Information Center, United States Department of Defense, 1968.
63. J. M. Kolyer and D. E. Watson, ESD From A to Z, 2nd edn, Kluwer Academic Publishers, 1996.
64. Department of Defense Handbook: Packaging Cushion Design, MIL-HDBK-304C, 1 June 1997.
65. United Parcel Service of America (UPS). www.ups.com/content/us/en/resources/prepare/supplies/your_packaging.html
66. T. R. Dulski, A Manual for the Chemical Analysis of Metals, ASTM International, 1996.
67. B. Kanegsberg and E. Kanegsberg, Contamination Control In and Out of the Cleanroom: Silicone Contamination (Parts 1–3). Available from BFK Solutions LLC. www.bfksolutions.com
68. Y. Shapira and D. Lichtman, in Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, Inc., 1979.
69. P. T. Vianco, Soldering Handbook, 3rd edn, American Welding Society, 1999.
70. R. Strauss, SMT Soldering Handbook, Newnes, 1998.
71. L. H. Tanner, J. Phys. D: Appl. Phys. 12, 1473 (1979).
72. Electrical Contacts: Principles and Applications, Paul G. Slade (ed.), Marcel Dekker, 1999.
73. N. S. Harris, Modern Vacuum Practice, McGraw-Hill, 1989.
74. J. D. Summers-Smith, Mechanical Seal Practice for Improved Performance, Mechanical Engineering Publications Limited, 1992.
75. K. Luey and D. J. Coleman, The Removal of Silicone Contaminants from Spacecraft Hardware, Aerospace Report No. TR-2002(8565)-6, The Aerospace Corporation, 2002. Available from http://www.stormingmedia.us/
76. Handbook for Critical Cleaning: Aqueous, Solvent, Advanced Processes, Surface Preparation, and Contamination Control, B. Kanegsberg and E. Kanegsberg (eds.), CRC Press, 2000.
77. D. A. Wigley, Materials for Low-Temperature Use, Design Council & Oxford University Press, 1978.
78. A. J. Croft, Cryogenic Laboratory Equipment, Plenum, 1970.
79. D. A. Wigley and P. Halford, in Cryogenic Fundamentals, G. G. Haselden (ed.), Academic Press, 1971.
80. P. E. Siska, Rev. Sci. Instrum. 68, 1902 (1997).
4 Obtaining items from commercial sources
4.1 Introduction

Nearly all of the equipment, devices, substances, and software employed in scientific research are of commercial origin. It is clear from everyday experience that the reliability and usability of such products can range from superb to abysmal. (Usability has an impact on overall reliability through its effect on human error.) Low-quality items, coupled with poor documentation (e.g. operating instructions) and a lack of after-sales technical support from the companies that made them, can have a very harmful effect on laboratory productivity. The purpose of this chapter is to provide ways of avoiding such problems.
4.2 Using established technology and designs

As was pointed out in a general context on page 4, if reliability is important, one should not normally employ devices or software that use a technology which has not previously been in commercial production. Most innovative things have unforeseen problems (sometimes in very significant numbers), which can be found and dealt with only after such things have been tested by many users over an extended period. This is a particularly serious issue with computer software (see page 504). This improvement in reliability over time can also be seen in individual types of product, even if a relatively standard technology is being used. Numerical examples of this effect in the case of large computer systems have been given in Ref. [1]. There it is shown that the failure rate of two different populations of newly introduced computers decreased on average by a factor of 4 in the best case (i.e. that of the most reliable population), and by 6.8 in the worst case, over a five-year period (see Fig. 4.1). These drops took place as the designs were gradually improved in the light of experience. A widely employed rule-of-thumb for avoiding this immature-technology reliability problem is to wait at least until the first major upgrade of a product has appeared before buying. In fact, including usability issues, it generally takes a company at least several (perhaps five or six) attempts to perfect a product [2]. (That is, only after at least several generations of a given product have been produced may most of its shortcomings be corrected.)
[Fig. 4.1: Fault rate (faults per 1000 hours) of two different computer processor populations versus age of product type (in six-month periods), following the earliest delivery of a new kind of computer system (data from Ref. [1]). This shows improvements in the reliability of successive delivered units as the design of the product is enhanced in the light of experience.]
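As a rough sketch of the arithmetic behind such curves (the assumption of a uniform, constant-rate decline is ours, for illustration only, and is not taken from Ref. [1]): if the fault rate falls by an overall factor F between the first and tenth six-month periods, the implied constant per-period improvement factor is F^(1/9).

```python
# Sketch: implied average per-period improvement in fault rate, assuming a
# constant multiplicative decrease across ten six-month periods (five years).
# The overall factors (4 and 6.8) come from the text; the uniform-decline
# assumption is an illustration, not a claim from the cited data.

def per_period_factor(overall_factor, periods=10):
    """Constant factor by which the fault rate shrinks from one period to
    the next, given the overall decrease across `periods` periods."""
    return overall_factor ** (1.0 / (periods - 1))

for overall in (4.0, 6.8):
    f = per_period_factor(overall)
    print(f"overall factor {overall}: fault rate falls ~{(1 - 1 / f) * 100:.0f}% per period")
```

Even under this crude model, a factor-of-4 overall improvement corresponds to only about a 14% reduction per six-month period, which is consistent with the gradual slopes seen in such reliability-growth data.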
4.3 The importance of standards

The use of equipment and software that employ incompatible standards can be a major source of problems. Such difficulties arise when different items are required to interact with each other. Causes include, for example, software that is not compatible with a particular type of hardware, electrical connectors that cannot be attached to certain instruments, and incorrect voltage levels or data types. These problems can normally be avoided by having a good grounding in the technologies that are being used, so that one is aware of where conflicts can arise. Furthermore, when potentially problematic items are being ordered and there is any doubt, one should not select items from a catalog on the basis of assumptions and wishful thinking, but contact the manufacturer or supplier and get a written guarantee of compatibility. The standardization of equipment and software – using the same items throughout a laboratory, and preferably ones that are widely employed in the world at large – can help to reduce this problem.
4.4 Understanding the basics of a technology

It is a good idea to try to understand the fundamental ideas underlying the operation of major instruments or software used in research. These things should generally not be
treated merely as black boxes. One should know about their characteristics, including their strengths and weaknesses. This understanding should be gained at some level, even if the most detailed design information is not known. For example, in the case of an electronic device, although it would generally be unnecessary and too time-consuming to try and understand it at the level of the schematic diagram, knowledge of its major functions at some block diagram level would normally be appropriate. This sort of knowledge permits sensible decisions to be made about items to be purchased and their specifications, makes it possible to interact in an intelligent way with the suppliers, enables the items to be used in the best possible fashion, and makes it much easier to determine what has gone wrong should a failure take place.
4.5 Price and quality

The relationship between the cost of an item and its quality (and reliability) is sometimes not completely straightforward. A higher-priced item may not necessarily be more reliable. For example, the extra money spent on a more costly product may go into secondary features (i.e. "accessories"). These may never be used in practice, but increase the overall complexity of the product, and thereby decrease its reliability. A striking counterexample to the principle "you get what you pay for" is free software, and in particular open-source software. This can be of very high quality. For instance, the Linux operating system is open-source software that is widely regarded as being very stable and secure (see pages 488 and 506). Nevertheless, one should think carefully before buying unusually low-cost items – particularly in the case of hardware. Rock-bottom price is often incompatible with reliability. This is especially true for products made in a highly competitive commercial environment, where profit margins are very low (e.g. personal computers). (See also the discussion on counterfeit products on page 120.) Perhaps the best way of putting the matter is that one should never buy on the basis of price alone [3]. The use of high-quality parts, equipment, and software is especially important in laboratory work, which is a labor-intensive, rather than a capital-intensive, activity [4]. The extra cost of obtaining high-quality items is generally outweighed by the resulting savings in time that would otherwise be taken up in troubleshooting and repair, or possibly spent on non-research activities while someone else carries out these tasks. The importance of paying for quality is especially great in the case of trouble-prone things such as power supplies, PCs, electrical connectors and cables, valves, and mechanical devices and machinery in general.
One area where there is usually a fairly direct relationship between price and reliability is when derating is being applied to improve the reliability of an item under stress (power, electric current, pressure, etc.). In order to provide an increased reserve of capacity, so as to improve the derating factor, the item in question usually has to be physically larger, or provided with extra cooling equipment, etc. All other things being equal, one usually has to pay more for this.
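The arithmetic of derating is simple. As a minimal sketch (the 5 A load and the 50% derating factor below are illustrative values, not recommendations from the text):

```python
def required_rating(operating_stress, derating_factor):
    """Minimum rated capacity needed so that the actual operating stress is
    only `derating_factor` (a fraction in (0, 1]) of the item's rating."""
    if not 0.0 < derating_factor <= 1.0:
        raise ValueError("derating factor must lie in (0, 1]")
    return operating_stress / derating_factor

# E.g. a supply that must deliver 5 A, derated to 50% of its rating,
# should be rated for at least 10 A:
print(required_rating(5.0, 0.5), "A")
```

Since the required rating scales as the reciprocal of the derating factor, aggressive derating quickly demands physically larger (and costlier) components, which is the price–reliability link described above.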
In the case of custom or semi-custom equipment and software, an unusually low quotation may sometimes be the result of the supplier not completely understanding the user’s requirements. Trying to hold the supplier to their price under such conditions can result in a huge waste of time and effort.
4.6 Choice of manufacturers and equipment

4.6.1 Reliability assessments based on experiences of product users

The best way of reducing the failure risk of a product when choosing a manufacturer is to examine their background in making such things [5]. This is done by obtaining an "experience list" from the manufacturer, in which the contact details of those who have purchased the product are given. The provision of such lists is something that most reputable manufacturers should be willing to do without hesitation. Customers on the list should be contacted, preferably by telephone, to find out what they have to say about the item. For example, questions can be raised concerning its ease of use, reliability and maintainability, and the readiness of the manufacturer to provide technical support. Sometimes, experience lists are provided by companies as a matter of course along with regular product information. However, these do not normally include the contact details of individual users. In some cases, in which large expenditures are involved, it can be very worthwhile to visit a customer's laboratory, even if this happens to be on another continent. During such a visit, one may examine the product in a working environment, watch it being operated, and find out directly about the experiences of its various users. Searches of the Internet for information about the reliability of products can often be useful. Information about mass-market items, such as computers and their peripherals, is often available on Internet product review sites, such as Consumer Reports [6], Epinions [7], and ConsumerSearch [8]. Another possible route to obtaining the needed information is to ask other manufacturers of similar items, who may be willing to point out weaknesses in their competitors' products.
4.6.2 Place of origin of a product

A point that should be taken into consideration when a product is being chosen is its place of design and manufacture. For example, Switzerland is renowned for the superb quality of miniature mechanical devices, such as small electric motors and electrical connectors, which can often be thought of as the progeny of the watch-making industry. Manufacturers in North America are known for the simplicity, reliability and low-maintenance requirements of their machinery [9]. Perhaps the place with the most outstanding overall reputation for reliability is Japan, which is famous for the quality of its consumer electronics, optical equipment, cars and related machinery, and other things [3]. Other countries are known for excellence in their own areas.
4.6.3 Specialist vs. generalist manufacturers

If one has the choice, it is generally better to buy equipment that has been made by a manufacturer (or one of its subdivisions) that specializes in that particular type of equipment, rather than from a firm that makes many different types of things. Thus, the question that one should ask is "What is the manufacturer's 'core competency'?".
4.6.4 Limitations of ISO9001 and related standards

In advertisements, one sometimes finds companies stating that they are certified as having "ISO9001" registration. In essence, this registration is bestowed on companies that can show that their methods for designing, developing, and manufacturing products are documented according to the ISO9001 specification and that the procedures contained in this documentation are followed. Sometimes this registration is presented as if it were an indicator of the quality and reliability of a company's products. However, it has been pointed out that documentation alone cannot guarantee quality, and that it is perfectly possible for a firm that has ISO9001 registration to produce "well-documented rubbish" [3]. Although some firms have apparently generated improvements following ISO9001 certification, such accreditation should not generally be taken as an indication of quality and reliability. This issue is discussed in detail in Ref. [3].
4.6.5 Counterfeit parts

The sale of bogus parts and materials is a general problem in industry, although it is particularly prominent in the area of electronics (see Ref. [10]). Major characteristics of such items include purchase from an obscure supplier, peculiar business transaction conditions, an unusually low sales price, and poor or zero reliability and performance. Nevertheless, counterfeit parts may look identical or nearly identical to the real things. The best way of avoiding such problems is to buy parts and materials at a reasonable price from a reputable supplier. When dealing with as-delivered goods, a useful sign of counterfeiting is low-quality package marking, such as printing defects (e.g. smudged ink) and spelling mistakes. In the case of electronics, one aspect of this issue is discussed in Ref. [11]. This concerns the practice of certain counterfeiters to "re-label," and sell as good, integrated circuits from batches that have been found to be unsatisfactory by a reputable manufacturer. Many discussions of similar problems can be found on the Internet.
4.6.6 True meaning of specifications

It is important to be careful when interpreting specifications from the manufacturer, which (as one might expect) are often obtained under conditions that present the product in the best possible light. For example, the tests that generated the data represented in the specifications may have been carried out under highly controlled and optimal conditions,
rather than typical (sub-optimal) laboratory ones. It is essential to see if they are relevant to one's own situation. The observations on page 59 concerning derating are significant here. For example, the maximum current level of a power amplifier may refer to peak, rather than RMS, values. Information provided in product data sheets is often incorrect. In particular, one should never trust any specification in a manufacturer's data sheet that has been marked "Advance Information" or "Preliminary" [12]. Somewhat more reliable information may be obtained by talking to the manufacturer. To be really sure, however, one should test the product oneself (see also Section 4.6.8). The fitness of a complex product (such as an instrument) for a particular use may not be evident from the data sheet alone. It is often a good idea to obtain a copy of the instruction manual and read it prior to purchase. Since the quality of the documentation provided with products is generally very important, yet often poor, having a look at manuals prior to making purchases can be useful from this standpoint also. (See also Section 4.8.)
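For a sinusoidal signal, the peak value exceeds the RMS value by a factor of √2, so a "maximum current" quoted as a peak figure overstates the continuous (RMS) capability by about 41%. A minimal sketch of the conversion (the 10 A data-sheet figure is hypothetical):

```python
import math

def rms_from_peak(peak):
    """RMS value of a pure sine wave with the given peak amplitude."""
    return peak / math.sqrt(2.0)

# A data sheet quoting "10 A maximum" as a *peak* figure really offers
# only about 7.07 A of continuous (RMS) current:
print(f"{rms_from_peak(10.0):.2f} A RMS")
```

This kind of quick check is worth making whenever a specification does not state explicitly whether peak or RMS quantities are meant.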
4.6.7 Visiting the manufacturer's facility

In some cases, and especially when the purchase of costly items is being considered, it may be worthwhile to visit the proposed supplier's factory in order to see how things are done. Possible actions there include the following.
(a) Watching for signs of native intelligence, diligence, high morale, and pride in workmanship amongst the personnel.
(b) If one has first-hand knowledge of how things should be done, asking questions that can reveal levels of competency. One sometimes finds that the "professionals" who are directly involved in making a product are not aware of the correct methods of construction. NB: It is possible to know how to make something properly without actually being able to do it.
(c) Having a look at products under construction, and making a point of studying regions that are not usually seen by the customer, but which may be important nonetheless. Products that look good on the outside may sometimes show evidence of poor design or sloppy workmanship in normally unseen places.
4.6.8 Testing items prior to purchase

Manufacturers or their suppliers are often willing to lend out apparatus for the purpose of testing, particularly in the case of small items. Although only reliability problems of a short-term nature will be revealed by such testing, this may be enough to expose problems of the kind discussed in Section 4.6.6. In the case of large items of equipment, it may be possible to carry out tests on a demonstration unit at the manufacturer's facility.
4.7 Preparing specifications, testing, and transport and delivery

4.7.1 Preparing specifications for custom-made apparatus

A particular matter of great importance is the adequate preparation of written specifications for items that are being made on a custom or semi-custom basis. This should be done in close cooperation with the manufacturer. The specifications should include everything that is expected of the apparatus, so that there can be no misunderstanding about what is required and what should be delivered. To this end, they should also be clear and concise. Sometimes, the manufacturer will agree to provide particular capabilities on a "best effort" basis only, but this too should be included in writing as part of the specifications. It is desirable that experienced researchers who will actually use the equipment participate in the process of selecting it, and (for custom-made apparatus) its design or customization. Such people will be able to bring first-hand knowledge of reliability problems to this process, which may not be possessed by those who are not routinely involved in the day-to-day activities of the laboratory. Furthermore, if the former are engaged in the activity of selection and design, they will have an incentive to take the steps that are needed to make the equipment operate reliably. Although changes in thinking on the part of the purchaser of a complicated piece of apparatus are sometimes unavoidable, alterations of the specifications after construction has started should be shunned as far as possible. Such changes often lead to compromises in the design, or even partial dismantling and reconstruction of the apparatus – both actions that can have an adverse effect on reliability. The specifications for major equipment should include a description of the types of documentation that should be supplied with it (see below).
It is, surprisingly, not uncommon for manufacturers to ship equipment (even complex and trouble-prone types) to customers without testing it at the factory first (see, e.g. footnote 1 on page 123). When writing specifications, indicate that the equipment must be fully tested at the factory, and that the test results be provided.
4.7.2 Documentation requirements

The documentation that accompanies major equipment should include operating instructions, system diagrams, a list of necessary spare components (i.e. "operational spares"), and troubleshooting, maintenance, and repair procedures. The operating instructions should contain warnings about common mistakes. (Troubleshooting procedures and warnings about common mistakes can be extremely useful, but unfortunately are often not included in documentation.)
4.7.3 Reliability incentive contracts

One potentially beneficial arrangement for improving reliability involves the use of "incentive payments," which are given to the supplier on the basis of how long the apparatus
[Fig. 4.2: Example of a reliability incentive payment structure: incentive payment (× $1000) versus mean time between failure (× 1000 hours) (see Ref. [3]).]
operates without failure following delivery. Contracts containing this type of provision are often used in the case of spacecraft, where the payments are made based on reliable operation for periods of, for example, up to two years [3]. Such an arrangement, which involves rewards for success rather than penalties for failure, has the advantage of being relatively easy to negotiate. This is because it stands a good chance of being accepted by the supplier in the form in which it is offered. It is also less likely to lead to arguments or litigation than contracts involving penalties for poor reliability. A possible payment structure is shown in Fig. 4.2. Clearly, this type of contract is really suitable only for items that are manufactured on a small scale – not mass-produced. For example, cryogenic systems and ultrahigh vacuum equipment might be good candidates for this type of arrangement. More information on this, and other types of contracts for the achievement of reliability, can be found in Ref. [3].
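A payment schedule of the general shape shown in Fig. 4.2 can be sketched as follows. The threshold, rate, and cap are illustrative values invented for the sketch, not contractual figures from the reference:

```python
def incentive_payment(mtbf_hours, threshold=2000.0, rate_per_1000_h=1250.0, cap=12000.0):
    """Illustrative incentive schedule: no payment below a threshold MTBF,
    then a payment (in dollars) proportional to the demonstrated MTBF, up
    to a cap. All parameter values are hypothetical."""
    if mtbf_hours <= threshold:
        return 0.0
    return min(rate_per_1000_h * (mtbf_hours - threshold) / 1000.0, cap)

print(incentive_payment(6000.0))  # demonstrated MTBF of 6000 h
print(incentive_payment(1500.0))  # below threshold: no payment
```

Writing the schedule down this explicitly, with an agreed threshold and cap, is part of what makes such a rewards-for-success contract easy to negotiate.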
4.7.4 Actions to take before delivery

In the case of large and costly items, it is often worthwhile for researchers from the laboratory to attend the initial testing session at the factory (see footnote 1 below) and inspect the equipment before it is shipped. (Final testing should be done at the laboratory during the acceptance trials – see below.) In the case of experimental apparatus, it may even be desirable and feasible to conduct some sort of actual experiment, which can serve to expose weaknesses that might go unnoticed during ordinary testing. Such an experiment would also allow research workers to find out from the experts how the equipment is actually used in practice (since the instruction manual may not give all the necessary details – especially if the system is custom made). This experience can be very valuable.
Footnote 1: The author is familiar with a large and very expensive piece of research equipment that was not even tested at the factory, despite incorporating some new and previously untested technology. The equipment was paid for without being subjected to an acceptance trial, and subsequently turned out to be defective. At the time of writing, the flaws have not been corrected, and the device is operating at only a fraction of its rated capability.
Even after the apparatus has successfully completed its tests at the factory, there may be events that lead to reliability problems prior to installation and successful startup at the user’s laboratory. Several possible scenarios have been discussed in the context of large computer systems [1]. In such cases, it has been found that parts may be misplaced or substituted with the wrong ones when the system is disassembled and packaged for shipment. For example, a large fraction of the cables are frequently misplaced. Furthermore, if after testing the system is left on the factory floor, there is a tendency for parts that are known to be working to be “borrowed.” Incompatible parts may be introduced if different parts of the system are provided by different sources, to meet up at the destination. One way of minimizing these types of problems is to ensure that all items comprising a complete system are tested together at the factory, under nearly actual conditions. After the apparatus has been disassembled and packaged, it is then taken out of the packages and tested again [1]. The transportation of apparatus often causes damage. For example, in the case of large computer systems, at least one unit in almost every system delivered suffers severe damage as a result of transport [1]. Experience has shown that scientific equipment manufacturers do not necessarily know how to package and transport their products properly. The prevention of damage during transport is discussed in Section 3.7.
4.7.5 Acceptance trials for major equipment

If a large and expensive apparatus has been delivered and accepted by the buyer, and then fails within its warranty period, in principle it will be repaired without undue delay by the manufacturer. However, if payment has already been made, there may be no feeling of urgency on the part of the latter to do this. There may be various problems with the product as delivered, such as significant design deficiencies, manufacturing and assembly defects, damaged equipment, incorrect or missing options, and wrong or insufficient documentation (see Section 4.7.2). Hence, it is useful to have a formal stage in the process involving the evaluation and testing of the product at the user's facility. Ideally, this should continue until the user is satisfied that the product performs reliably, meets all specifications, and that support services are sufficient. Only when this stage, called the "acceptance trial" or "acceptance test," is complete does the manufacturer get paid. In industry, this procedure is often used for major items such as large computer systems. A useful discussion of acceptance trials in this case (but having general applications) is provided in Ref. [1].
4.8 Use of manuals and technical support

One should always read and understand instruction manuals before using new equipment and software. This is something that is very frequently omitted, and significant problems often arise as a result. Manuals should generally be read thoroughly, from cover to cover, and not
merely skimmed over (see footnote 2 below). (See also the remarks on page 26.) Manufacturers frequently provide auxiliary information concerning the use of their products (sometimes called "tutorials," "technical notes," or "application notes"). This information is often placed on their websites, and can be very useful. Keep in mind that instruction manuals occasionally contain very misleading misprints and other errors. For instance, the author has seen a manual for an electronic instrument in which a number that was printed in the specifications was missing a decimal point. This resulted in a maximum allowable input voltage being specified as 200 V, rather than 20.0 V. If a voltage anywhere near the former value had been applied (a plausible action in this case), it would certainly have resulted in catastrophic damage to the instrument. If in doubt, always contact the manufacturer! A tendency often exists, especially among physicists, to believe that one should be able to sort out most problems arising in commercial equipment by oneself, without any assistance from outside. While such an attitude is reasonable in the case of very simple faults, this tendency often extends to subtle problems that clearly fall in the domain of the manufacturer of the equipment. This form of pride is probably a very frequent cause of lost time and even damage to apparatus. After perusing the manual and following the advice contained therein, one should not hesitate to call the manufacturer. The technical support offered by the manufacturer is often free of charge when it is in the form of instructions or other information. Obtaining it is usually just a matter of making a phone call or sending an email to an engineer or a support center. It should be kept in mind that reliable technical information generally comes from engineers or other trained technical personnel, and not from sales staff.
Summary of some important points

(a) Generally avoid using completely new types of product. Wait at least until the second generation of a given product has become available before buying it.
(b) Beware of potential compatibility problems between products that must interact with each other.
(c) In general, do not treat commercial devices as “black boxes” – gain an understanding, at least at a conceptual level, of how the apparatus works.
(d) Never buy a product on the basis of price alone.
(e) Laboratory work is labor intensive, not capital intensive, and so it is normally worthwhile paying for high-quality equipment, parts, and materials.
(f) An excellent way of reducing the failure risk of a product that is being considered for purchase is to obtain an “experience list” (showing previous buyers of the product) from the manufacturer, and to make telephone contact with at least some of these.
² Some manuals (particularly for certain types of software) are massive tomes, which are presumably not intended to be read from beginning to end. However, in such cases compact introductory manuals may be available.
(g) In the case of items that are to be made on a custom or semi-custom basis, it is essential to pay attention to the preparation of the written specifications – in which all the important properties of the item must be included.
(h) In the case of large and expensive items, consider visiting the factory for the testing, and inspect the apparatus before it is transported.
(i) For large and expensive items, it may be worthwhile having formal “acceptance trials,” in which the apparatus is evaluated and tested at the user’s laboratory before the manufacturer is paid.
(j) Ensure that the documentation which accompanies major newly delivered apparatus is adequate. Such documentation includes operating instructions, system diagrams, a list of necessary spare components, and troubleshooting, maintenance, and repair procedures.
(k) In general, instruction manuals should be read thoroughly, from cover to cover (not merely skimmed over), before new equipment or software is used.
5
General points regarding the design and construction of apparatus

5.1 Introduction

The design and construction of scientific apparatus is often accompanied by serious pitfalls. These can be a cause of much wasted time, and a source of frustration to those who are not familiar with the problems. This chapter examines a few of the most common difficulties, and some useful general strategies for easing the development of reliable apparatus. Detailed design and construction considerations are discussed elsewhere in the book, and in the references at the end of the chapter.
5.2 Commercial vs. self-made items

One should generally not make equipment and parts that can be obtained commercially. An important reason for this is that it very often takes much longer to design and construct such items in a research laboratory than is originally envisaged (see Section 5.3). Most of the things that are done in a laboratory are not new. Hence, much time and effort can be saved by making full use of the multitude of products that are available on the market, and easily located on the Web. Many commercial devices of a given type may be made by a company, and several generations may be designed and manufactured over the years. For this reason, the people who design these devices (often professional engineers) frequently have considerable experience in dealing with their potential problems. Furthermore, since large development costs can be spread out over many units, the reliability, ease-of-use, and performance of these can be much higher, given the cost, than those of equivalent devices designed and built in-house. Also, apparatus that is built in a research laboratory is, in practice, frequently not completely documented, or perhaps not even documented at all. That is, there may be no operating instructions, schematics or other system diagrams, or troubleshooting and maintenance procedures. This often leads to problems, especially over the long term. (Of course, such apparatus should be provided with documentation – especially schematics and the like – and this should always be kept up to date. See also the comments on pages 18–19.) Things are often constructed in-house in order to save money. However, one should keep in mind that research is labor-intensive, not capital-intensive. In general, a better
way of overcoming price obstacles is to obtain used equipment and have it refurbished, if necessary. If equipment is needed only occasionally (e.g. electronic instruments), rental is another possibility. Shared equipment purchases can also help to reduce cost burdens. In general, the only really good reason for designing and building equipment in-house is to achieve capabilities that cannot be provided by the commercial suppliers. However, even if no commercial device is available that performs precisely the desired task, it is often possible to slightly modify a commercial item to obtain what is needed. Sometimes, the best approach is to just make some compromises, and accept what is available. If it is necessary to make an apparatus in the laboratory, one should design it so that standard commercial parts can be used as much as possible. If equipment with the desired capabilities cannot be obtained commercially as an off-the-shelf item, a possible alternative to making it entirely within one’s own laboratory is to outsource the task (or at least part of it) to a specialist company. This approach is often used when one needs particular versions of items that, by their nature, can be readily configured to custom requirements – such as vacuum chambers. However, there is often scope for more of the effort, including at least part of the design work, to be outsourced. An example of where such an approach would be suitable is the design and construction of unique high-power electronic equipment, such as power amplifiers and power supplies, which can be difficult to make reliable for those without experience (see Section 11.7).
5.3 Time issues

As indicated above, it is completely normal to underestimate (and often severely so) the amount of time needed to design and construct apparatus. Large apparatus development projects in a research laboratory can easily take on a life of their own, at the expense of the scientific activities that they were intended to support. It is not uncommon for apparatus development to take so much time that the original scientific arguments for doing it cease to be valid. (Generally, a design and construction period of more than about two to three years is probably too long, unless the apparatus is of a fairly standard type [1]. Even six months may be excessive in some rapidly advancing scientific fields.¹) Furthermore, many development projects never actually result in a working piece of apparatus, or one that is practical to use. Misjudging the amount of time, money, manpower, skill, and motivation (of all parties involved) required to design, construct, and debug a major piece of apparatus in-house can pose a serious threat to a research program. If one can order custom or semi-custom apparatus from commercial sources, then this has the considerable advantage of probably leading to the production of working hardware in a predictable time. This is often not the case when one is dealing with university-based workshops.
¹ These numbers should not be interpreted too rigidly. Also, very large-scale projects, such as the construction of astronomical telescopes and particle accelerators, have to be considered separately.
5.4 Making incremental advances in design

If one wishes to improve the performance of a type of apparatus without sacrificing reliability, it is desirable to make small incremental changes in design (rather than large leaps). This is discussed on page 4. It is generally not a good idea to build apparatus that requires two or more new difficult things to work at once – an approach that has been likened to fighting a two-front war [2].
5.5 Making apparatus fail-safe

It is very important to design things in such a way that failures do not lead to major damage to the apparatus and/or injury to people. For example:

(a) cryostats should have pressure relief devices attached to closed spaces that may unexpectedly permit the entry of cryogenic liquids;
(b) motor-driven mechanisms should have limit switches or torque limiters (see page 224);
(c) high-power apparatus should incorporate thermal cutouts to prevent overheating;
(d) power supplies should have current-limiting devices such as fuses or circuit breakers.

One should be particularly careful to ensure that events such as turn-on transients, software failures, or abuses of the inputs and outputs of electronic devices cannot lead to hardware damage or injury [2]. Some types of protective device must be replaced after each overstress event. These include, for example, electric fuses, shear pins (used to relieve excessive mechanical stresses in mechanisms), and rupture discs (for relieving liquid or gas overpressures). If overstresses are a frequent occurrence, such devices have the disadvantage that one must maintain a supply of spares. This is to be contrasted with the situation in the case of protective devices that are either self-resetting after the removal of the overstress, or can be reset manually, with little difficulty, without replacing parts. The latter category includes circuit breakers, resettable fuses, torque limiters, and pressure relief valves. Nevertheless, replaceable safety devices may sometimes have greater intrinsic reliability and/or better performance than their self-resetting counterparts, which makes them preferable in certain situations, depending on the application. For example, ordinary non-resettable fuses are generally not particularly dependable: they sometimes blow at currents considerably above or below their rated values (the latter due to a kind of fatigue failure) [3].
On the other hand, rupture discs are considered to be highly reliable (see page 254). Equipment must often be provided with interlock devices, in order to prevent unwanted conditions from arising following certain events. For example, the loss of cooling-water flow in a water-cooled device can be made to activate a flow switch, which causes the device’s power to be removed – thereby preventing overheating. Another
example is the safety interlock on a high-voltage power supply. The latter can be designed so that if its cover is taken off, a switch automatically disconnects the electricity supply. In some situations, interlock arrangements can be relatively complicated. They may be required to perform a number of different actions, depending on what combination of conditions is detected. These interlock devices may comprise numerous sensors (e.g. flow switches or thermometers) and actuators (e.g. relays or solenoid valves), in addition to a control system. In such cases, one might be tempted to use a computer (i.e. a PC), if a hardwired electronic control system appeared to be too troublesome to design and build, or if some flexibility in the operation of the interlocking device was desired. However, a better approach would be to use a programmable logic controller (or PLC). Such devices are not only more suitable than PCs for such tasks, but also offer greater reliability (see page 491).
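Interlock logic of this kind is easy to express as ordinary combinational code, which is essentially what a PLC executes on each scan cycle. The sketch below is illustrative only: the sensor names, the 60 °C trip point, and the two actuators are invented for the example, not taken from any particular system.

```python
# Illustrative sketch of multi-condition interlock logic of the kind a
# PLC might run on each scan cycle. Sensors, thresholds, and actuators
# are hypothetical examples.

def interlock_outputs(flow_ok: bool, temperature_c: float, cover_closed: bool):
    """Decide actuator states from the current sensor conditions.

    Returns a dict of actuator commands: whether the heater power relay
    may stay energized, and whether the high-voltage supply may remain
    enabled.
    """
    overheated = temperature_c > 60.0            # assumed trip point
    power_permitted = flow_ok and not overheated  # cooling lost or hot -> power off
    hv_permitted = cover_closed                   # open cover disables the HV supply
    return {"power_relay": power_permitted, "hv_enable": hv_permitted}

# Normal operation: water flowing, cool, cover on.
print(interlock_outputs(True, 25.0, True))
# Loss of cooling water: the power relay drops out, HV is unaffected.
print(interlock_outputs(False, 25.0, True))
```

On a real PLC this logic would be written in ladder diagram or structured text, but the decision table it implements is the same.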
5.6 The use of modularity in apparatus design

The advantages of modularity have been discussed on page 4. In the present context, another important point on this topic is that one should use off-the-shelf commercial modules as much as possible to perform difficult functions within an apparatus. (Here we are referring to a self-contained apparatus, which would normally be housed within its own enclosure.) In this way, one offloads the difficult design and construction problem of making a device that performs the function onto an experienced manufacturer. Ideally, such modules are robust, compact, easy to use, interact in a simple way with external modules and components, and exhibit stable and predictable behavior. The requirement of simple interaction depends to a large extent on the isolation that one can achieve between modules – that is, changing the insides of a module should have little effect on the way it interacts with other modules, as long as the interfaces are the same. This condition is most difficult to attain with optics, easier with electronics, and most readily done with software [2]. Nevertheless, even commercial modules with less than ideal characteristics can still be very useful in avoiding difficult engineering problems. In general, modules are often made for the use of companies known as “OEMs” (original equipment manufacturers). Such firms buy these devices, possibly add others, place them in an enclosure, interlink them, install auxiliary components such as switches and connectors, and sell the result as a unit. Good examples are the modular power supplies and power amplifiers employed in electronics. Even if a commercial module has much better performance than one requires, or can carry out more functions than are necessary, its use is often very worthwhile. There is a very large range of modules available for performing many tasks in areas of interest to scientists – including, for example, electronics, mechanical devices, and optics.
In the case of electronics, power modules are one example, but there are many others, including things such as quartz crystal oscillators, filters, radiofrequency amplifiers, mixers and frequency multipliers, and even lock-in amplifiers. A good example of a modular approach in the construction of electronics is the NIM system, which is used mostly in nuclear and particle physics for data acquisition. A commercially available mounting frame, called a NIM bin, contains a power supply and 12 slots into which standard electronic modules can be plugged [4]. Interconnections between the modules are made with coaxial patch cords. A given NIM configuration need not be permanent; modules can be removed from the bin, replaced with others, and reconfigured at will. This gives the NIM system great flexibility. Numerous types of NIM module are commercially available, including preamplifiers, signal generators, signal averagers, counters/timers, coincidence units, gates, DACs and ADCs,³ and analog arithmetic units (adders, multipliers, etc.). The NIM system is a relatively old one. Two more modern and comparatively sophisticated modular electronics systems, which include a digital bus to allow computer control of the modules, are CAMAC and VME.

In the case of mechanical devices, the range of different types of module is not as large. Some commercially available mechanical modules are: bearing systems (e.g. rotary and linear antifriction bearings), gearing arrangements (e.g. speed reducers and right-angle drives), clutches, brakes, and lead-screw assemblies (e.g. micrometer heads for high-precision movement).

In the case of optics, if one wants to employ diode lasers (and since providing electrical power to such devices without damaging them can be difficult), complete diode laser modules with integral driver electronics are useful and available. Other modular optical devices include objective lenses, laser beam shapers, spatial filters, camera lenses, and even complete miniature spectrometers.
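The isolation property described earlier in this section – that a module's internals can change freely so long as its interface is fixed – is easiest to see in software, where it can be demonstrated directly. In this hypothetical sketch, two "modules" with completely different internals present the same interface, so the surrounding code is unaffected by which one is plugged in:

```python
# Sketch of interface isolation between software "modules": two filters
# with different internals but an identical process() interface, so the
# caller does not depend on their implementations. Names are illustrative.

class RCFilter:
    """First-order low-pass filter (exponential smoothing)."""
    def __init__(self, alpha: float):
        self.alpha = alpha
        self.state = 0.0
    def process(self, x: float) -> float:
        self.state += self.alpha * (x - self.state)
        return self.state

class MovingAverageFilter:
    """Different internals, identical process() interface."""
    def __init__(self, n: int):
        self.n = n
        self.window = []
    def process(self, x: float) -> float:
        self.window.append(x)
        self.window = self.window[-self.n:]
        return sum(self.window) / len(self.window)

def smooth(signal, filt):
    # The caller relies only on the process() interface, not the internals,
    # so either filter module can be swapped in without changing this code.
    return [filt.process(x) for x in signal]

print(smooth([1.0, 1.0, 1.0], MovingAverageFilter(2)))  # → [1.0, 1.0, 1.0]
print(smooth([1.0, 1.0, 1.0], RCFilter(0.5)))
```

The same swap made in electronics or optics would require the physical interfaces (connectors, signal levels, beam geometry) to match equally exactly, which is why the condition is harder to attain there.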
5.7 Virtual instruments

When there is a need for some type of electronic measurement and/or control system, but no requirement that such a system be self-contained, the method of creating a “virtual instrument” may be beneficial. With this approach, signals are measured by separate and self-contained electronic devices, such as voltmeters and lock-in amplifiers, and processed by a computer with the aid of commercial data-acquisition software. The resulting information may then be used to produce control signals via other devices, such as voltage sources and signal generators, which can be used to vary parameters in an experiment (e.g. temperature). Data are passed back and forth between the computer and the various devices over data cables, of which the most common is the IEEE-488 type. In this way, by using general-purpose commercial instruments, a computer, and suitable software, the task of building special-purpose electronic hardware is avoided. This method can be very useful, especially when the virtual instrument is to be used in an experiment for which a computer is needed in any case. It is also invaluable when the person who must construct the instrument has no experience in designing and building electronics. The approach has an important advantage over dedicated hardwired instruments, in that errors in design would usually correspond to software errors, which are much easier to correct than those in hardware. The builder of the virtual instrument can take advantage of the large number of mathematical subroutines (for carrying out filtering, spectral analysis, PID
³ That is, digital-to-analog converters (DACs) and analog-to-digital converters (ADCs).
control, etc.) that are normally provided with data-acquisition software. This strategy is very common in research. However, the virtual-instrument approach can be very expensive and cumbersome in some cases. Sometimes a large number of costly real instruments (such as voltmeters) are needed in order to do a job that could be handled with much less expense using a single box of hardwired analog electronics. In certain situations, the performance of a virtual-instrument setup (e.g. its high-frequency capabilities) may not match what can be achieved using a hardwired apparatus. Furthermore, the physical complexity of a virtual-instrument system can be relatively high compared with that of an equivalent dedicated hardwired instrument. This can have consequences for the reliability of the system. Hence, in situations where the construction of a dedicated hardwired instrument is feasible, this is sometimes a better solution.
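As a deliberately simplified illustration of the virtual-instrument idea, the sketch below closes a temperature-control loop in software. In real use the "voltmeter" and "voltage source" would be addressed over IEEE-488, for example with a library such as PyVISA; here both are replaced by a simulated cryostat so that the sketch is self-contained, and all names and numbers are invented for illustration.

```python
# Minimal sketch of a "virtual instrument": a computer reads a measured
# temperature (standing in for a voltmeter reading) and drives a heater
# (standing in for a programmable voltage source) to hold a setpoint.
# The physical apparatus is replaced by a toy simulation.

class SimulatedCryostat:
    """Toy plant: temperature relaxes toward the applied heater level."""
    def __init__(self):
        self.temperature = 20.0
    def step(self, heater_power: float):
        self.temperature += 0.5 * (heater_power - self.temperature)

def regulate(plant, setpoint: float, gain: float = 1.0, steps: int = 50):
    """Proportional control loop, standing in for the data-acquisition
    software plus the voltmeter/voltage-source combination."""
    for _ in range(steps):
        error = setpoint - plant.temperature       # "voltmeter" reading
        heater = plant.temperature + gain * error  # "voltage source" output
        plant.step(heater)
    return plant.temperature

cryostat = SimulatedCryostat()
print(round(regulate(cryostat, 77.0), 2))  # → 77.0
```

With real hardware, only the two marked lines would change: the error would come from an instrument query, and the heater command would be written to the voltage source; the control logic, filtering, and data logging would all remain in software, which is precisely the attraction of the approach.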
5.8 Planning ahead

Finding the right balance between planning and improvisation in the construction of apparatus is sometimes difficult. Devices that are hastily put together can have numerous reliability problems, if they work at all. On the other hand, if they do work, they allow the experimenter to proceed quickly with their primary scientific activities. In contrast, devices that are built only after careful thinking and planning are likely to take much longer to put into operation. However, when completed, such devices stand a greater chance of working, of doing so reliably, and of being straightforward to use. (It seems that, in a lot of experimental work, the main problem is that too little thinking and planning is done, rather than too much.) Improvisation is probably most justified when it is used to confirm that a design concept is valid, with the intention that it will be followed up by something more permanent immediately afterwards. Unfortunately, improvised arrangements are often left in place, and remain a burden to users of the equipment. This issue is discussed in Ref. [1]. Two important activities that are often neglected during the planning phase are: looking at the published literature on similar devices (see page 6), and discussing the proposal with people who have had experience with such apparatus. One should plan things carefully enough that it will not be necessary to make changes to the design of the apparatus after construction has started, or to partly dismantle and reconstruct anything, both of which can adversely affect reliability. If someone else, such as a machinist or an instrument maker, is going to build the apparatus, it can be very useful to discuss the plans with them before these are finalized. Such people may suggest alternative ways of making the item, or simplifications of its design, which can help to improve reliability and ease maintenance.
The provision of clear and complete design information, in written form, to the builders of the apparatus is essential. Key points should be reviewed verbally to ensure that there are no misunderstandings. In the case of mechanical components, remember to include tolerances with the dimensions. Having good drawings, either of mechanical structures, or schematic diagrams in the case of electronics, is also important. Computer aided design
(CAD) systems are commonly available and (at least in some cases) easy to use. These programs commonly have 3-D visualization capabilities, which can make the interpretation of mechanical-structure drawings very straightforward. The creation of mechanical drawings and schematic diagrams is discussed in Refs. [4] and [5], respectively.
5.9 Running the apparatus on paper before beginning construction

Much time, effort, and money can be saved if a newly designed and potentially expensive piece of apparatus is subjected to a quantitative analysis to determine its probable behavior before construction is started. Of course, this is just standard engineering practice. Nevertheless, it is often avoided by experimentalists, who frequently prefer an intuitive, hands-on approach to the construction of apparatus. For instance, vacuum systems are often provided with much larger pumps than are needed for the task, leading to considerable unnecessary expense. Even worse, it is also sometimes found that the pumping capacity of a new vacuum system is inadequate. It may be that the desired level of vacuum can be achieved with an empty vacuum chamber, but not when sources of outgassing such as mechanisms, evaporation sources, bellows, etc., are placed inside it. These sorts of unpleasant surprises can be avoided by making use of vacuum theory (usually described in books on vacuum technique [6]) to predict the performance of a proposed system. Engineering analyses should begin with order-of-magnitude estimates of the relevant quantities (actually an important task even in the beginning stages of design), and be followed up by a more detailed analysis, if necessary. Often, an order-of-magnitude estimate is sufficient. (Experimenters have more latitude in this regard than engineers engaged in designing commercial products, who generally have to do accurate calculations in order to minimize production costs [1].) Indeed, one sometimes finds that the data needed for a more detailed calculation are simply not available. Occasionally, even an order-of-magnitude estimate is impractical, either because the necessary data are lacking, or because of the complexity of, or lack of knowledge about, the system under consideration. However, this is probably fairly uncommon.
Frequently, empirical or semi-empirical “rules of thumb” exist and can be useful in such cases. In those instances where the behavior seems too difficult to model, it is worth considering whether the design can be changed so that it can be modeled. If it is not possible to obtain an estimate of the size of some quantity, it may be sufficient to place a lower or upper bound on it, so that one can ensure that it is “big enough” or “small enough.” In some situations it will be found that a helpful mathematical analysis is virtually impossible or too time-consuming. In these cases, a trial-and-error approach or pilot studies may be the only way forward. One is not necessarily obliged to create mathematical models of apparatus from the ground up. Most of the systems used in the laboratory involve phenomena that are very well understood, and which are well covered – either in books or in engineering or
instrumentation journals. For detailed analyses of complicated behavior it is possible to turn to one of the many computer-simulation programs that are commercially available. These cover the operation of just about every imaginable type of apparatus, including, for example:

(1) electronic devices,
(2) photon and electron optical instruments,
(3) vacuum systems,
(4) mechanical structures and systems,
(5) electromagnets, and
(6) RF cavities and waveguides.

Self-contained computer applications are available to solve the equations of, for instance:

(1) heat transport,
(2) the propagation of electromagnetic waves,
(3) sound and structural vibrations,
(4) continuum mechanics (i.e. deformation of materials under mechanical loads), and
(5) fluid flow.
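The kind of order-of-magnitude vacuum estimate discussed above takes only a few lines to evaluate. The sketch below uses two standard relations from vacuum technique: the rough-pumping time t = (V/S) ln(p₀/p), and the outgassing-limited ultimate pressure p = Q/S. All of the chamber and pump numbers are invented for illustration:

```python
import math

# Order-of-magnitude vacuum-system estimate using two standard relations:
#   pump-down time          t     = (V / S) * ln(p0 / p)
#   outgassing-limited      p_ult = Q_total / S
# The pump-down formula is only a rough guide (it ignores outgassing and
# regime changes), which is adequate for an order-of-magnitude check.
# All numbers below are invented for illustration.

V = 50.0      # chamber volume, litres
S = 10.0      # effective pumping speed, litres/second
p0 = 1000.0   # starting pressure, mbar (roughly atmospheric)
p = 1e-3      # target rough-vacuum pressure, mbar

t = (V / S) * math.log(p0 / p)
print(f"pump-down time to {p} mbar: about {t:.0f} s")

q = 1e-8      # assumed specific outgassing rate, mbar*l/(s*cm^2)
A = 5000.0    # internal surface area, cm^2
Q = q * A     # total gas load, mbar*l/s
p_ult = Q / S
print(f"outgassing-limited ultimate pressure: about {p_ult:.0e} mbar")
```

A check of this kind immediately shows whether a proposed pump is grossly over- or under-sized for the chamber, before any money is spent.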
5.10 Testing and reliability

The best way of creating reliable apparatus is not to assume that faults can be detected and corrected after the item has been designed and made. Rather, the apparatus should be designed and built to be dependable right from the beginning, without relying on testing to expose defects. Potential and actual reliability problems should be anticipated, detected, and corrected continuously as one moves through the design and construction process.
5.11 Designing apparatus for diagnosis and maintainability

An important, but perhaps not often consciously considered, part of design work should aim at enhancing the ability to diagnose and repair failures. Points to consider include the following [7].

(1) Since those who might have to repair the apparatus may have little training or motivation, ensure that minimum maintenance skills are needed.
(2) Cluster subsystems and parts to facilitate their location and identification.
(3) Design the apparatus for easy visual inspection.
(4) Ensure that parts, and especially potentially unreliable ones, are adequately spaced, easily accessible, and can be replaced without too much difficulty. For example, avoid layering printed circuit boards in electronic devices, so that some have to be removed in order to reach others.
(5) Provide troubleshooting test points. For instance, in electronic devices, wire loops can be provided at critical parts of the circuit for the attachment of test-lead clips, or rear-panel connectors can be installed that are joined to these points.
(6) Provide labels for the parts, components, and structures in the apparatus, in locations where they can be easily seen during maintenance.
(7) Install hand holds or handles on heavy parts to facilitate handling.
(8) Design so that maintenance requires the minimum number of tools and instruments, and only commonly found ones.
(9) Design the apparatus to require minimum adjustments – the need to adjust for drift, shift, and degradation should be avoided, if possible. Furthermore, the requirement for scheduled maintenance should be reduced as much as is practical [8].
5.12 Design for graceful failure

It is often desirable to arrange things so that when apparatus fails (if it must fail), it does so gradually, so that repairs can be postponed to a more convenient moment. This is especially true in situations that involve long turnaround times – as in the case of many types of low-temperature experiment. Redundancy can be helpful here. For example, in certain types of cryogenic work, several experiments may be done simultaneously in one cryostat, so that if one of them fails, the others can carry on.
5.13 Component quality

Laboratory work is a labor-intensive, rather than a capital-intensive, activity [4]. Hence, the use of top-quality components is especially important when apparatus is being constructed. The extra cost of employing such components is largely outweighed by the resulting savings in time that would otherwise be taken up in troubleshooting and repair if the least expensive parts were used. Some good examples of items for which this can be significant are electrical connectors, switches, and valves.
5.14 Ergonomics and aesthetics

Because of the prevalence of human error in research (see Section 1.3), it is important to design the apparatus so that the interaction between it and the operator is straightforward and convenient. The number of controls seen by the operator should be minimized. Controls that must be worked simultaneously should be grouped together [10]. If there are controls that should not ordinarily be manipulated, it may be useful to make this difficult to do. This can
be done by, for instance, placing such items behind a lockable door on an instrument rack, or by requiring the use of tools in order to operate them. Other considerations include: (a) the logical layout of controls, readouts, and connectors; (b) the clear and accurate labeling of these; and (c) placing important and frequently used items of this kind where they are easily seen and manipulated. A discussion of graphical-user-interface design in programs written for data acquisition⁴ is provided in Ref. [9], and includes issues of this kind. Although it is generally not accorded great importance, the aesthetic appearance and finish of apparatus can have an effect on its reliability. This is because an attractive and nicely finished device will encourage the user to handle it carefully, since it is likely to be given respect, and because even a small amount of damage will show very clearly. Objects that are not well finished tend to wear out much more quickly than would otherwise be the case [10]. A corollary of this observation is that one should not allow the appearance of an instrument to degrade any more than necessary.
Further reading

A very good discussion on the design of apparatus can be found in Ref. [1]. This contains numerous useful points regarding general aspects of apparatus design, although some of the detailed considerations (concerning specific areas, such as electronics) are out of date. Reference [2] contains helpful information on design that is relevant to the development of apparatus in general (although the focus of the book as a whole is on electro-optical systems). In particular, it provides strategies for improving the likelihood that apparatus development projects will be successful. Much useful information concerning the details of designing and building a variety of different types of scientific apparatus can be found in Ref. [4]. The classic work on the design and construction of electronics is Ref. [5].
Summary of some important points

(1) Do not make equipment or parts that can be obtained commercially.
(2) It is completely normal to underestimate (and often severely so) the amount of time needed to design, construct, and debug things in the laboratory.
(3) If cost is an issue, consider getting used items.
(4) Think about whether it would be possible to modify commercial equipment or parts that are not quite appropriate, so that they do what one requires.
(5) Consider the possibility of outsourcing potentially difficult design and construction tasks to commercial firms.
(6) The creation and maintenance of documentation for apparatus is very important. This especially includes system diagrams, such as schematics for electronics.
That is, the design of front panels on virtual instruments.
(7) Apparatus should be designed to be fail-safe – it must be possible for the apparatus to fail without suffering major damage or causing personal injury.
(8) Avoid difficult design and construction problems that could lead to poor reliability by using commercial modules as much as possible to perform difficult tasks within an apparatus.
(9) Spend time carefully determining requirements and designing the apparatus before beginning construction.
(10) Perform engineering calculations, as necessary and if possible, in order to ensure that the proposed apparatus will behave as expected.
(11) Reliability must be designed and built into apparatus from the beginning – one should not expect that things can be made reliable by testing for, and correcting, problems at a later stage.
(12) Take future maintenance needs into consideration during design and construction – but arrange things so that the requirement for scheduled maintenance is reduced as much as possible.
(13) Since laboratory work is labor-intensive, rather than capital-intensive, use top-quality components in order to reduce reliability problems.
References

1. E. Bright Wilson, Jr., An Introduction to Scientific Research, Dover, 1990. Except for some minor modifications, this is a republication of a work that was originally published by McGraw-Hill in 1952.
2. P. C. D. Hobbs, Building Electro-optical Systems: Making it all Work, John Wiley and Sons, 2000. A second edition of this book has been published (John Wiley and Sons, 2009).
3. R. A. Pease, Troubleshooting Analog Circuits, Elsevier, 1991.
4. J. H. Moore, C. C. Davis, M. A. Coplan, and S. C. Greer, Building Scientific Apparatus, 3rd edn, Westview Press, 2002. A fourth edition of this book has been published (Cambridge University Press, 2009).
5. P. Horowitz and W. Hill, The Art of Electronics, 2nd edn, Cambridge University Press, 1989.
6. G. F. Weston, Ultrahigh Vacuum Practice, Butterworths, 1985.
7. J. Pecht and M. Pecht, Long-Term Non-Operating Reliability of Electronic Products, CRC Press, 1995.
8. P. D. T. O'Connor, Practical Reliability Engineering, 4th edn, John Wiley & Sons, Ltd, 2002.
9. P. A. Blume, The LabVIEW Style Book, Prentice Hall, 2007.
10. W. Trylinski, Fine Mechanisms and Precision Instruments: Principles of Design, Pergamon Press; Warszawa: Wydawnictwa Naukowo-Techniczne, 1971.
6 Vacuum-system leaks and related problems
6.1 Introduction

Leaks in vacuum systems are an age-old and common problem in scientific research. Although they are nuisances in many situations, leaks are generally most troublesome when they appear in ultrahigh vacuum (UHV) systems and cryogenic apparatus. In both cases, even minuscule leaks, which are difficult to detect and can be intermittent, may be sufficient to disable the equipment. The thermal stresses on both types of equipment can cause cracks to form in the vacuum envelopes and joints that can lead to leaks. Turn-around periods for these systems can be very long. In the case of UHV devices, bakeout of the vacuum chambers is usually required, which may take a day or more. In order to use cryogenic apparatus, one must usually cool down and warm up the equipment very slowly. Time scales for this range from about an hour to a day or more, depending on the type of equipment. Hence, the leak testing of these systems can be very time-consuming and nerve-wracking. In the case of cryogenic systems in particular, the relative fragility of the equipment, and the presence of large thermal and mechanical stresses during the routine operation and handling of these devices, promotes the formation of leaks. In these systems, leaks may occur only when the apparatus is cold, and therefore often inaccessible to direct investigation. Also, in liquid helium at sufficiently low temperatures, it is possible for a very severe type of leak (called a “superleak” – see page 139) to form that may be undetectable at room temperature. Cryogenic devices are often covered with delicate electrical wiring, which can make leaks difficult to access for repair. All these problems combine to make the appearance of leaks in cryogenic equipment a dreaded occurrence for low-temperature experimentalists. When small leaks are present in vacuum equipment, it is possible for contaminants (such as moisture in the form of condensation) to plug these, usually only temporarily [1].
Just breathing on a molecular flow leak (roughly < 10−7 Pa·m3·s−1) can deposit enough moisture to cause it to close, at least for a while [2]. Particulate matter in the air can also temporarily partly or fully block these small leaks [2]. Since this sealing action may be intermittent (depending on humidity levels in the room, for example), such leaks can be very difficult to locate. In general, leaks grow larger with time [2]. A rarely encountered exception to this is the case of extremely fine molecular leaks (between 10−12 and 10−11 Pa·m3·s−1) under ambient conditions, which will probably plug up permanently when exposed to air for a few
hours [1]. Leaks1 in air smaller than about 10−12 Pa·m3·s−1 are not observed, apparently because they become plugged immediately in this environment [1]. If vacuum apparatus is being made in a laboratory workshop by technical staff, it should be kept in mind that they may not necessarily know about certain important points of design and construction (such as the correct selection of materials for use in vacuum, reliable joining techniques, etc.). It is essential for the research workers who will depend on this equipment to ensure that all the relevant personnel are aware of these details (which may be specific to the area of work, such as UHV or cryogenics), and willing to take appropriate measures.
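Leak rates here are quoted in the SI unit Pa·m3·s−1, while leak-detector manuals and older texts often use mbar·L/s or atm·cc/s. The conversions follow directly from the energy equivalents of the pressure–volume products (1 Pa·m3 = 1 J, 1 mbar·L = 0.1 J, 1 atm·cc ≈ 0.1013 J). A minimal sketch in Python (the function names are ours, for illustration only):

```python
# Convert between common leak-rate units.
# 1 Pa·m^3 = 1 J; 1 mbar·L = 0.1 J; 1 atm·cc = 101325 Pa * 1e-6 m^3.

def pam3s_to_mbarls(q):
    """Convert a leak rate from Pa·m^3/s to mbar·L/s."""
    return q * 10.0

def pam3s_to_atmccs(q):
    """Convert a leak rate from Pa·m^3/s to atm·cc/s."""
    return q / 0.101325

# The ~1e-7 Pa·m^3/s molecular-flow threshold mentioned in the text:
q = 1e-7
print(pam3s_to_mbarls(q))   # 1e-6 mbar·L/s
print(pam3s_to_atmccs(q))   # ~9.9e-7 atm·cc/s
```

This makes it easy to compare the thresholds quoted in this section with the sensitivity figures printed on a leak detector's data sheet.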
6.2 Classifications of leak-related phenomena

The term “leak” (or formally: “real leak”) refers to a passageway connecting the outside to the inside of a vacuum chamber, which permits the passage of air or gas into the latter. Most of the discussions in this chapter concern such defects. Sometimes one can have a situation in which a reservoir of gas is accidentally connected to the vacuum via a small passageway. For example, this may occur if a screw is threaded into a blind hole in a vacuum chamber. When the chamber is evacuated, air slowly leaks out of the cavity at the end of the screw, along the threads and into the chamber. Such a fault is called a “virtual leak”. These are relatively rare compared with real leaks. In cases where the inside wall of a vacuum chamber is contaminated with a volatile substance, such as water or some organic fluid, the evaporation of this into the chamber is called “outgassing.” Even very small amounts of such substances can cause problems in vacuum operations. For example, in the case of UHV systems, the presence of a fingerprint on a surface exposed to the vacuum can lead to the presence of intolerable levels of gas. In cryogenic apparatus, outgassing is not normally a problem, because the vapor pressure of most substances becomes negligible at low temperatures. The evolution of adsorbed gases from surfaces within vacuum systems is another form of outgassing. In some cases, the diffusive movement of gases through solid materials can be a significant contribution to gas loads in vacuum systems. This is called “permeation leakage.” For example, in ultrahigh vacuum chambers with metal walls, the movement of hydrogen through the metal limits the pressures that can be achieved with these systems. In the case of glass dewars used in cryogenic work, the movement of helium through the walls of the dewar and into the vacuum space can be a serious problem – requiring the regular re-evacuation of these items.
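One practical way to tell a real leak from outgassing or a virtual leak is a rate-of-rise test: isolate the pumped chamber and watch the pressure climb. The gas load follows from Q = V·dP/dt; a real leak gives a roughly constant slope, whereas outgassing and virtual leaks give a slope that tapers off as the trapped gas or adsorbed layer is exhausted. The sketch below is illustrative only, with assumed numbers:

```python
# Rate-of-rise estimate: after valving off the pump, the gas load
# feeding a chamber of volume V (m^3) is Q = V * dP/dt.  Compare the
# slope over successive intervals: constant -> real leak; falling ->
# outgassing or a virtual leak.  All numbers below are assumptions.

V = 0.05  # chamber volume, m^3 (50 L) -- illustrative value

def gas_load(p_start, p_end, dt):
    """Average gas load (Pa·m^3/s) from a pressure rise over dt seconds."""
    return V * (p_end - p_start) / dt

# e.g. pressure climbs from 1e-4 Pa to 5e-4 Pa in 10 minutes:
Q = gas_load(1e-4, 5e-4, 600.0)
print(Q)  # ~3.3e-8 Pa·m^3/s for this interval
```

Repeating the measurement over a later interval and comparing the two values of Q is what distinguishes the fault types; the single number by itself does not.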
Finally, diffusion of helium through rubber O-rings may be a nuisance during leak-hunting activities using a mass spectrometer leak detector. In cryogenic experiments involving the use of liquid helium at sufficiently low temperatures, it is possible for a particularly severe type of leak to arise, which is known as a “superleak.” Such faults, otherwise known as “superfluid leaks” or “λ leaks,” occur when liquid helium enters the superfluid state below the λ point at 2.2 K. In this state, the viscosity

1 In particular, “real leaks,” as opposed to virtual or permeation leaks – see Section 6.2.
of the helium vanishes, and tiny leaks that would be manageable at higher temperatures, or in the absence of helium, become intolerably large. For example, a capillary crack of diameter 0.2 µm in a piece of tubing will pass superfluid helium at a rate of 10−7 Pa·m3 ·s−1 regardless of its length. If, however, one has a 1 cm long crack of this diameter, the temperature of the system is fixed at 2 K, and if there is helium gas at 3 kPa pressure on one side, and a vacuum on the other, the helium will flow at a rate of only 10−12 Pa·m3 ·s−1 . If the same crack is at room temperature, and the pressure differential between the helium gas and the vacuum is 100 kPa, the flow rate is also about 10−12 Pa·m3 ·s−1 [3].
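The scale of the room-temperature figures above can be checked with the standard long-tube molecular-flow conductance, C = (π/12)·v·d3/L, where v = sqrt(8RT/(πM)) is the mean molecular speed. This is a crude estimate (it ignores transitional-flow corrections at atmospheric upstream pressure), so treat it as an order-of-magnitude bound that lands within an order of magnitude or two of the quoted room-temperature figure; the point is the very strong d3 dependence, and the fact that superfluid flow is not limited by this conductance at all. The code is an illustrative sketch:

```python
import math

# Order-of-magnitude check on the room-temperature numbers above,
# using the long-tube molecular-flow conductance C = (pi/12)*v*d^3/L,
# with v = sqrt(8RT/(pi*M)) the mean molecular speed.  This ignores
# transitional-flow corrections, so it is only a rough bound.

R = 8.314          # gas constant, J/(mol K)
M_HE = 4.0e-3      # molar mass of helium, kg/mol

def mean_speed(T, M=M_HE):
    """Mean molecular speed (m/s) at temperature T (K)."""
    return math.sqrt(8.0 * R * T / (math.pi * M))

def capillary_leak(d, L, dP, T=293.0):
    """Molecular-flow leak rate (Pa·m^3/s) through a long capillary."""
    C = (math.pi / 12.0) * mean_speed(T) * d**3 / L   # conductance, m^3/s
    return C * dP

# 0.2 um diameter, 1 cm long crack, 100 kPa of helium across it:
print(capillary_leak(2e-7, 1e-2, 1e5))
# ~2.6e-11 Pa·m^3/s -- vanishingly small at room temperature, yet the
# same crack passes superfluid helium at ~1e-7 Pa·m^3/s.
```

Halving the crack diameter reduces the molecular-flow rate eightfold, which is why such defects so easily escape room-temperature testing.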
6.3 Common locations and circumstances of leaks

Leaks in vacuum systems are usually found at singular (i.e. atypical) places, such as: solder joints, welds, insulator-to-metal seals, connections, seams, braces, sudden bends in pipes, etc. In general, the most relevant locations are at permanent and demountable joints [4]. Only rarely will leaks spontaneously form in the solid walls of a stainless steel vacuum chamber (although this does happen occasionally) [1]. Hence, every effort should be made to ensure the integrity of the system at these special locations. For example, it has been found that joints created by welding are more reliable than those made using demountable connections [5]. Some parts of these special locations may be particularly prone to leakage. For example, although leaks are more likely to occur along the length of a weld joint than in the solid walls of a chamber, they are particularly likely to be found at the termination of a weld [6]. Another probable location is where two welds intersect – as, for example, when stainless steel tubing with a welded seam is welded to another item. Superleaks that are undetectable during room-temperature leak testing (with a mass spectrometer leak detector) are often caused by porous brass plates, overheated silver-solder joints, or oxygen-containing copper that becomes porous during heating [7]. The presence of leaks can depend on the condition of the system. For example, some leaks may appear only when a system is hot (as in the case of UHV apparatus during bakeout). Others may occur only at low temperatures (as in the case of cold leaks or superleaks in cryogenic systems). Sometimes particular mechanical stresses (possibly dependent on the orientation of the apparatus) may determine whether a leak is present at any given time.
If a vacuum system contains solder joints, and particularly if these are exposed to thermal stresses (as in a cryogenic system), or mechanical ones (perhaps due to vibration or handling), then these are particularly likely locations of leaks. In fact, when leak testing is done on cryogenic systems, any solder joints should normally be (after demountable seals) the first items checked. Leaks at joints are particularly likely if these are hard to see, or difficult or awkward to reach during installation (for example, in order to tighten a UHV flange). Hence, inaccessibility is a condition that can lead to vacuum leaks – and can also make them difficult to detect, locate, and repair. When vacuum systems are designed, accessibility should be kept
firmly in mind while deciding where to locate vulnerable items such as demountable seals, bellows, and feedthroughs. Thin-walled (e.g. 0.1 mm) components, such as tubes used in cryogenic work, and metal bellows, are especially susceptible to leaks due to corrosion. Brass, bronze, and even stainless steel are vulnerable to this degradation. The most common agents responsible for the corrosion are probably the fluxes used during soldering and brazing. However, other substances, such as cleaning chemicals, air-borne salt from the sea, and industrial pollution, may also contribute to such problems. Galvanic corrosion is yet another possibility. Some specific common causes of leaks include:

(a) problems with sealing surfaces, such as the presence of scratches or particulate matter,
(b) unevenly tightened and/or misaligned flanges,
(c) valves not closed completely,
(d) damaged O-rings, or old ones that have deteriorated and cracked or taken a “set,”
(e) use of incompatible seals in demountable connections,
(f) seals (e.g. O-rings) that have not been seated properly,
(g) in the case of indium seals in cryogenic apparatus, insufficient compression of the indium,
(h) poorly made, fatigued, or corroded weld, braze, or (especially) solder joints,
(i) undetected porosity in raw materials (e.g. brass),
(j) cracked (e.g. from fatigue), punctured, corroded, or porous bellows, and especially edge-welded bellows,
(k) damaged mechanical motion feedthroughs (especially types using sliding seals),
(l) damaged items involving glass-to-metal or ceramic-to-metal seals (e.g. vacuum windows, electrical feedthroughs, etc.),
(m) worn or particle-contaminated valves, and especially pressure relief valves, all-metal valves, and valves with sliding-seal motion feedthroughs, and
(n) small cracks in glass ionization gauge tubes.
Issues concerning demountable seals, mechanical motion feedthroughs, and valves are discussed in Sections 8.2.8, 8.2.9 and 8.2.10.
6.4 Importance of modular construction

In vacuum work generally, and especially in those areas where hard-to-find leaks can often appear (e.g. cryogenic and UHV devices), it is very desirable to use a modular system design. During assembly, such an approach makes it possible to easily clean, dry, and leak test each new subsection before it is added onto the rest of the system. This strategy also makes it easy to carry out repairs if leaks should develop, since a leaky module can be removed, fixed and tested away from the rest of the system. Furthermore, since replacing items that are merely suspected of being leaky is, in some cases, preferable to definitively identifying and repairing the cause of the leak (e.g. re-welding a cryostat vacuum joint with a cold leak), modularity can be beneficial in this way as well. There is sometimes a
tendency to avoid modular construction because of the extra costs and labor involved in providing the required additional flanges. However, other benefits – in the form of ease of assembly provided by a modular design, and the flexibility that allows modifications to be made to the system at a later time – come into play to make the use of modular construction well worth the extra effort [8].
6.5 Selection of materials for use in vacuum

6.5.1 General points

The proper selection of materials for use in vacuum is usually very important. It is especially significant in the case of ultrahigh vacuum or cryogenic apparatus, where even the smallest leaks can disable a system. Leaks in vacuum chamber materials caused by porosity may not appear immediately after the chamber has been assembled and evacuated. Oil, grease, or other contamination on the chamber surfaces (perhaps introduced during fabrication) can temporarily obstruct leak paths. Later on, when the contamination is inevitably altered and/or disturbed by some process (such as UHV bakeout) the leaks will reveal themselves [1,9]. During cool-down in cryogenic apparatus, contamination such as soldering flux or grease will embrittle and undergo thermal contraction, and this can open up a previously blocked leak path [10]. If cracks are present in a material, but do not initially fully penetrate it so as to cause leaks, they may grow in size under the effect of varying mechanical stresses. Such stresses can be caused by handling, or by temperature changes during, for example, UHV bakeouts or cryogenic operations. As they become larger, these cracks can eventually develop into fully fledged leak paths. Porosity is found in metals that have been cast in air, without any subsequent processing in the form of rolling or forging that can close the pores in the material. Air castings in brass or iron are particularly problematic in this regard [4]. In vacuum work, one should generally not use air-cast metals, but only those that have been subsequently either rolled or forged. Multi-axis forging is the preferred process, since it closes voids in a material and breaks up potential leak paths. However, forging is not always an option for some materials that are useful in creating vacuum devices. Vacuum-cast materials are also usually free of porosity, and are therefore acceptable [4].
If rolled metal is used to fabricate a vacuum vessel, care should be taken to ensure that the material is oriented in such a way that the voids that remain after rolling cannot become leak paths (see the discussion in Section 6.5.3 concerning stainless steel). For example, slices from plates of rolled materials should not be used to make UHV flanges, because of the transverse pores in these that could act as leak paths [9]. In order to avoid such problems, the vacuum wall should always be parallel to the direction in which material is rolled. The best material for flanges is usually that which has undergone forging (or “multi-axis forging”) [4].
Sintered metals, which are created not from a melt, but by the consolidation and compaction of a precursor powder material, are often porous and should generally not be used in vacuum systems. It is often necessary to employ ceramics in vacuum equipment. These are normally sintered, and unless they are carefully chosen, can be prolific sources of outgassing. Both sintered metals and ceramics may be nonporous if they have been processed in a particular way – using a hot isostatic pressing (HIP) technique. Their suitability should be established before such materials are used in a vacuum system. One should also be careful about the use of metals that may become porous upon heating (e.g. silver soldering), possibly owing to the presence of a volatile phase in the material. An example of this is “free-machining brass,” which contains lead [11]. The presence of even very small imperfections that extend through a piece of metal – particularly in the case of narrow materials such as thin-walled tubing – can result in the formation of leaks if the item is taken through many temperature cycles (as in the case of cryogenic apparatus) [12]. Porosity is occasionally found at or near the seams of welded tubing. Since many metals that are otherwise useful in laboratory work can cause difficulties in vacuum applications, the best question to ask when selecting materials is not “What materials can cause problems?”, but rather “What materials can be relied upon?”. If a particular joining technique is being contemplated (e.g. welding, or hard soldering), one must also find out whether a given material is compatible with that technique. In critical vacuum applications, such as UHV or cryogenic work, one should avoid making assumptions and using anonymous materials. Metals that are to be employed for such purposes should be unambiguously identifiable, with a known provenance from a reputable supplier. 
Even alloys that are normally suited for these applications can have flaws that may cause leaks. Hence, one should purchase only material of the highest quality. Top quality grades of high-performance materials such as stainless steel are generally provided with a test certificate. These indicate conformance to industry standards with regard to chemical composition and possibly also other parameters, such as strength and ductility. Such materials are also provided with a batch number, so that they can be traced to the original production run.
6.5.2 Leak testing raw materials

It is desirable to leak test materials used for critical applications before construction begins. The testing of raw materials permits the early identification of defects, long before the final machining and assembly of the apparatus under construction. Such measures are especially useful in the case of products that have a history of causing problems (such as brass bars), but are less important for those materials that tend to be very reliable (such as stainless-steel forgings). For instance, it is straightforward to leak test thin-wall tubing (which is known to sometimes contain defects) using a mass-spectrometer leak detector [12]. One can also test other raw material forms, such as bar and plate stock, using this method, if a suitable fixture can be devised to allow the object to be connected to a leak detector. For example, stainless-steel bars are evaluated for UHV applications by testing sample slices taken from their ends (both ends of a given bar) [15]. Objects being tested must be
very clean and dry, since contaminants can block leak paths [13]. A good way of ensuring this is to place the material in an ultrasonic cleaner, in order to remove oils and other contaminants, and then air bake it, so as to drive off moisture and cleaning solvents [15]. (Leak testing is discussed in more detail later in the chapter.) It is also important to identify the rest of the batch from which any defective raw material originated, so that it too can be scrapped.
6.5.3 Stainless steel

The most common types of stainless steel for use in vacuum apparatus are the “austenitic” or “300 series” alloys. Examples include the 304, 316, 321, 347, and 348 grades. The 303 “free machining” grade, which contains sulfur, should be avoided. Austenitic stainless-steel alloys are very frequently used in vacuum equipment because they are strong, easy-to-weld, stable, nonmagnetic, and relatively corrosion resistant. Stainless steels have a reputation for being very reliable materials for vacuum work (unlike, for example, brass). Despite the name, these “stainless steels” are not as impervious to chemical attack as is commonly believed. They are susceptible to galvanic corrosion and stress corrosion (see pages 99 and 101). This can lead to problems if such materials are exposed to certain chemicals that might be used without hesitation in the creation of vacuum vessels (certain solder fluxes, for example). In particular, they are susceptible to stress corrosion attack by chemicals containing chlorine, such as some solder fluxes, chlorinated solvents, certain adhesive tape ingredients, some marking inks, salt water (e.g. spray from the ocean – see page 67), and certain cleaning chemicals [14]. This form of degradation, which can result in the formation of pinhole leaks, can sometimes be a problem – particularly for the stainless steel bellows that are often used in vacuum systems. If stress corrosion in bellows is a serious issue, it may be worthwhile to avoid using stainless steel, and consider employing a more corrosion-resistant metal, such as the Ni–Mo–Cr alloy Hastelloy C-276 or the Ni–Cr–Mo–W alloy Hastelloy C-22 [14]. These materials are, like stainless steel, relatively easy to weld.
For applications in which soldering must be done (as in the case of thin-wall tubing in some cryogenic apparatus), copper–nickel tends to be easier to wet with normal solders than stainless steel, and is more reliable in the presence of active solder fluxes – see below. Particularly in critical applications, such as cryogenic and UHV systems, the correct orientation of the rolling direction of the stainless steel relative to potential leakage directions is very important. Impurities in a cooling ingot of stainless steel tend to concentrate at its top and center (see Fig. 6.1)[15]. After the top of the ingot (which contains most of the oxide and sulfide impurities) has been removed, the impurities that remain are drawn out by the rolling process into long filamentary inclusions. These are aligned along the rolling direction. If an injudicious selection of source material is made, these inclusions can act as leak paths. For example, if the base of the vacuum can in a cryogenic instrument is made from bar stock, leaks along axial inclusions (which may be undetectable under ambient conditions) can be unacceptably large when the can is at cryogenic temperatures
[Fig. 6.1 Origin and nature of inclusions in stainless steel (see Ref. [15]). The figure shows an ingot containing voids and inclusions (its top is removed before rolling), the direction of rolling, leak paths in billet and bar stock, and leak-tight tubing and plate/sheet stock.]
[16]. As discussed earlier, the use of multi-axis forged material can reduce the threat posed by inclusions. Another helpful measure is the use of stainless steel with a low impurity content, such as electro slag refined (ESR) material [5]. Series 300 stainless steels are susceptible to a form of deterioration that occurs during welding. This effect, known as “weld decay,” involves the reaction of carbon with the chromium in the region near the weld, and the consequent depletion of chromium in this area. The resulting chromium-deficient material is no longer stainless steel, and is particularly susceptible to corrosion. Furthermore, embrittlement of the resulting metal can lead to the formation of small cracks when the material is cooled to low temperatures, which can result in leaks. Weld decay can be avoided by using stainless steels with a low carbon content, such as the 304L or 316L grades. These have the disadvantage of being somewhat weaker than their higher-carbon counterparts (although in practice this does not usually cause problems), but are relatively easy to weld compared to other common stainless steel alloys. (As stainless steels go, such low-carbon alloys are also comparatively easy to braze and solder.) Alternatively, it is possible to obtain special “stabilized” grades, which contain titanium, niobium, or tantalum additives that preferentially react with the carbon, and thereby prevent the depletion of chromium. These include the 321, 347, or 348 alloys. In cryogenic work, the use of titanium-stabilized 321 alloy in thin-wall tubing is fairly commonplace. (However, this alloy is difficult to torch braze or solder.) The niobium-stabilized 347 grade is slightly preferable to the titanium-stabilized one, because the latter is known to exhibit porosity under certain conditions. In fact, for this reason it is usually a good idea to leak test 321 tubing before accepting delivery [17]. The issue of weld decay is discussed in more detail in Ref.
[9], and Ref. [17].
6.5.4 Brass

Brass has a mixed reputation in vacuum work. Although it is not used in UHV applications, owing to the high vapor pressure of the zinc, it is frequently used in cryogenics. It is relatively easy to machine, and to join by soldering or brazing. However, porosity or other sources of leaks can sometimes be a problem. For example, some brass (particularly extruded bars) can have cracks [18]. Some brass is made from recycled metal, and this is reputed to occasionally contain anomalies in the middle of a bar. The use of extruded brass should be avoided, if possible. In general, brass bars often contain longitudinal pores (as discussed in the case of stainless steel) that can act as leak paths [4]. Rolled brass sheets tend to have fewer flaws than extruded bars and sections. Hence for items such as blanks, rolled sheet or plate stock should be employed [12]. Brass that has been cast in air is likely to be porous. In vacuum work, only wrought metal (produced e.g. by rolling the cast material) should be used. A particular grade of brass is often used in laboratory work because it is easy to machine. This composition, called “free machining brass” or “free cutting brass,” contains up to 4% insoluble lead. Such material may be porous, or can become porous when it is heated during hard soldering. Hence, it should not be used when hard soldering is to be carried out, or in structures involving thin sections [11]. A suitable alloy for vacuum work is the 70/30 Cu/Zn material known as “cartridge brass” [11].
6.5.5 Phosphor bronze

This copper alloy is considered useful because of its strength. However, only the highest quality material is likely to be nonporous [11].
6.5.6 Copper–nickel

Thin-wall tubing made of the 70/30 Cu/Ni alloy is considered to be a reliable vacuum material. It is used frequently in cryogenic work, partly because of its low thermal conductivity. This metal is wetted easily by ordinary tin/lead solders, and has good resistance to corrosion, and in particular that caused by active solder fluxes. For these reasons, copper–nickel tubing is generally preferable to stainless steel when soldering must be done.
6.5.7 Copper

Copper is thought to be a reliable material for vacuum work [11]. The main problem with it is that it is difficult to weld, because of its high thermal conductivity. Generally, only certified “oxygen-free high conductivity” or “OFHC” copper is used in critical vacuum applications, because it is non-porous and otherwise well behaved [19]. Other coppers that contain oxygen can develop porosity if they are exposed to hydrogen at high temperatures – for example, during brazing or firing in a hydrogen atmosphere [9].
6.5.8 Aluminum

In many respects, aluminum (in particular, 6000 series alloys) is a good material for vacuum work, and is employed extensively in systems used in particle-physics research. However, it is relatively difficult to join by soldering, brazing, or welding. With sufficient expertise and care aluminum can be welded to the quality needed to contain ultrahigh vacuum. However, it is more difficult to weld than stainless steel, and the welds tend to be porous [20]. This is discussed in more detail on page 155. Furthermore, aluminum is soft, scratches relatively easily (compared with stainless steel – making it less useful for O-ring sealed flanges), and is difficult to clean [21]. In order to be completely UHV compatible, aluminum must be treated in order to minimize surface porosity. For these reasons, aluminum is not often used in ultrahigh vacuum work in cases where other materials are adequate. Alloys containing large amounts of zinc should be avoided [4].
6.6 Some insidious sources of contamination and outgassing

6.6.1 Cleaning agents

The chemicals that are used to remove contaminants from surfaces are themselves often sources of outgassing and contamination. In critical vacuum work, it is essential to ensure that cleaning agents and their impurities are not left behind at the end of a cleaning operation [4]. Solvents containing halogens (e.g. chlorine) can be especially difficult to remove from surfaces, and should therefore not be used to clean contamination-sensitive vacuum equipment. The solvent 1,1,1-trichloroethane (which is a good degreasing agent) will combine with adsorbed water vapor on nickel surfaces to form hydrochloric acid, which then attacks the nickel [21]. Furthermore, chlorinated solvents (as well as other chlorine-containing cleaning agents) present a risk of stress corrosion when used on some items such as stainless steel bellows [14]. Isopropyl alcohol is a useful solvent, which does not pose these problems [21]. Some particularly important examples of problematic substances are acid or alkali cleaning or pickling agents, and phosphoric acid electropolishing electrolyte – all of which can cause corrosion [4]. Another possible consequence of using acid cleaning is the “hydrogen embrittlement” of metals (such as welds, for example, if the cleaning is done before welding) [6]. If for some reason it is thought necessary to use such chemicals, expert advice should be sought. Complete washing and drying of the treated parts is essential [4]. A low-temperature anneal (150–200 °C) is necessary to remove dissolved hydrogen from acid-cleaned components prior to welding [6]. Substances that are used to mechanically abrade surfaces, such as abrasives and buffing soaps, are another cause of contamination [4]. The media used in sand or bead blasting operations can imbed themselves in surfaces. A consequence of the cleaning of stainless
Vacuum-system leaks and related problems
148
steel items with steel wool is that particles of iron can get trapped in their surfaces, which can result in the formation of corrosion pits and defects [19]. Unless parts that are cleaned in solvents (such as acetone or isopropyl alcohol) are properly dried (by warming to about 80 ◦ C), the solvent may remain in leak paths for periods of several days at ambient temperature. The result of this is that leaks can become blocked, and therefore not detectable using a leak detector. Nevertheless, they will reappear later (possibly when the part is in service) when the solvent has evaporated [4]. Furthermore, small amounts of solvents that are left in vacuum systems to be pumped away by ion pumps or titanium sublimation pumps can cause problems for these devices [21]. Edge-welded bellows are very difficult objects to clean and inspect. Solvents should not be used with these, since they are difficult to remove [9]. Vacuum, hydrogen, or argon baking are the preferred methods of cleaning. The use of either reactive-gas cleaning or glow-discharge cleaning may be beneficial in some cases (see page 212). Because of the thinness of the metal comprising edge-welded bellows, they are particularly susceptible to the presence of particulates in the convolutions, which can puncture the material when the bellows are compressed. For this reason, alkaline degreasing agents should not be used on such bellows, since they can precipitate particles [22]. The best approach, when dealing with edge-welded bellows, is to assume that they cannot be cleaned if they become contaminated, and to treat them accordingly. Cleaning procedures for vacuum systems are discussed in Refs. [22] and [23].
6.6.2 Vacuum-pump fluids and substances

Vacuum-pump oils can be an important source of trouble, even for systems that are not intended to be especially clean. The two major sources of such oil are rotary-vane-type primary pumps and diffusion pumps. While precautions can be taken to reduce the possibility of contamination (see Chapter 7), accidents still can and do occasionally happen. In the case of diffusion pumps, the use of silicone oil as the pump fluid can lead to particularly severe contamination problems (see the discussions on pages 97 and 196). For this reason, silicone should not be used in diffusion-pumped vacuum systems where cleanliness is highly important [19]. Oil-free pumps are available that can effectively eliminate this problem. These include:
(a) sorption pumps (primary pumps),
(b) scroll pumps with hermetically sealed bearings (primary pumps),
(c) piston pumps with hermetically sealed bearings (primary pumps),
(d) diaphragm pumps with Viton or Teflon diaphragms (primary pumps),
(e) turbomolecular pumps (high-vacuum pumps),
(f) ion pumps (high-vacuum pumps), and
(g) cryopumps (high-vacuum pumps).
These issues are discussed in more detail in Chapter 7.
Sorption pumps that utilize molecular sieve cooled by liquid nitrogen are frequently used to rough-pump ultrahigh-vacuum systems. However, the sieve material (synthetically prepared aluminosilicates) tends to degrade and become powdery after repeated thermal cycling. This powder can enter the vacuum system and form a deposit on surfaces, and can foul metal vacuum-valve seats. In order to prevent problems when using such pumps, they should be located at a distance from metal bakable valves, and the sieve should be replaced when powdering starts to occur [24]. (See also the comments on page 255.)
6.6.3 Vacuum greases

Greases in vacuum systems can act as contaminants in their own right, or by attracting other types of contamination, such as dust and other particulate matter, which can cause leaks in O-ring joints. Furthermore, in systems that must be substantially free of leaks (such as UHV and cryogenic apparatus), they can temporarily block porosity in materials, and thereby make it very difficult to leak-test such systems effectively.
A particularly problematic kind of grease is the type based on silicone oil (sometimes called “high-vacuum grease”). This substance is very difficult to remove from objects, and the oil within it tends to migrate across surfaces. Problems related to the use of silicones are discussed in more detail on page 97.
The use of vacuum greases in room-temperature vacuum systems is seldom required, except on dynamic O-ring seals. It should very seldom, if ever, be necessary to use grease in a UHV system (although special low-vapor-pressure greases are occasionally used to lubricate antifriction bearings in UHV systems). In cryogenic apparatus, grease is often used as a heat-sinking agent, and in order to make a type of demountable vacuum joint called a “greased cone seal.” In cases where the use of grease is necessary, it is best to avoid types containing silicone, except when dynamic O-ring seals or greased cone seals are involved. Those based on hydrocarbons, such as Apiezon, are generally preferable. Every effort should be made to keep greases away from leak-prone parts of the apparatus, such as solder and weld joints.

6.6.4 Other types of contamination

It is especially important to avoid introducing objects that are plated with cadmium into a vacuum system. This is particularly relevant if the system is to be baked, even to moderate temperatures. At elevated temperatures, cadmium melts and evaporates – thereby leading to contamination throughout the inside of the system. The material thereby deposited is very difficult to remove. Particular care must be taken to ensure that small cadmium-plated parts, such as screws and washers, are not confused with nickel-plated or stainless-steel ones, and placed inside vacuum apparatus [4].
Fingerprints are a common source of contamination inside UHV systems, since finger secretions have a very large vapor pressure [9]. Other types of contamination, such as silicones, can be picked up by the hands during normal laboratory activity. In general, fingerprints are virtually impossible to remove from surfaces merely by baking. For these reasons, it is important to use gloves when handling parts that are to be placed in ultrahigh vacuum. (See also the discussion on page 324 and Ref. [23].)
During the machining of parts intended for use in clean vacuum systems, cutting fluids or lubricants that contain silicones must be avoided.
Consideration should be given to the proper storage of vacuum components. Openings in chambers, valves, and other items should generally be closed off with a dust cover of some kind when they are not in use. Certain types of plastic (in particular, materials such as polyvinyl chloride and the like, which contain plasticizers) can contaminate sensitive surfaces. Therefore, storage containers and flange covers should be made of suitable materials, such as polyethylene or nylon. Discussions of the contamination issues that arise in the handling and packaging of vacuum components can be found in Ref. [23], and references therein.
6.6.5 Some common causes of contamination in UHV systems

Some frequent errors that lead to ultrahigh-vacuum-system contamination include (from Ref. [21]):
(a) exposure of the system to a mechanical oil-sealed primary pump without a trap (in fact, such pumps should generally not be used on UHV systems),
(b) allowing liquid-nitrogen cold traps on diffusion-pumped systems to warm up overnight or on long weekends,
(c) permitting molecular sieve from a sorption pump to enter the chamber (see above), and
(d) allowing a diode ion pump to operate in the unconfined glow-discharge regime, or “starting mode,” for long periods when the system contains organic substances.
6.7 Joining procedures: welding, brazing, and soldering

6.7.1 Worker qualifications and vacuum-joint leak requirements

In general, the creation of permanent vacuum joints (and particularly those for critical applications, such as UHV or cryogenics) should be done by professional technical staff – not research workers. This is especially true in the case of welding and vacuum brazing, and somewhat less important for torch brazing and soldering. Depending on the materials being joined, training and experience are often critical in ensuring a successful (structurally sound and leak-free) bond. For example, welding aluminum is significantly more difficult than welding stainless steel, and specialized training and experience are needed in order to do it properly. Rather than having important joining tasks done in-house by general laboratory technical personnel, it may be worth considering having them done externally by specialist companies.
Technicians who are not frequently involved with vacuum-system construction often do not realize how much care is needed to create joints that are leak-free at the sensitivity of a helium mass-spectrometer leak detector. Hence, it can be very beneficial to have the person who creates the joints watch and participate in the leak testing, so that they will appreciate the need for high-quality work.
Companies that specialize in welding for vacuum applications may be able to do helium leak testing in-house. The use of firms with such capabilities can save much time and effort in shipping leaking items back and forth between the laboratory and the firm’s premises. In any case, if an external company is employed to do the joining, the written specifications for the work should include an indication of the maximum acceptable leak rate. This should be a quantitative requirement – statements like “the chamber must be free of leaks” can easily lead to misunderstandings. Sometimes the upper limit is chosen to be the maximum sensitivity of the leak detector. In situations where low-temperature superleaks are a concern, special arrangements may have to be made.
Welders have been known to hide their mistakes (leaky joints) beneath a coating of transparent varnish or epoxy [9]. Such “repairs” stand a very good chance of leaking eventually, especially if the joint is subjected to temperature extremes and thermal cycling (see the discussion on page 180).
Professional organizations exist that are dedicated to the acquisition and dissemination of information about joining methods, and it may be useful to contact these if advice is needed. For example, in the USA, the Edison Welding Institute (EWI) provides such a service [25], while in the UK, The Welding Institute (TWI) fulfills this role [26].
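A quantitative leak-rate specification is unambiguous only if the units are stated, since helium leak rates are quoted variously in mbar·L/s, Pa·m³/s, and atm·cc/s. The conversion factors below are standard; the helper function itself (its name and structure) is merely an illustrative sketch, not something taken from this book.

```python
# Convert helium leak rates between the units commonly seen in
# vacuum-joint specifications.  Standard conversion factors:
#   1 Pa·m^3/s  = 10 mbar·L/s
#   1 atm·cc/s  = 1.01325 mbar·L/s
TO_MBAR_L_S = {
    "mbar_l_s": 1.0,
    "pa_m3_s": 10.0,
    "atm_cc_s": 1.01325,
}

def convert_leak_rate(value, from_unit, to_unit):
    """Convert a leak rate between mbar·L/s, Pa·m^3/s, and atm·cc/s."""
    in_mbar_l_s = value * TO_MBAR_L_S[from_unit]
    return in_mbar_l_s / TO_MBAR_L_S[to_unit]

# A written spec for a helium-leak-tested UHV weld might read
# "< 1e-10 mbar·L/s"; in SI units that is 1e-11 Pa·m^3/s.
spec = convert_leak_rate(1e-10, "mbar_l_s", "pa_m3_s")
print(f"{spec:.1e}")  # 1.0e-11
```

Writing the limit this way in a work order avoids the "free of leaks" ambiguity discussed above.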
6.7.2 General points

6.7.2.1 Introduction

Welding is generally the preferred method of making permanent joints. Some materials (such as brass) cannot be welded. Brazing is usually practical, and is next in order of preference after welding [4]. Soldering should generally be avoided, if possible, especially if it requires the use of highly corrosive fluxes, as is usually the case for stainless steels. In general, solder joints are weak in comparison with weld and braze joints. Furthermore, solder joints deteriorate slowly over time. Hence, in contrast with weld and braze joints, they should be considered non-permanent [27].
Joining using adhesives (e.g. epoxy) has limited application in vacuum work. It is sometimes employed in cryogenic research (e.g. for the assembly of plastic mixing chambers in dilution refrigerators). Adhesives are also occasionally used for the temporary repair of leaks. Aside from leak repair, the use of adhesives for vacuum joining will not be discussed further.
Generally, methods of joining that do not involve the use of fluxes are to be preferred over ones that do. Examples of the former are tungsten inert gas (or TIG) welding, vacuum brazing, and ultrasonic soldering.
6.7.2.2 Semi-permanent joints

Sometimes soldering may be thought desirable in order to allow the creation of compact, semi-permanent joints that can be easily disconnected and remade if necessary. This is often not practical with welding or brazing (but see the discussion of semi-permanent “weld-lip seals” on page 245). However, it is normally better to make such a joint using a demountable connection with an O-ring or a metal (e.g. indium) seal, rather than by soldering. Solder joints cannot, in practice, be taken apart and remade indefinitely, because brittle intermetallic compounds form as a result of reactions between the solder and the materials being joined. If the joint is repeatedly soldered, or the solder is held in a molten state for an excessive time, these compounds will accumulate, and will ultimately cause the joint to degrade [27].
6.7.2.3 Cleaning of surfaces prior to bonding

The most important condition that must be fulfilled in order to achieve a good bond, using any method, is cleanliness of the surfaces. Care must be taken to ensure that these are free of, for instance, oils and greases, varnishes, paints, tape residues, and “indelible” marker-pen inks. One should be particularly alert to the possible presence of silicone oils and greases, which are often found in the laboratory, can be very difficult to remove from surfaces, and are very effective at preventing bonding (see the discussion on page 97). Other problematic contaminants include dust, filings, corrosion products produced by the storage environment, salt, and pollution from nearby factories. Heavy oxide layers are also an impediment to bonding, and should be removed beforehand.
The removal of contaminants from surfaces by mechanical methods prior to bonding is an effective approach in cases of very severe contamination or oxidation. The use of clean wire brushes (e.g. stainless steel brushes for stainless steel surfaces) is generally acceptable [28]. However, methods involving the use of abrasive grinding compounds, such as sand or bead blasting, or abrading with alumina, silicon carbide, or steel grit, should be avoided, if possible, because particles tend to become imbedded in surfaces [28, 29]. Such particles subsequently inhibit wetting of the surfaces by the bonding material, such as the solder or the weld metal, or cause defects in other ways. The use of fibrous materials, such as tissue paper or cotton swabs, to clean materials before joining can cause problems, because the fibers may catch on burrs in the metal, where they form leak paths during bonding [8]. As mentioned above, ordinary steel wool should not be used to clean stainless steel, because particles of iron can become imbedded in the surface [19]. Methods of cleaning materials prior to joining are discussed in Ref. [30].
Parts that have been cleaned in preparation for joining should be immediately placed in suitable plastic (e.g. polyethylene) bags in order to protect them. Stainless steel items that are to be welded should not be wrapped in aluminum foil, since aluminum can rub off onto these and reduce the quality of the resulting weld.
6.7.2.4 Improving bonding characteristics with surface coatings

Some metals, such as stainless steels, can be difficult to bond by brazing or soldering (especially the latter). The ease of joining these materials can often be considerably improved by electroplating or vacuum coating (e.g. by sputtering) the surfaces with some more easily bondable metal, such as nickel. For example, the rims of beryllium discs used for cryogenic vacuum windows can be electroplated with a layer of metal such as Cu, Ni, or Ag, which allows them to be soldered. The resulting seals are leak-tight against superfluid helium [31]. When plating is used to improve the solderability of difficult materials, the plated deposits must be free from porosity. This is in order to avoid oxidation of the substrate metals by salts in the plating baths, which would reduce the ability of solders to wet the surfaces [27]. A list of appropriate plated deposits for improving the soldering properties of various difficult-to-solder substrate materials is provided in Ref. [29]. After parts have been electroplated, they should be vacuum baked to 200 °C or more in order to drive off residual moisture, and to ensure that there will be no blistering of the electroplated material.
6.7.2.5 Use of eutectic alloys for soldering and brazing

Generally, brazing and soldering alloys should be eutectic compositions – that is, the compositions with the lowest melting points of all the possible combinations of the constituent elements [12]. For example, in the case of tin–lead solders, the 63/37 Sn/Pb alloy is the eutectic composition (see the Sn–Pb phase diagram on page 417). The reason for using eutectic alloys is that they go directly from the liquid state to the solid state upon cooling, with no intermediate pasty phase. As a result, they are more likely to flow smoothly through the joint than other alloys, and there is a reduced likelihood of the formation of voids in the solidified material. Also, if the items undergo movement during solidification of the solder or braze alloy, there is a lower probability of cracks forming in the solid metal.
6.7.2.6 Joints exposed to thermal stresses

In cases where large changes in the temperature of the joint are a possibility, and particularly where cryogenic conditions are involved, it is preferable that the items being joined be made of the same materials. This should be done in order to reduce stresses due to differences in thermal expansion and contraction. Cyclic changes in such stresses caused by cooling to low temperatures and warming to room temperature can eventually result in fatigue-related cracks and leakage. A major advantage of autogenous welding (discussed on page 154) is that all the materials, including the weld metal, are identical. However, the above guideline is especially important in the case of soldered joints, which are inherently weak, and generally become brittle at low temperatures. If the use of different materials is unavoidable, the design of the joint should be such that the filler material is placed under compression rather than tension when the temperature is changed, in order to prevent the formation of cracks. For example, if two tubes are being soldered together to form a sleeve joint (see page 162), which is to be thermally cycled between room temperature and cryogenic temperatures, the outer tube in the joint should have a larger thermal contraction coefficient than the inner one [12].
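The sleeve-joint guideline above can be checked numerically. The sketch below uses representative integrated contraction values (ΔL/L from room temperature to 4 K) of roughly 0.30% for stainless steel, 0.33% for copper, 0.38% for brass, and 0.41% for aluminum; these are approximate handbook figures quoted here only for illustration, and the function name and structure are my own, not from this book.

```python
# Approximate total thermal contraction from 293 K down to 4 K,
# expressed as delta-L / L (representative handbook values).
CONTRACTION_293K_TO_4K = {
    "stainless_steel": 0.0030,
    "copper":          0.0033,
    "brass":           0.0038,
    "aluminum":        0.0041,
}

def sleeve_joint_check(outer, inner, diameter_mm):
    """Estimate the diametral mismatch of a soldered sleeve joint on
    cooling to 4 K.  A positive result means the outer tube shrinks
    onto the inner one, putting the solder in compression (good);
    a negative result means the joint is pulled open (risk of cracks)."""
    mismatch = (CONTRACTION_293K_TO_4K[outer]
                - CONTRACTION_293K_TO_4K[inner]) * diameter_mm
    return mismatch  # mm of diametral interference

# A brass outer tube over a stainless-steel inner tube (10 mm diameter):
# the outer tube contracts more, so the solder is compressed on cooldown.
print(sleeve_joint_check("brass", "stainless_steel", 10.0) > 0)  # True
```

Swapping the materials (stainless outer over brass inner) gives a negative mismatch, i.e. the filler would be put in tension, which is the configuration the text warns against.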
6.7.2.7 Joining of bellows

Metal bellows (or “flexible tubing”) are generally especially susceptible to the formation of leaks by corrosion. For this reason, it is preferable that any method that is used to join bellows does not involve the use of flux, and that if it is necessary to use flux, it is thoroughly removed afterwards [10]. Welding, using the orbital or electron-beam techniques, is the preferred approach. Brazing has the drawback that the high temperatures involved tend to anneal the bellows material, which makes it susceptible to fatigue failure. Methods of brazing bellows are discussed in Ref. [30]. In many cases, the best strategy for joining bellows is to obtain ones with pre-attached flanges, and to use demountable seals.
6.7.3 Reduced joint-count designs and monolithic construction

It can be worthwhile to spend some time designing vacuum components in such a way as to minimize the number of joints. Even good-quality weld joints are generally inferior to bulk material, since the welds are essentially cast (rather than rolled or forged), and are therefore more brittle than the parent metal. Ideally, one might aim for a monolithic construction, in which an item is machined out of a single billet of material (preferably vacuum melted and/or forged) without any weld, braze, or solder connections. (As usual, such an aim must be balanced against the need for modularity.) In practice, this ideal of joint-less construction may not be attainable. However, in consultation with the machinist who will be making the item, it is often practical to significantly reduce the number of connections in a component. Furthermore, with modern CNC mills or turning centers (i.e. milling machines or lathes operating under computer control), it is possible to create vacuum components with few or no joints, and (if necessary) with a complexity that would not be practical using ordinary milling machines or lathes operating under human control.
A good example of this principle of “monolithic” or “monocoque” construction can be found in the “compact UHV chambers” that are now commercially available. Such items have other advantages besides a reduced number (or complete absence) of welds. These include lower weights, higher levels of cleanliness, greater rigidity and strength, and better alignment of holes and mounting surfaces than is possible with the usual welded assemblies of flanges and tubes. Such chambers can also possess very complicated geometries, without incurring the normal penalties in cost or reliability.
6.7.4 Welding

6.7.4.1 Arc welding

Normally, welds for vacuum joints are created by the “arc welding” method, which involves generating heat by forming a high-current electric discharge between an electrode and the materials to be welded. In particular, the TIG (tungsten inert gas) welding (or “autogenous welding”) method is the preferred one. This involves using a tungsten rod as the counter-electrode, and creating the arc in a flow of inert gas (usually argon) in order to prevent oxidation of the materials. This technique involves melting the adjoining surfaces together, rather than adding extra material (as is the case for other arc-welding methods). The resulting bond is therefore completely homogeneous with the surrounding metal, and less susceptible to failure due to (for example) thermal stresses than might be the case if a separate filler material were used. Methods of welding that use flux-coated filler rods must be avoided [28].
A crucially important factor in the creation of leak-free welds is the training, experience, and skill of the welder. A good welder can make the difference between trouble-free joints, and those that lead to a vast amount of wasted time – involving hunting for leaks, patching them up, and retesting and rewelding them [28]. Ideally, this welder would be a person who works full time on welding vacuum systems, preferably involving a single type of material, such as stainless steel.
The use of automatic or semi-automatic welding methods (generally provided by specialist companies) is preferable to manual techniques. A good example of this is the “orbital welding” method, which is used to join tubes. Such techniques are capable of producing excellent welds very reproducibly.
An important factor in the success of a welded joint is its design. In the case of TIG welding, the joining of parts with widely differing thicknesses should be avoided, if possible. Otherwise, as when thin-wall tubing is to be joined to a flange, it may be desirable to machine the parts with thicker sections so that the materials presented to the arc are of similar thickness, and therefore will melt at comparable rates. This is done via the provision of “weld relief grooves,” and the like. It is also generally beneficial if the parts are made to sufficiently close tolerances that they fit together without the presence of large gaps. Such issues are discussed in more detail in Ref. [8].
6.7.4.2 Welding of specific materials

Stainless steel is the most common vacuum material that is also relatively easy to arc weld. See the comments on page 145 about selecting stainless steel alloys so as to avoid “weld decay,” which can lead to the formation of corrosion and cracks. In order to avoid the presence of potential leak paths due to filamentary inclusions in certain forms of raw material (as discussed on page 144), it is necessary to use suitable joint designs, as shown in Fig. 6.2.

[Fig. 6.2: Examples of welding setups designed to avoid leaks due to filamentary inclusions in the material (see Ref. [28]). (a) A leak path through the chamber wall; weld on the inside, if possible (otherwise, make a full-penetration weld from the outside). (b) Weld bead placed on the inside of the vacuum chamber.]

Aluminum alloys can be welded, but not so easily as stainless steels [9]. Special expertise is required in order to make welds in aluminum that can be expected, with confidence, to be leak-tight. Otherwise, welds in aluminum tend to be porous [20]. Many welders who can do excellent work on stainless steels are unable to pass a welding qualification test for aluminum [32]. Proper preparation of the materials prior to welding, and the correct welding technique (preferably automated), are required.
Copper (including even OFHC copper) is difficult to weld reliably, partly because of its high thermal conductivity [28]. (In general, materials with a high thermal conductivity, such as copper and aluminum, are difficult to weld.) The use of copper-alloy filler metals (e.g. 0.75% Sn, 0.25% Si, 0.2% Mn, balance Cu) during the arc welding of copper is the best approach [28]. Brass cannot be welded, because of the high vapor pressure of the zinc. Copper–nickel alloys should not be joined using the TIG-welding method, because the resulting welds can be porous. However, arc-welding techniques that involve the use of appropriate filler metals can be used with these materials.
6.7.4.3 Electron-beam welding

For some particularly difficult or high-performance welding tasks, consideration might be given to the use of “electron-beam welding.” This procedure involves projecting a sharply focused, high-power beam of electrons into the joint, which results in melting and fusion of the joint surfaces (without using a filler metal). The resulting welds can be very narrow and deep. Because the process generally takes place under high vacuum (about 10⁻² Pa), the weld joints are very clean and have minimal porosity. Also, the metallurgical properties (e.g. the grain size) of the material in, and adjacent to, the weld are generally superior to those obtainable with arc welding [28]. Of the available welding techniques, electron-beam welding generally produces the highest-quality joints. Distortion of welded components is also lower in the case of electron-beam welding than with other kinds. With this method, it is also feasible to join a greater range of dissimilar metals than is possible with other welding processes. Furthermore, the joining of thick to thin sections, and the bonding of delicate parts, is more easily done than with other techniques. The disadvantages of electron-beam welding include the need to carry out the process in a vacuum (although in certain cases, with some tradeoffs, this can be avoided), and the need for considerable expertise and very expensive equipment. Normally, electron-beam welding is performed by specialist companies. This process is described in more detail in Refs. [28] and [33].
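The need for vacuum in electron-beam welding can be made concrete with a gas-kinetic estimate. For air at room temperature, the mean free path of residual gas molecules is λ = kT/(√2·π·d²·p); at the ~10⁻² Pa working pressure mentioned above, this is of order a metre, long compared with typical gun-to-workpiece distances. The sketch below is a generic illustration (the N₂ molecular diameter is an approximate textbook value); it is not a calculation taken from this book.

```python
import math

# Gas-kinetic mean free path: lambda = k*T / (sqrt(2) * pi * d^2 * p)
K_B = 1.380649e-23   # Boltzmann constant, J/K
D_N2 = 3.7e-10       # effective molecular diameter of N2, m (approximate)

def mean_free_path(pressure_pa, temperature_k=293.0, diameter_m=D_N2):
    """Mean free path (in metres) of gas molecules at a given pressure."""
    return (K_B * temperature_k /
            (math.sqrt(2) * math.pi * diameter_m**2 * pressure_pa))

# At the ~1e-2 Pa working pressure of an electron-beam welder, the mean
# free path works out to roughly two-thirds of a metre, so scattering of
# the beam by residual gas is small over typical working distances.
print(round(mean_free_path(1e-2), 2))
```

The same estimate, run at atmospheric pressure, gives tens of nanometres, which is one way of seeing why out-of-vacuum electron-beam welding involves significant tradeoffs.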
6.7.4.4 Some causes of leaks in welded joints

Leaks in welded joints can be caused by, for example, impurities, incompatible welding alloys, and incorrectly set welding parameters [5].
“Hydrogen embrittlement,” caused by the presence of dissolved hydrogen in the metal (frequently a result of acid cleaning the components prior to welding them), can result in the formation of very fine cracks in the heat-affected zone surrounding the weld joint. This can occur under the influence of the stresses caused by thermal contraction following welding. The dissolved hydrogen may be removed by heating the parts at temperatures of 150–200 °C [6].
Residual stresses in a welded vacuum structure can result in cracking of a nearby weld. Such stresses can be reduced by a medium-temperature (400 °C) stress-relief treatment. This is not normally needed in the case of simple structures, unless stress corrosion is a possibility [6, 28]. A major advantage of electron-beam welding over other types is that it results in very little residual stress in the parts.
6.7.5 Brazing

The bonding of two metals using a filler alloy with a lower melting point (but one above 450 °C) is called “brazing,” or sometimes “hard soldering” (as opposed to “soft soldering,” which refers to the use of alloys that melt below 450 °C) [9]. Brazing, where this can be done in a furnace under vacuum (i.e. “vacuum brazing”) or in hydrogen, is well suited to vacuum applications [28]. Vacuum or hydrogen brazing operations are normally carried out by specialist companies.
However, the practice of brazing by heating parts with a hand-held torch, and the use of paste fluxes, should be undertaken with care, and avoided if possible. This is because of the risk of entrapment of flux in the joint, with a consequent reduction of strength and the formation of voids – and hence, potential leak paths [30]. Washing the parts afterwards, although essential, may not be completely effective in removing the flux. (Boiling water is useful for this purpose.) Another potential problem with hand-held torch brazing is the possibility of overheating the joint, with the consequent formation of porosity. Unfortunately, the use of torch brazing is sometimes very difficult to avoid [28].
The presence of certain brazing fluxes in a vacuum system (in particular, those containing fluorine) can lead to the poisoning of non-evaporable getter (NEG) vacuum pumps, and may necessitate their replacement.
If copper is to be torch-brazed, it is possible to use phosphorus-containing filler materials that eliminate the need for flux. These fillers, including the “Phos-Copper” and “Sil-Fos” types, can be used without difficulty in critical high-vacuum applications if they are subjected to a high-temperature bakeout after brazing [13]. Joints made using such filler materials may be weaker than those created with the aid of fluxes, however [34].
Important considerations when brazing is done include the following.
(a) Use eutectic brazing alloys (so that the metal goes directly from the liquid phase to the solid phase when cooling, with no intermediate pasty phase that could lead to cracks if the parts should move at this time).
(b) Avoid Cd-, Zn-, or Pb-bearing brazing alloys if the brazed items are to be used in vacuums at pressures of less than 10⁻⁴ Pa, if the items are to be baked, or if a vacuum furnace is used to braze, since these elements have a high vapor pressure that will cause problems under such conditions [9, 33].
(c) Select filler materials that are compatible with the metals to be joined, so that the formation of brittle intermetallic compounds, and stress-corrosion cracking, are avoided [13].
(d) Unlike welds, brazed connections must be provided with mechanical support, e.g. by using “sleeve joints” (see the discussion on page 162).
(e) If vacuum brazing is to be used, the joint should be designed with this in mind – including the provision of suitable joint clearances and places to hold the filler metal during the brazing operation.
(f) If hand-held torch brazing must be used, and if contamination of the vacuum system is not an important issue (as is normally the case for cryogenic systems, and systems operating at pressures of more than 10⁻³ Pa), a nearly eutectic 50% Ag–Cu–Zn–Cd brazing alloy with a low melting point and low viscosity (i.e. good flow properties) may be preferable to a grade without the high-vapor-pressure ingredients [12, 33].
(g) If torch brazing must be used, design the joint so as to avoid blind holes, crevices, or tortuous passageways in which flux may become trapped.

The formation of voids and flux inclusions can also be minimized by (a) ensuring that the parts are heated evenly during the brazing operation, so that all surfaces are at nearly the same temperature when the filler metal is applied; (b) providing a uniform joint clearance; and (c) ensuring that the condition of the surfaces to be bonded is uniform.
Brazing for vacuum applications is discussed in Refs. [12], [13], [20], and [30]. Complete discussions of brazing in general are provided in Refs. [27] and [33].
6.7.6 Soldering
6.7.6.1 Introduction
As discussed earlier, in comparison with weld and braze connections, solder joints have the disadvantage of being relatively weak, susceptible to fatigue failure, and prone to gradual degradation. (Throughout this chapter, “solder” refers to soft solder, unless otherwise indicated.) The degradation is caused by the slow coarsening of the grain structure of the solder, and by the formation of brittle intermetallic compounds at the solder/substrate interface (see the discussion on page 416). The most serious difficulty in using solder for vacuum sealing applications is often that of getting it to adhere adequately to the surfaces of certain common vacuum materials, such as stainless steel [28]. In order to overcome this problem, highly corrosive fluxes are frequently employed. Because it may be almost impossible to completely remove these fluxes from the parts after the soldering has been completed, such joints are vulnerable to corrosion and leaks over time.
In spite of these drawbacks, in practice it is sometimes necessary to use solder joints. In particular, they are often employed in cryogenic vacuum applications, where very thin-wall tubing and other delicate materials must be joined, and where appropriate welding or brazing operations might involve expensive or unavailable equipment, highly specialized skills, and considerable cost.
6.7.6.2 Causes of leaks in solder joints
The most serious threats to hermetic sealing in solder joints are the presence of voids in the solder, and non-wetted regions of the soldered surfaces [29]. The former condition is often caused by excessively large gaps between the surfaces to be soldered, by the use of non-eutectic solder, or by flux trapped in the solder. The latter condition is due to: employing materials that are difficult to wet with solder, lack of cleanliness of the surfaces, the use of inadequate solder fluxes, or insufficient heating. Other common sources of problems include (a) overheating the solder, which causes porosity, and (b) allowing corrosive flux to remain on the surfaces of thin materials, where it causes corrosion and pinholes [10].
6.7.6.3 Problems due to flux, and the choice of flux
Items that are particularly vulnerable to leaks caused by flux-induced corrosion include thin stainless-steel sheet and tubing; and brass, bronze, or stainless-steel bellows [10]. For the corrosive flux that is commonly used with stainless steel (e.g. ZnCl2 in hydrochloric acid), the time scale for the corrosion of thin-wall tubing is said to range from weeks to years [8]. With corrosive fluxes, one should also be concerned about the possibility that a mist of flux will form in the air during soldering, which can land on other parts of the vacuum apparatus, including nearby tubing [8].
If possible, inorganic acid and salt fluxes (such as ZnCl2 in hydrochloric acid) should be avoided in favor of organic acid types, such as glutamic acid–urea fluxes [13]. Unfortunately, the strongest (i.e. most adherent) joints between metal surfaces are generally made using strong (i.e. corrosive) fluxes. Organic fluxes can also be more difficult to use than inorganic ones, owing to their sensitivity to overheating [8]. Rosin flux and rosin-core solder should not be used in the soldering of vacuum components, because the flux can remain in the solder in the form of fine threads, which can ultimately lead to leaks [8]. A guide to the selection of fluxes for various metals is provided in Ref. [13].
6.7.6.4 Selection of solders
Purity requirements
Alloys used for soldering should be of the highest quality – and, in particular, free of the impurities that can reduce their ability to wet surfaces. For example, aluminum at levels of more than 0.005% in solder is sufficient to cause inadequate wetting, and other problems [33]. Normally, solders from reputable suppliers should be sufficiently pure for most purposes (and one should not be tempted to obtain solder from any source other than a reputable supplier).
Solders for room-temperature applications
For applications at or near room temperature, and especially those involving stresses due to vibrations or other causes that cannot be adequately supported by the items being joined, the 95/5 Sn/Ag alloy is a good choice. An example of such a situation is a joint in a copper roughing line that is subjected to vibrations from a nearby mechanical pump (an occasional cause of solder-joint failure due to fatigue). The Sn/Ag composition is considerably stronger than the normal Sn/Pb alloys, and joints made with it will last much longer under such conditions [21].
Solder joints in low-temperature applications
The use of soft-solder joints in cryogenic apparatus requires careful consideration. Most common solders become brittle at very low temperatures. Hence, they form cracks very easily, especially after repeated mechanical stresses resulting from temperature cycling. In professional “large-scale” cryogenic engineering applications, soft solders are generally not used, because their low tensile strength and brittleness frequently result in leaks [35]. Even in small-scale experimental work, soft solders should be avoided, if possible [36].
The only solders that retain their toughness and ductility at low temperatures, and also have the necessary wetting properties to allow them to adhere adequately to surfaces, are the indium alloys. Such solders have been recommended for sealing applications in cryogenic environments [28,29]. Their dependability under low-temperature conditions is well established. For example, the 52/48 In/Sn eutectic alloy has been used on electrical contacts in cryogenic computing devices, where high reliability after repeated stress cycles is essential. The ductility of this solder is relatively high: its elongation (a measure of how much a material can be stretched before it breaks) at both 4.2 K and 77 K turns out to be roughly 20% [37].
The major disadvantage of indium as a solder is its tendency to corrode, especially when exposed to halides such as chlorine [29]. Since active solder fluxes often contain chlorine, this is a major drawback. If indium solders are used with chlorine-containing fluxes, it is crucial to remove any flux residues remaining after soldering [29]. Since, in practice, it may not be possible to completely remove residual fluxes from a joint, the creation of a vacuum seal using indium solder and an active flux is a risky undertaking. The preferred method of avoiding this problem is to use a fluxless soldering technique, such as “ultrasonic soldering.” Another approach, which may allow a less-active flux to be employed, involves the use of an inert cover gas to protect the joint materials from oxidation. These techniques are discussed on pages 163–164.
Indium solders can also be exposed to chlorine from other sources – such as chloride-bearing pollution in the atmosphere, airborne salt spray from the ocean, and cleaning chemicals. They also tend to degrade in humid environments, and if condensation is present. To protect them from external causes of corrosion, it is recommended that indium solder joints be coated with a conformal protective coating, such as a lacquer – particularly if the exposed area of the solder is large. The alkyd resin lacquer known as “Glyptal” has been suggested for this purpose [30], but its ability to survive thermal cycling to cryogenic temperatures is doubtful. The use of such protection leads to yet another difficulty: that of being unable to adequately leak-test the solder joint.
Indium solders have other disadvantages. They are mechanically very weak, even compared with ordinary tin–lead solders. They are also very expensive and somewhat difficult to obtain, compared with other types of solder. Hence, the use of indium alloys in cryogenic solder joints is less attractive than it might appear at first sight. If indium is to be used in a vacuum joint, a much better approach than soldering with it is to use it as a seal in a demountable connection. These “indium seals,” comprising a pure indium wire compressed between two flanges, are widely used and highly reliable devices. Indium seals are discussed on page 242.
An alternative to indium solders is an ordinary tin–lead solder. In particular, extensive experience has shown that the 63/37 Sn/Pb eutectic composition is very easy to use, with good wetting and flow characteristics, and is relatively reliable in cryogenic vacuum applications [12]. This solder is much less affected by corrosion than the indium alloys (and therefore does not require a protective coating), and is also relatively strong. The ductility of 63/37 Sn/Pb solder is very poor at low temperatures when compared with that of the indium alloys (the former having a 2% elongation at 4.2 K).
However, it is considerably better than that of the tin–silver solders, which are manifestly brittle (e.g. the 96/4 Sn/Ag eutectic has a 0.6% elongation at 4.2 K) [38]. (The brittleness of the latter alloy has nothing to do with the so-called “tin pest,” which is the product of the allotropic transformation from the body-centered tetragonal to the brittle diamond cubic phase of this element. That transformation occurs only in relatively pure tin, under special conditions.) The tin–silver eutectic solder is also commonly employed in cryogenic vacuum equipment, presumably on account of its high room-temperature strength. However, the reliability of solders with such a high percentage of tin is questionable at low temperatures, since under such conditions they are vulnerable to fracture. The fatigue properties of metals in the “low-cycle” regime (i.e. involving a relatively small number of stress cycles, as in the case of cryogenic equipment undergoing thermal cycling) improve with increasing ductility. Hence, since fatigue is a common cause of failure in solder joints, the superior ductility of the Sn/Pb eutectic compared with that of the 96/4 Sn/Ag alloy can be expected to be beneficial.
An additional advantage of the Sn/Pb eutectic is that a good joint, which has been made using flux (as opposed to ultrasound), has a shiny luster, whereas the Sn/Ag solders generally have a dull appearance, whether the joints are good or bad. Since visual inspection is an important method of determining the quality of solder joints, the Sn/Pb eutectic has an intrinsic advantage in this regard.
Non-eutectic solder compositions should generally be avoided. For example, Sn–Pb alloys with a very high lead content, which are sometimes advocated for cryogenic applications because of their ductility at low temperatures, have very poor wetting and flow properties. Detailed information about the characteristics of solders in cryogenic environments can be found in Ref. [39], and references therein.
Fig. 6.3 Sleeve joint used for the connection of tubes. (The figure shows two coaxial tubes, with solder filling the gap between them.)
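The low-temperature elongation figures quoted above can usefully be kept in one place when choosing a solder for a joint that will be thermally cycled. The following sketch tabulates the values cited from Refs. [37] and [38]; the helper function is illustrative only and is no substitute for the detailed data in Ref. [39].

```python
# Elongation at 4.2 K for the solders discussed above, in percent
# (values as quoted in the text from Refs. [37] and [38]).

ELONGATION_AT_4K = {
    "52/48 In/Sn": 20.0,
    "63/37 Sn/Pb": 2.0,
    "96/4 Sn/Ag": 0.6,
}

def rank_by_low_temperature_ductility(solders=ELONGATION_AT_4K):
    """Rank candidate solders by elongation at 4.2 K.  Higher
    ductility is better for low-cycle fatigue resistance under
    thermal cycling."""
    return sorted(solders, key=solders.get, reverse=True)

print(rank_by_low_temperature_ductility())
# ['52/48 In/Sn', '63/37 Sn/Pb', '96/4 Sn/Ag']
```

As the ranking shows, the In/Sn eutectic is by far the most ductile, with the Sn/Pb eutectic an order of magnitude behind and the Sn/Ag eutectic a further factor of three below that.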
Problems caused by mixing solders
Whatever solder alloy is employed, it is essential either to use it consistently throughout an experimental area, to keep a record of where different alloys are being used, or to mark the individual joints. This is because mixing different solders (possibly during the repair of a joint) often results in alloys with poor properties (see the discussion on page 418). Soldering-iron tips should also be separated according to the alloy they are used with.
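The record-keeping suggested above can be as simple as a logbook, but it can also be sketched in software. The class below is a hypothetical illustration (the names are invented): it logs the alloy used on each joint and refuses an attempt to rework a joint with a different alloy.

```python
# Minimal record-keeping sketch: one alloy per joint, with a check
# against accidentally mixing alloys during a repair.

class SolderLog:
    def __init__(self):
        self._joints = {}   # joint name -> alloy used

    def record(self, joint, alloy):
        """Log that `joint` was made (or repaired) with `alloy`.

        Raises ValueError if the joint was previously made with a
        different alloy, since mixing solders often yields an alloy
        with poor properties.
        """
        previous = self._joints.get(joint)
        if previous is not None and previous != alloy:
            raise ValueError(
                f"{joint!r} was previously made with {previous}; "
                f"reworking it with {alloy} would mix the alloys")
        self._joints[joint] = alloy

log = SolderLog()
log.record("roughing-line elbow", "63/37 Sn/Pb")
log.record("roughing-line elbow", "63/37 Sn/Pb")   # repair, same alloy: fine
try:
    log.record("roughing-line elbow", "96/4 Sn/Ag")
except ValueError as err:
    print(err)
```

The same discipline should extend to marking soldering-iron tips, which a registry of this kind cannot enforce.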
6.7.6.5 Design of the solder joint, and the soldering process
It is essential that the solder in a vacuum joint should not be required to support any substantial mechanical load. The only requirement of the solder should be to provide sealing. Mechanical support is often supplied by using a “sleeve joint” design, as in the example shown in Fig. 6.3. This reliable arrangement provides support in all directions except the axial one, and the large area of the solder in the joint ensures that any stresses within it are small. The clearance between the surfaces should range between 75 µm and 250 µm, although the optimum value for strength and ease of soldering is about 125 µm [27]. In cases where the solder joints are to be used in cryogenic applications, a smaller clearance range of between 75 µm and 100 µm has been recommended [12].
In order to ensure that the parts to be joined are adequately wetted by the solder, the “sweat soldering” method is normally used. This involves tinning the appropriate regions of the items to be joined with solder separately, using flux as required. If the soldering is done by using a torch, the flame should not be played directly on the area to be wetted by the solder, particularly if this is an indium type [30]. Since overheating may result if thin-wall stainless-steel tubing is exposed directly to a flame, a soldering iron should be used to tin such materials [12]. Irons with heater power levels of a few hundred watts are suitable for this. A flame may be used to heat other parts of the joint during soldering.
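The clearance ranges quoted above are easy to get wrong when machining sleeve-joint parts, so a small check can be worthwhile. The sketch below encodes the 75–250 µm general range and the 75–100 µm cryogenic range from the text; the function name is illustrative.

```python
# Sketch of a clearance check for the sleeve joint of Fig. 6.3.

def sleeve_clearance_ok(clearance_um, cryogenic=False):
    """Check the radial gap between the coaxial tubes against the
    recommended range: 75-250 um in general [27], narrowed to
    75-100 um for cryogenic joints [12]."""
    low, high = (75.0, 100.0) if cryogenic else (75.0, 250.0)
    return low <= clearance_um <= high

print(sleeve_clearance_ok(125))                   # near the 125 um optimum
print(sleeve_clearance_ok(125, cryogenic=True))   # too large for cryogenics
```

Note that a clearance near the 125 µm optimum for a room-temperature joint falls outside the recommended cryogenic range, so the intended service temperature must be decided before the parts are made.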
Subsequent to tinning, the parts are washed and carefully inspected to ensure that they are properly coated with solder. The parts are then lightly covered with a flux (liquid, not paste), and brought together and heated, so that the solder coatings merge and the assembly is complete. Immediately afterwards, additional solder and (possibly) more flux are added to the joint until a fillet forms. Heat is applied continuously during this process and afterwards, until bubbles of air and steam cease to emerge from the surface of the solder. After the solder has solidified, the joint is completely cleaned inside and out to remove all residual flux. This can be done by (a) washing it with water, then (b) neutralizing the remaining acid using a solution of baking soda and water, and finally (c) rinsing the area with either hot water or a solution of detergent, ammonia, and water [40]. The joint should be inspected for defects such as dimples or cracks, and remade immediately if any are present. This method is described in more detail in Refs. [8], [12] and [40].
During the soldering process, any nearby thin-wall tubing and wires (e.g. on a cryostat) must be protected from flux spatter. If the system is sufficiently modular, so that the soldering operation can be done away from other parts of the apparatus, so much the better.
6.7.6.6 Soldering difficult materials
Some common vacuum materials, and in particular the austenitic stainless steels, are very hard to solder without using the most aggressive (and corrosive) fluxes. (The 321 stainless-steel alloy is especially difficult [8].) Since the use of such fluxes frequently leads to pinhole leaks in thin components, every effort should be made to avoid using them. (Washing components after soldering does not completely eliminate the problem, because beads of flux can become trapped within the solder as it solidifies [40]. The chlorine in the flux then slowly migrates along grain boundaries in the solder to the surface, where it combines with moisture to form hydrochloric acid.)
One straightforward approach is to use a more easily soldered material. For example, thin-wall tubes of the 70/30 Cu/Ni alloy are available, which can be soldered by using relatively mild organic acid fluxes. This alloy is considered to be reliable with regard to its resistance to corrosion. Another strategy is to electroplate the substrate material with a more solderable metal, such as nickel or copper (as discussed on page 152). An alternative method of coating the substrate with a more easily soldered material is vacuum deposition (e.g. sputtering – see pages 152 and 428) [40]. If a demountable joint is needed, a similar approach is to braze pieces of more solderable metals onto the less-solderable ones (e.g. brass or copper sleeves onto stainless-steel tubes [30]), and to make the joint between the more easily soldered items.
A third method is to use “ultrasonic soldering,” which involves the use of a special soldering iron. The iron is made to vibrate at frequencies of several tens of kilohertz, and the resulting pressure waves in the solder produce cavitation, in which small bubbles form and collapse in the liquid. These bubbles create large local pressures that remove oxide layers from the surface of the metal.
With this method, even materials that have a very tenacious oxide layer, such as aluminum alloys and stainless steels, can be soldered without flux. The ultrasonic soldering process involves tinning and joining the components (as with the “sweating” method discussed previously), but without the use of flux [13]. Suitable irons, with heater power levels of up to about 80 W, are commercially available. One can also obtain heated and ultrasonically excited tanks of solder, called “solder pots,” in which parts can be tinned. The ultrasonic soldering technique, used with indium solders, has been successful in making solder joints for cryogenic vacuum apparatus [28]. More information about ultrasonic soldering can be found on page 425.
The use of an inert gas, such as nitrogen, to cover the area being soldered can reduce the oxidation of the solder and the materials being joined. Generally, this approach does not eliminate the need for the oxide-removal action provided by fluxes or ultrasound, but it can significantly improve both the solder wetting and flow characteristics, and the quality of the resulting joint. Commercial devices using this method, called “inert gas soldering irons,” are available, in which the tip of the iron is surrounded by a stream of nitrogen. These irons are mainly used for soldering electronics, but could also be employed to create vacuum joints. The use of glove boxes containing inert gas is another possible method of reducing oxidation during soldering. Useful information about inert-atmosphere soldering can be found in Ref. [41].
6.7.6.7 Sources of information on soldering
Two particularly useful books on soldering techniques relevant to the creation of vacuum joints are Refs. [29] and [33].
6.8 Use of guard vacuums to avoid chronic leak problems
In some circumstances it may be difficult to avoid leaks, despite the best efforts put into design, workmanship, and care. This is sometimes a problem when multi-conductor electrical feedthroughs are used in UHV systems or cryogenic apparatus. Under the thermal stresses that arise during bakeout or cooling to low temperatures, the metal-to-insulator seals occasionally fail. Another example is that of O-ring-sealed mechanical feedthroughs, which are prone to leakage (see page 246).
A very effective way of dealing with this is to use a “guard vacuum,” in which two of the potentially defective elements are made to straddle a space that is continuously pumped (see Fig. 6.4). The scheme works because the pressure difference across the seal separating the high-vacuum space from the guard-vacuum one is very much less than it would be if the guard-vacuum space were at atmospheric pressure. Even if both elements leak (and assuming that the leaks are not too large), the pressure in the guard-vacuum space can usually be made sufficiently low that the entry of gas into the high-vacuum environment is negligible. A mechanical primary pump is normally adequate to maintain the guard vacuum.
Fig. 6.4 The use of a guard vacuum can reduce the effective leak rate through a defective seal by orders of magnitude (see Ref. [30]). (The figure shows two seals in series: the leak path from outside into the main vacuum chamber passes through an intermediate space connected to the guard vacuum pump, while the chamber itself is connected to the main vacuum pump.)
This approach, which is also referred to as “differential pumping,” has proven to be a very effective way of dealing with chronic leak problems. It can be especially useful in vacuum systems that contain a very large number of potential leak sources, such as nuclear fusion reactors [5]. The technique has been applied to electrical and mechanical feedthroughs, vacuum windows, static seals, and bellows. It must be emphasized that the use of a guard vacuum, which is a form of redundancy, is a complication that should not be used as a substitute for the good design and construction of a sealing device.
A problem can arise if this approach is used to deal with leaky electrical feedthroughs that carry power. In such a case, the pressure inside the guard-vacuum space can be at such a level (neither too high nor too low) that electrical discharges can readily occur between the conductors – even at moderate voltages. A situation of this type has been reported for voltages of around 150 V and air pressures of between 1 Pa and 10³ Pa [5]. An extensive discussion of the use of the guard-vacuum technique can be found in Ref. [30].
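Rough numbers illustrate why the scheme works so well: if the leak behaves as a fixed conductance, the gas throughput through it is proportional to the pressure difference across it, so reducing the upstream pressure from one atmosphere to of order 1 Pa cuts the influx by roughly five orders of magnitude. The sketch below makes this estimate, and also flags the discharge-prone pressure window reported for powered feedthroughs (around 150 V, 1 Pa to 10³ Pa [5]); the function names and the fixed-conductance assumption are illustrative.

```python
# Rough guard-vacuum estimates, assuming a fixed-conductance leak
# (throughput proportional to the pressure difference across it).

ATMOSPHERE_PA = 101_325.0

def leak_reduction_factor(guard_pressure_pa, chamber_pressure_pa=0.0):
    """Factor by which a guard vacuum reduces gas influx through a
    fixed-conductance leak, relative to the same leak facing air."""
    return ((ATMOSPHERE_PA - chamber_pressure_pa)
            / (guard_pressure_pa - chamber_pressure_pa))

def discharge_prone(guard_pressure_pa, voltage_v):
    """True if a powered feedthrough in the guard space sits in the
    pressure/voltage region where discharges have been reported
    (~150 V and above, 1 Pa to 1e3 Pa [5])."""
    return voltage_v >= 150.0 and 1.0 <= guard_pressure_pa <= 1e3

print(round(leak_reduction_factor(1.0)))   # roughly 1e5 at 1 Pa guard pressure
print(discharge_prone(10.0, 200.0))        # True: pump the guard space lower
```

For powered feedthroughs, the second check suggests keeping the guard pressure well below 1 Pa rather than merely "good enough" for leak suppression.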
6.9 Some particularly trouble-prone components
(Demountable connections, motion feedthroughs, and valves are discussed in Chapter 8.)
6.9.1 Items involving fragile materials subject to thermal and mechanical stresses
In UHV systems, the process of bakeout can subject brittle items, such as glasses and ceramics – and in particular their glass-to-metal and ceramic-to-metal seals – to large thermal stresses that can cause leaks. The two most problematic components in this category are vacuum windows and electrical feedthroughs. Particular care must be taken to limit temperature gradients across such devices (especially if heating is done from the rim of the device’s mounting flange), and to avoid rapid temperature changes. A common technique for avoiding the temperature gradients is to apply several layers of aluminum foil across the vulnerable parts, and to leave the foil there until the bakeout has been completed and the vacuum system has returned to room temperature. It has been recommended that temperature differences across glass-to-metal seals be limited to less than 30 °C [42]. If vulnerable components must be heated during some joining process (e.g. soldering electrical feedthroughs onto a flange), similar care may be needed. Hence, such a soldering operation would best be done by heating the entire assembly uniformly using a hotplate [43].
Problems due to thermal stresses can also arise in components such as electrical and mechanical feedthroughs in cryogenic systems. As a rule, vulnerable items should be placed outside the low-temperature environment, if possible. For example, a cryostat design that involves electrical feedthroughs separating room-temperature air from vacuum is preferable to one using feedthroughs that separate liquid helium from vacuum. In cases where delicate cryogenic seals must be used, fully engineered and tested commercial products (with, e.g., ceramic-to-metal seals) are usually preferable to home-made versions (which often involve epoxy-to-metal seals). (The construction of cryogenic electrical feedthroughs is discussed in Ref. [40].) Feedthroughs using glass-to-metal seals are particularly fragile and should be avoided, if possible, in leak-sensitive vacuum work in general. Ceramic-to-metal seals are preferable. Attention should also be given to adequately derating the currents and voltages used with electrical feedthroughs – see page 58.
Electrical feedthroughs (e.g. on vacuum gauges) are often subjected to mechanical damage that causes leaks [5]. Simple measures, such as protective external covers and cable restraints, are usually sufficient to prevent this. Particular care is needed when the flange bolts on certain delicate objects, such as UHV windows and electrical feedthroughs, are tightened. A procedure for doing this has been provided [44]. In order to prevent UHV windows from being damaged when their flange bolts are tightened, it is desirable to use fully annealed OFHC copper gaskets with them. These are softer than the standard gaskets, and require lower stresses in order to produce a seal. Such gaskets are also preferred for use with electrical feedthroughs and ionization-gauge heads with glass-to-metal seals. (See also the discussion on page 235.)
6.9.2 Water-cooled components
Of all the types of leaks that can occur in a vacuum system, among the most dreaded are those arising in cooling-water lines. The problems resulting from water entering the vacuum environment can range from contamination of the system to the damage or destruction of components and devices operating in the vacuum. A frequent cause of such leaks is the cracking of flexible (bellows-based) water lines, which suffer from fatigue failure due to repeated movements during normal operation, or vibrations caused by the water passing through them. The best way of avoiding this problem is to avoid using flexible cooling-water lines inside the vacuum. The next-best solution [5] is to use a two-layer bellows, with the space between the layers pumped and acting as a guard vacuum that can be continuously monitored for the entry of water or air. The use of such bellows with an “air guard” is a somewhat simpler alternative. It is also good practice to avoid water-to-vacuum weld, braze, solder, or demountable-seal connections. That is, all joints in the cooling-water line should be located outside the vacuum chamber – unless such joints can be surrounded by a guard vacuum. Water-cooling issues are discussed in Section 8.4.
Fig. 6.5 (a) Rolled bellows; (b) edge-welded bellows.
6.9.3 Metal bellows
Bellows can be problematic items in a vacuum system [28]. These devices are generally of one of two types. The first and most common kind are “rolled” (or “hydro-formed”) bellows, in which the convolutions are created by plastic deformation of the tubular starting material. Such bellows are relatively inexpensive, and often used for flexible metal tubing. The second type, which is more expensive and less common than rolled bellows, are “edge-welded bellows.” With this type, annular diaphragms are stacked together and welded along their inner and outer edges, usually by an automatic orbital welding process. Edge-welded bellows are used in vacuum systems when large axial deformations are required in a limited space. Rolled bellows have convolutions with a well-rounded appearance, whereas the convolutions of the welded types have thin and relatively delicate edges (see Fig. 6.5).
Of the two types of bellows, the welded ones are the more troublesome with regard to their propensity for leakage [5]. There can be hundreds of welds in a typical bellows used, for example, to move samples in an ultrahigh-vacuum instrument. Each of these welds is a place that is particularly susceptible to porosity, fatigue failure, mechanical damage, and stress corrosion. The crevices next to the welds tend to collect contaminants, and edge-welded bellows are particularly prone to trapping debris. As discussed in the section on cleaning, particulate matter can puncture the thin material comprising the bellows when they are compressed. Furthermore, edge-welded bellows are very difficult to leak-test and virtually impossible to repair (except temporarily – see page 181). This is particularly true if a leak is on one of the inside welds, which is where leaks most commonly occur.
Both rolled and edge-welded bellows are relatively vulnerable structures. They are intended to flex, and so they are susceptible to fatigue failure. (Fatigue of materials is discussed on page 101.)
When they are bent, the stresses they are placed under make them vulnerable to stress corrosion. Furthermore, in order to flex, their walls must be thin, which makes them particularly susceptible to the formation of leak paths by various types of corrosion [10]. Their thinness also increases their vulnerability to mechanical damage, and to leaks due to naturally occurring defects in the material.
Vibrations in bellows, and especially axial ones occurring at natural resonant frequencies, often result in fatigue. This is a particular problem if bellows are used to carry gas or cooling water. If the flow velocity is high enough, the resulting turbulence will generate vibrations, and these are frequently found to be the cause of failure [5]. Erosion of the bellows material can also take place because of high-velocity water flow. This, and other conditions that can lead to water leaks, are discussed on page 265. Somewhat ironically, bellows are frequently used to isolate vibrations in pumping lines, which can be yet another cause of fatigue failure. Cyclic electromagnetic forces have also been reported to induce vibrations in bellows that have led to failure [5].
Metal bellows are often used as vacuum hoses, which are employed, for example, to connect pumping stations to vacuum apparatus. Their convenient and enticing flexibility makes them naturally susceptible to aggressive manhandling, which quickly leads to the formation of fatigue cracks. Vacuum-insulated liquid-helium transfer tubes are often made using bellows, in order to make them easier to manipulate. However, for the aforementioned reason, and because of fatigue brought on by the large changes in temperature, leaks in the bellows can be troublesome (see page 287).
In view of these problems, it is clear that the use of bellows in vacuum systems should be reduced as much as possible. However, bellows are much too useful to avoid completely in vacuum work. Sometimes their reliability with regard to leakage is substantially superior to that of alternative devices.
For example, they are indispensable in mechanical linear-motion feedthroughs, such as those used in high-quality valves, which would otherwise have to employ leak- and wear-prone O-ring-sealed devices. And it is hard to imagine a satisfactory alternative to flexible metal hoses for making ad-hoc high-vacuum connections. The following practices can be helpful in preventing bellows leaks.
(a) Use rigid metal tubing instead of bellows wherever possible. If bellows are being used in order to accommodate relative movement between fixed components in a vacuum system, it may be possible to employ instead a loop of metal tubing, with a sufficiently large loop dimension to provide the required flexibility. (An illustration of this type of arrangement, as applied to the transport of cryogens, can be found in Ref. [45].)
(b) Use rolled bellows instead of edge-welded ones, if the rolled bellows are able to provide sufficient movement. If large amounts of movement are required, edge-welded bellows may be better able to withstand fatigue.
(c) If the bellows are being used to decouple vibrations, and large movements are not required, the vibration amplitudes can be greatly decreased by using rolled bellows enclosed in metal braid [5]. Braid can also provide protection in situations where bellows are exposed to abrasion.
(d) Consider using a different technique that allows the use of bellows to be avoided. For example, linear and rotary movement inside a vacuum system can be effected using a “magnetic drive,” rather than by means of an arrangement using edge-welded bellows (see the discussion on page 248).
Fig. 6.6 Flexible metal tubing provided with reinforced ends and protected by metal braid (used in this case for transferring liquid nitrogen).
(e) Obtain bellows of the highest quality from a reputable supplier. Stainless steel used to make the machined end-pieces of bellows should be an “electro-slag refined” (ESR) material, in order to avoid inclusions and porosity [5].
(f) If “high-cycle fatigue” failure is an issue (e.g. due to vibrations), and good corrosion resistance is desired, bellows made from the semi-austenitic stainless steel “AM 350™” may be a good choice.
(g) If “low-cycle fatigue” is a potential problem (e.g. resulting from movement caused by temperature cycling), and good corrosion resistance is desired, the nickel alloy “Inconel® 625LCF” can be a useful material. (In fact, this alloy has excellent corrosion resistance.)
(h) If severe corrosion is an issue (e.g. stress corrosion of stainless steel in the presence of chlorides), consider using a highly corrosion-resistant alloy such as Hastelloy®.
(i) In the case of flexible metal tubing, consider providing some kind of strain-relief devices at the ends (where fatigue cracks often appear). The simplest arrangement is to provide the machined end-pieces with recesses into which the bellows are fitted, in order to protect the welded or brazed terminations from bending stresses. One can also mount flexible reinforcements (similar in function to the bend reliefs used on electrical cables) on the end-pieces, perhaps in the form of a short length of slightly larger-diameter rolled bellows, or possibly “spring guards.” Commercial tubing is sometimes provided with reinforced ends of this sort (see Fig. 6.6).
(j) Some manufacturers have the ability to subject bellows to a “cycle test.” This involves exposing the bellows to a series of repeated movements, of the kind that they would experience during service, and then leak-testing them. The procedure allows substandard bellows to be identified before they are placed into service.
170
Vacuum-system leaks and related problems
(k) Edge-welded bellows should never be completely and forcibly compressed so that their segments touch, because this is frequently a cause of leaks [46].
(l) Flexible metal tubing should be handled carefully, and if possible should not be bent so far that it takes a permanent set. The manufacturer should be able to provide a value for the “minimum bend radius,” below which the tubing is at risk of damage. The appearance of kinks in tubing is an indication that it has been mistreated, and such tubing should be considered for replacement. One should generally avoid applying torsional (i.e. twisting) forces to bellows.
(m) Bellows, and especially edge-welded bellows, should not be exposed to high vibration levels, and one should definitely avoid vibrations that cause resonances (particularly in the longitudinal direction). Care should be taken to limit the velocity of cooling water and gases through bellows, so that violent turbulent flow is avoided. The possibility of damaging vibrations occurring during transport should be considered when bellows are being packaged for shipment (see the discussion on transport on page 91).
(n) Bellows that are susceptible to stress corrosion should be protected from exposure to aggressive chemicals. In the case of stainless steels, these include in particular chloride-containing substances, such as active soldering fluxes, bleach, some cleaning agents, and chlorinated solvents. Extra precautions should be taken to keep edge-welded bellows clean and free of particulate matter.
6.9.4 Vacuum gauges

The electrical feedthroughs of vacuum measuring devices (such as Pirani gauges [5]) are vulnerable to mechanical damage and leakage, and should be protected, as discussed earlier. If Penning gauges are left switched on at atmospheric pressure for long periods, electric discharges can erode or crack sealing materials, resulting in leaks [5]. To avoid this, and related harmful effects, such as the sputtering of metal onto the insulators, such gauges should be operated only when the pressure is below 1 Pa [4].
6.10 Diagnostics

6.10.1 Leak detection

6.10.1.1 Introduction

A high-vacuum system that is unable to reach its normal base pressure does not necessarily contain a leak. Such apparent behavior can also be caused by excessive outgassing, a contaminated vacuum gauge, or a malfunctioning high-vacuum pump [9]. Problems may also be caused by the presence of virtual leaks, which cannot be detected using normal leak-testing procedures. (Virtual leaks can be detected by looking at the shape of the vacuum
system’s pressure-rise versus time curve (see Ref. [4]), and can usually be located only by examining the inside of the system for blind screw-holes, double seam welds, etc. [21].)

Similarly, in cryogenic equipment running at low temperatures, it can be difficult to determine the pressure directly. Hence, abnormal operation, such as an apparent failure of the apparatus to reach its base temperature, can be erroneously attributed to vacuum leaks. Such behavior may instead be the result of other problems, such as touches or other causes of heat leaks unrelated to vacuum quality, blocked helium capillary lines, RF heating of thermometers, etc.

The most useful general principle for the successful detection and location of leaks is that one should have a good understanding of the specific characteristics of the vacuum equipment. Keeping a logbook for the apparatus is one of the most helpful measures that one can take in this regard – often saving large amounts of time when leaks are investigated. Such a logbook may contain information such as the system pressure as a function of pumping time, the base pressure, previous instances of leaks and their character, sample traces from a residual gas analyzer (RGA), and a maintenance record [9].

If a leak is suspected in apparatus, it is often worth looking at the last seal that was disturbed before doing anything else. Undisturbed vacuum seals only rarely develop leaks. One might find that an O-ring has been contaminated, or a flange has been scratched.

The presence of liquid-like substances such as oils, greases, soldering flux, and moisture on surfaces tends to cause leak paths to plug, thereby preventing their detection [1]. Nevertheless, the leaks will probably open up again in the future, possibly during UHV bakeout or cool-down to low temperatures. Hence, items to be leak tested should be clean and dry – both inside and out [4].
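The pressure-rise (rate-of-rise) diagnostic mentioned at the start of this section can be illustrated with a minimal sketch. The idea is that a real leak admits gas at a constant rate, so pressure in an isolated chamber rises linearly indefinitely, whereas outgassing and virtual leaks deplete with time, so the rise-rate falls off. The function names, threshold, and data below are illustrative assumptions, not from the references cited above.

```python
# Sketch: distinguishing a real leak from outgassing/virtual leaks by the
# shape of the pressure-rise versus time curve of an isolated chamber.
# A real leak gives a roughly constant rise-rate; outgassing and virtual
# leaks give a rise-rate that dies away. All numbers are illustrative.

def rise_rates(times_s, pressures_pa):
    """Pressure-rise rate (Pa/s) over each successive logging interval."""
    return [
        (pressures_pa[i + 1] - pressures_pa[i]) / (times_s[i + 1] - times_s[i])
        for i in range(len(times_s) - 1)
    ]

def classify(times_s, pressures_pa, tolerance=0.2):
    """Crude classification: a clearly decreasing rise-rate suggests
    outgassing or a virtual leak; a constant one suggests a real leak."""
    rates = rise_rates(times_s, pressures_pa)
    if rates[-1] < (1 - tolerance) * rates[0]:
        return "outgassing or virtual leak (rise-rate decreasing)"
    return "real leak (rise-rate roughly constant)"

# Chamber pressure logged at 10-minute intervals (illustrative data):
t = [0, 600, 1200, 1800, 2400]
linear = [1e-4, 2e-4, 3e-4, 4e-4, 5e-4]          # constant slope
flattening = [1e-4, 3e-4, 4e-4, 4.5e-4, 4.7e-4]  # slope dying away

print(classify(t, linear))       # → real leak (rise-rate roughly constant)
print(classify(t, flattening))   # → outgassing or virtual leak (rise-rate decreasing)
```

In practice the two behaviors are often superimposed, so a long logging run (and comparison against the logbook record of the system's normal behavior) is needed before drawing conclusions.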
If the vacuum components have been exposed to water (perhaps during cleaning), baking can be useful in removing it sufficiently to enable leak testing to be performed [35,1]. Oils and greases (especially silicone- or perfluoropolyether-based ones) may be difficult or impossible to remove for testing. The use of vacuum greases should be kept to a minimum in vulnerable vacuum systems, and one should definitely avoid using silicone grease unless it is absolutely necessary (see page 97). Similarly, the use of greases and masking compounds during leak testing (with the intention of temporarily plugging large leaks while looking for smaller ones) should be avoided, if possible [4]. A better method of temporarily plugging leaks is to apply alcohol to them with a brush. The alcohol can subsequently be removed with a hot air gun [9].

For the same reasons, the use of dye penetrants (which are commonly employed to find cracks in materials) should be avoided in the testing of vacuum components [1]. Likewise (and for other reasons, such as possible degradation of O-ring materials), leak-testing techniques involving the use of solvents such as acetone are best avoided.

Sometimes, large leaks are located by pressurizing the vacuum system and applying a soap–water mixture to suspect areas. Any major leaks present will manifest themselves by producing bubbles. Apparatus that has been tested in this way should be cleaned and dried thoroughly before being tested for small leaks [1]. A better method of finding large leaks by using a mass spectrometer leak detector is described on page 175.

Tracking down small leaks can sometimes take a very long time, especially if they are intermittent. In some situations, it may be prudent to consider what leak rate can be tolerated
by the apparatus, so that effort is not wasted in chasing tiny, but irrelevant, leaks [1]. For example, in the case of vacuum roughing lines, small leaks can even be beneficial in preventing the backstreaming of mechanical pump oil under certain conditions [21]. Also, the dilution refrigerators used in low-temperature research can tolerate normal (as opposed to superfluid) helium leaks of 10⁻⁸ Pa·m³·s⁻¹ for about a week, especially if a charcoal adsorption pump is present [7].

Normally, leaks will be found by placing the apparatus under vacuum and applying a tracer gas (such as helium) to external points. However, some leak-detection methods (such as the “sniffer technique”) involve pressurizing the vacuum system (e.g. with helium) and detecting its presence on the outside. These two methods are often treated as if they automatically lead to equivalent results. In fact, owing to the mechanical properties of the vacuum components, there are situations in which a pressure on the inside of the system may give rise to a leak, whereas the presence of vacuum results in substantially no leaks [4].

Once a leak has been approximately located using a leak detector, visual inspection frequently allows its position to be determined precisely [50]. At the location of a leak, one normally finds that the surface of the item being tested contains a small hole (perhaps visible only with a hand-held lens), has a dull appearance, or is visibly flawed in some other way.
6.10.1.2 Mass spectrometer leak detection

Introduction

The most versatile and widespread leak-detection method involves the use of a mass spectrometer leak detector. With this technique, the item to be tested is connected to the detector (generally a commercially made instrument), which uses self-contained pumps to evacuate the system. A light gas (usually helium) is then sprayed onto suspect places on the evacuated item. If any leak paths are present, the helium will migrate through them and into a mass spectrometer inside the instrument. This device separates helium atoms from other species that are present in the vacuum environment, such as nitrogen or oxygen molecules from the surrounding atmosphere, and detects only the former. The resulting signal is a measure of the size of the leak, which is given in units of (pressure·volume·time⁻¹).

Leak detectors with a “180° sector mass spectrometer” are preferred over those using a “quadrupole” device, because of their greater sensitivity and certainty of measurement. The former are also called “helium leak detectors,” although other test gases, such as hydrogen, can also be used.

The mass spectrometer leak-detector method is favored over other techniques used to detect leaks in vacuum apparatus because it is quantitative, highly sensitive, able to detect leaks with a very large range of sizes (e.g. roughly 10² to 10⁻¹³ Pa·m³·s⁻¹ [4]), and relatively reliable and convenient to use. Furthermore, unlike some other leak-detection methods (such as those involving the use of dye penetrants), the mass spectrometer technique does not block up leak paths and thereby preclude further leak-detection efforts.
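Because leak rates are quoted in (pressure·volume·time⁻¹), different instruments and references use different units for the same quantity. The conversion factors below follow directly from the unit definitions (1 mbar·L = 0.1 Pa·m³; 1 atm·cm³ ≈ 0.1013 Pa·m³); the example leak rate itself is merely illustrative.

```python
# Sketch: converting leak rates between the pressure·volume·time⁻¹ units
# commonly encountered on leak detectors. Conversion factors are exact
# consequences of the unit definitions; the example value is illustrative.

PA_M3_PER_MBAR_L = 0.1          # 1 mbar·L = 100 Pa × 0.001 m³
PA_M3_PER_ATM_CC = 101325e-6    # 1 atm·cm³ = 101325 Pa × 1e-6 m³

def pam3s_to_mbarls(q_pam3s):
    """Convert a leak rate from Pa·m³·s⁻¹ to mbar·L·s⁻¹."""
    return q_pam3s / PA_M3_PER_MBAR_L

def pam3s_to_atmccs(q_pam3s):
    """Convert a leak rate from Pa·m³·s⁻¹ to atm·cm³·s⁻¹."""
    return q_pam3s / PA_M3_PER_ATM_CC

q = 1e-8                     # leak rate in Pa·m³·s⁻¹
print(pam3s_to_mbarls(q))    # about 1e-7 mbar·L·s⁻¹
print(pam3s_to_atmccs(q))    # about 9.87e-8 atm·cm³·s⁻¹
```

A useful rule of thumb that falls out of these factors: a value in Pa·m³·s⁻¹ is ten times smaller than the same leak expressed in mbar·L·s⁻¹, and the mbar·L·s⁻¹ and atm·cm³·s⁻¹ figures are nearly equal.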
The main disadvantage of the mass spectrometer method is the high cost of the instrument. However, owing to the labor-intensive nature of scientific research and the huge amounts of time that can be expended in finding and repairing leaks, the mass spectrometer method is generally a very cost-effective one. Furthermore, in certain areas of research, such as cryogenic and ultrahigh-vacuum work, it is, for all practical purposes, the only technique with a sufficiently high sensitivity to find the smallest leaks that are capable of causing problems. It would not be going too far to say that the presence of a 180° sector mass spectrometer leak detector is essential in any laboratory doing serious work involving high and ultrahigh vacuums, and cryogenics.

The technology of helium leak detection has not stood still over the years, and anyone using a very old leak detector, on the grounds that it is still working adequately, may wish to consider the advantages of some of the newer machines. The most modern leak detectors are particularly easy to use, with features such as automatic calibration, electronic background suppression, freedom from liquid-nitrogen cold traps, and (sometimes) the use of oil-free pumping systems, which eliminate problems arising from the dissolution of helium in the vacuum-pump fluids. The latter “dry leak detectors” are also suitable for use on contamination-sensitive vacuum components and systems.
Helium leak-testing techniques

Some useful practices during helium leak testing are as follows [1,4].

(a) Difficulties often arise if the time required for the leak detector to respond to a leak has been underestimated. One can determine the response time of the leak detector–apparatus combination by placing a helium source (e.g. a portable helium reference leak) on a part of the apparatus furthest away from the detector, opening the valve, and waiting for the detector to respond – i.e. to reach two-thirds of its maximum amplitude. Alternatively, the time can be estimated by taking the ratio of the total volume of the system to the conductance of the path between the leak-detector ion source and the most distant part of the apparatus. Relevant information on vacuum theory can be found in Ref. [9].
(b) Find and repair large leaks before searching for small ones.
(c) Begin the testing at the top of the apparatus, and move downward (since helium is lighter than air, this ensures that parts that have not yet been tested are not prematurely exposed to helium).
(d) Ensure that the air in the testing area is still.
(e) In order to prevent helium from drifting to unintended places on the apparatus, and giving rise to false leak indications, consider using a “coaxial helium leak-detection probe” [47]. This simple device allows excess helium to be pumped away from the leak-testing area (see Fig. 6.7).
(f) In the case of a small item, connect this to the leak detector with some flexible hose, place it in a plastic bag, and fill the latter with helium. This allows one to establish whether any leaks are present. The item can then be removed from the bag and tested in the normal way in order to locate the leaks.
Fig. 6.7 Cross-sectional view of a coaxial helium leak-detection probe (labeled parts: helium delivery tube, centering spider, outer tube, vacuum line). This handheld device allows one to spray helium onto a precise spot on a suspect vacuum component, while pumping away excess helium before it can migrate to leaks in other areas. (The tube marked “vacuum” is connected to a mechanical vacuum pump, which exhausts the helium at a safe distance from the leak-testing location.) Its construction and use are described in Ref. [47].

(g) If possible, test subsections of assemblies individually. Use plastic bags where necessary to confine helium to the regions being tested.
(h) Long welds, braze joints, etc., can be tested by placing plastic film across and along the joint, taping its edges to the items on either side of the joint, and filling the resulting tunnel with helium.
(i) In situations where extremely fine leaks are being hunted, heating the item while it is being tested may be beneficial in preventing the plugging of leaks by volatile substances such as moisture [13].
6.10.1.3 Some potential problems during leak detection

A common cause of difficulties during leak testing is the permeation of O-rings and other elastomeric seals with helium. If these seals are exposed to helium, either directly while undergoing test or indirectly because of the buildup of helium in the room where testing is being carried out, the result will be a gradual buildup in the background helium level registered by the leak detector. The time scale for this phenomenon has been reported to be between about 15 min and 1 h for most types of O-ring [9,48]. As a consequence, the effective sensitivity of the detector will be reduced until prolonged pumping can remove helium from the seal. During extensive leak testing, one may have to stop periodically in order to allow the background arising from this phenomenon to diminish.

In order to limit helium permeation, O-rings should not be subjected to a direct spray of helium gas for more than a few seconds at a time [48]. Rubber and plastic vacuum hose is also highly permeable to helium, and its use to join the leak detectors to the vacuum systems being tested should be minimized or eliminated.
Fig. 6.8 Setup for hunting very large leaks in a vacuum component that would normally overwhelm the leak detector (see Ref. [50]). (Labeled parts: component under test, mechanical pump, throttle valve, exhaust, leak detector.)

Lengths of 10 cm or less are acceptable [1]. Metal bellows hoses are the preferred means of providing flexible connections.

Some leak detectors can be supplied with all-metal seals, in order to eliminate the entry of helium into their vacuum systems by diffusion. This is a particularly useful feature when very high-sensitivity detection is being carried out, or where background helium levels are often very high (as may be the case in low-temperature laboratories). Of course, one must still contend with helium permeation through any O-rings in the apparatus being tested.

An ordinary ion pump that has pumped 10⁻⁶–10⁻⁷ Pa·m³ of helium will produce an effective background leak level of 10⁻¹¹ Pa·m³·s⁻¹ for a year or more after it has been exposed to this gas [1]. As with absorption in O-rings, this effect can significantly increase helium background levels during leak testing. The problem may be particularly severe for normal diode ion pumps – other kinds of ion pump (such as “triode” types) are better able to pump inert gases. Cryopumps in systems that have been subjected to helium leak testing must be regenerated afterwards [48].

Sometimes large leaks will appear that overwhelm the leak detector, even on its least sensitive scale, and make finding the leak impossible. A simple arrangement that overcomes this difficulty with the aid of an auxiliary medium-vacuum pump is shown in Fig. 6.8.

Helium leak testing must not be done around live high-voltage devices. This is because of the low breakdown voltage of helium relative to air, and hence the possibility of arcing – see page 384.
6.10.1.4 Leak testing cryogenic systems

Introduction

Hunting leaks in low-temperature apparatus often requires special techniques. Although in many cases, cryogenic leaks can be found at room temperature with a helium leak
detector, sometimes they will appear only at low temperatures. This may happen because of stresses arising from differential thermal contraction, or (especially in the case of helium superleaks) because of the vastly greater leak rates made possible by changes in the character of gases and liquids at low temperatures. In the latter case, leaks that may be undetectable at room temperature, as measured even by using a very sensitive leak detector, can become unmanageably large at low temperatures. Matters are made worse by the general lack of access to leaking cryogenic parts when they are in their working environment.
General methods

The easiest initial strategy to follow in locating cryogenic leaks is to use the room-temperature methods – at their highest levels of sensitivity, if necessary. Modern leak detectors in good condition should be employed. For example, they should be clean – i.e. not having previously been used to test contaminated systems. Adsorbent materials in the apparatus, such as the activated charcoal in cryopumps, should be removed before starting, since these can otherwise constitute virtual leaks [10]. When looking for cryogenic leaks, the first items of attention should be demountable seals and solder joints.

If this method has not revealed the leak, the next approach might be to cool the apparatus in a bath of liquid nitrogen. This should be done relatively slowly, so as to avoid subjecting vacuum joints to potentially harmful thermal stresses. Helium can then be sprayed over the apparatus before condensation has had a chance to form, or bubbled up through the nitrogen against the suspect regions [10]. This type of method stands a good chance of revealing any leaks caused by thermal contraction (even down to liquid-helium temperatures) because, in the majority of materials, most thermal contraction occurs between room temperature and 77 K. As normally carried out, however, this type of procedure suffers from the gradual formation of condensation and ice on the cooled parts of the apparatus.

A careful and systematic version of this strategy of leak testing cooled objects has been devised that does not lead to problems of icing of the apparatus as it is moved in and out of the nitrogen [49]. The method involves immersing the apparatus completely in an open-necked dewar of liquid nitrogen, above which helium gas is placed, and slowly lowering the dewar away from the apparatus using a motorized lift table.
The helium is kept in place around the apparatus using a clear polyethylene bag, and the leak-detector signal is recorded continuously as the lowering is carried out. As a leak site emerges from the liquid, it is immediately exposed to the helium and triggers a response from the detector. The known rate of lowering of the dewar, and the record of leak-detector signal versus time, allow leaks to be located along the direction in which the dewar is moved. The article of Ref. [49] discusses the location of leaks in a dilution refrigerator using this method. (The method described therein for patching these leaks, using silicone grease, is not recommended.)

A somewhat similar strategy involves removing the liquid helium from the dewar in which the apparatus normally resides, and replacing it with liquid nitrogen. Pressurized helium gas is then used to blow the liquid nitrogen out of the dewar. As the liquid level falls, the leak detector response is monitored [50].
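The bookkeeping behind the dewar-lowering method can be sketched in a few lines: with the dewar lowered at a known, constant rate, a jump in the detector signal at time t places the leak near the height that emerged from the liquid at (t minus the detector response delay). The function name and all numbers below are illustrative assumptions, not values from Ref. [49].

```python
# Sketch of locating a leak along the lowering direction in the dewar-
# lowering method: height = initial liquid level - lowering rate × the
# (delay-corrected) time at which the detector responded. Illustrative only.

def leak_height_m(event_time_s, lowering_rate_m_per_s,
                  initial_level_m, response_delay_s=0.0):
    """Approximate height of the suspected leak site, measured from the
    bottom of the apparatus, given when the detector responded."""
    emerge_time_s = event_time_s - response_delay_s
    return initial_level_m - lowering_rate_m_per_s * emerge_time_s

# Liquid level starts 0.8 m up the insert; dewar lowered at 1 mm/s;
# detector responds 120 s into the run, with a 10 s response delay:
h = leak_height_m(120.0, 1e-3, 0.8, response_delay_s=10.0)
print(f"suspect region near {h:.2f} m from the bottom")  # 0.69 m
```

Note that the response delay of the detector (item (a) in the helium leak-testing practices above) enters directly here: ignoring it shifts every inferred leak position in the lowering direction.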
Superleaks

The location of superleaks is probably the most difficult and time-consuming leak-hunting task. It can take many days for a superfluid leak (e.g. in a dilution refrigerator) to become apparent on a leak detector, or to manifest itself by the increased flow of heat into the cryogenic system [51]. Locating and repairing such a leak can take several months. Furthermore, one normally finds that superleaks cannot be located to precise positions on the apparatus, but only to general regions.

If the previously discussed methods have been tried in order to find the superleak, several other approaches are available. A method has been devised that involves manipulating the temperature of different parts of the apparatus in order to selectively cool possible leak locations below the superfluid transition temperature, while monitoring the response of the system on a leak detector [51]. This control of the temperature is achieved by attaching heating elements to the various stages of the apparatus (e.g. the mixing chamber, heat exchangers, and the still on a dilution refrigerator), by providing additional thermometers at these places as needed, and by installing copper wires to act as thermal links between appropriate points. The authors of Ref. [51] claim that this method is of considerable benefit in lowering the amount of time taken to locate a superleak. However, in an earlier work (see Ref. [52]) this type of approach is considered to have very little chance of success.

Another method for locating superleaks involves subjecting the apparatus to thermal stresses by cooling it in liquid nitrogen and warming it up again [50]. This is done with the aim of causing small channels (such as cracks in solder joints) to become larger, so that the resulting leak can be detected at 77 K, or even at room temperature. Leaks often form in the first place as a result of thermal stresses – owing to fatigue, for example – so this is not an unreasonable approach.
As discussed in Section 1.4.3, stressing things in some way, so that intermittent faults become permanent, is a very useful general strategy for debugging equipment and software. However, this method is not always successful – one may find that even after many thermal cycles, the leak does not become sufficiently large to permit detection.

Another approach can be used, which is very straightforward in principle, but very tedious [52]. Making use of the modular construction of the leaking apparatus, a binary testing scheme is employed. The cryogenic parts of the system are divided in two, and the open ends of the separated subsections are blanked off. These are then cooled and leak tested in order to reveal the leaking subsection. This process is continued until a faulty component (e.g. a heat exchanger) is located. Since it is normally very difficult to locate a superleak any more precisely than this, the problem is solved by replacing the entire component.

If every method of locating the leak has failed, then the only recourse is a systematic repair and replacement of the cryogenic seals and components. This might involve, in the following order: replacement of indium seals, reflowing of soft solder joints, reflowing of hard solder joints, removal and replacement of weld joints, and replacement of components or subassemblies [50]. Normally, a leak test should be carried out after each step. If large amounts of time have already been spent on searching for leaks, this strategy of repairing and replacing items in the absence of firm evidence of leakage is often a very helpful one [12]. For this purpose, the use of a modular apparatus design is indispensable.
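The binary testing scheme described above is, logically, a binary search: each round blanks off half of the remaining modules and cold leak-tests it, so a single leaking module among n is isolated in about log2(n) cool-downs rather than n. The module names and the leak-test predicate below are illustrative stand-ins for real hardware operations, not from Ref. [52].

```python
# Sketch of the binary ("divide, blank off, and test") scheme for isolating
# a single leaking module. half_leaks stands in for the physical operation
# of blanking off a subsection, cooling it, and leak-testing it.

def find_leaking_module(modules, half_leaks):
    """half_leaks(sublist) -> True if a cold leak test of that blanked-off
    sublist shows the leak. Returns (faulty module, number of tests)."""
    tests = 0
    while len(modules) > 1:
        first_half = modules[:len(modules) // 2]
        tests += 1
        if half_leaks(first_half):
            modules = first_half          # leak is in the tested half
        else:
            modules = modules[len(modules) // 2:]  # leak is in the other half
    return modules[0], tests

# Illustrative dilution-refrigerator stages, with a known "leaky" one used
# to stand in for the physical test:
modules = ["still", "heat exchanger 1", "heat exchanger 2", "mixing chamber"]
faulty = "heat exchanger 2"
found, n_tests = find_leaking_module(modules, lambda sub: faulty in sub)
print(found, n_tests)  # → heat exchanger 2 2
```

The logarithmic test count is why this scheme is tolerable despite each "test" being a full blank-off, cool-down, and leak check: four modules need only two cool-downs, and eight need only three.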
Sensing low-temperature helium leaks by using extractor gauges

Room-temperature leak-detection equipment is not well suited to sensing leaks occurring at the temperature of liquid helium (4 K and below), because of (for example) outgassing from the room-temperature vacuum components, and the very low vapor pressure of the cold helium. This problem can be overcome by operating a vacuum gauge in the low-temperature environment. It has been found that extractor gauges (which are sensitive devices even at room temperature) can be operated at cryogenic temperatures, where they have a very high sensitivity to helium leaks. The lower pressure limit of these gauges is about 10⁻¹⁰ Pa, and they release only around 3 W into the cryogenic environment. The use of extractor gauges to detect and locate leaks in systems cooled by liquid helium is described in Ref. [53].
6.10.1.5 Some other methods of leak detection and location

Detection of leaks in large vacuum systems

In certain large vacuum systems, such as those employed in nuclear fusion research, it may be difficult to use normal leak-detection methods. This may be because of the size of such systems, and possibly also the difficulty of getting the tracer gas (e.g. helium) to the required locations on the outside of the apparatus. Therefore, alternative methods have been developed for these situations. Such methods make use of the character of movement of the gaseous species that typically takes place in these systems, wherein the particles travel on ballistic trajectories, rather than as part of a fluid. (That is, the gases in the system are in the “molecular flow,” rather than the “viscous flow” or “transition flow,” regimes.) These methods also make use of the large amount of space inside these vacuum systems, in which the necessary equipment must be installed.

One strategy makes use of a vacuum gauge mounted at the end of a manipulator arm that is placed inside the chamber. This arm can be moved in the chamber, so that pressure gradients associated with gas flows resulting from leaks can be found. In this way, if the gauge is sufficiently close, the leaking gas can be followed to its source [54]. Another method uses the ballistic flow properties of the gas to create an “image” of the leak. This device, which is known as a “leak telescope,” is scanned over the inner surface of the vacuum chamber, while the particle flux is detected with a vacuum gauge, and the resulting signal is recorded [54,55]. Unlike other leak-detection techniques, both of these in-chamber methods are capable of locating not only real leaks, but also virtual ones, as well as sources of outgassing.
Locating large leaks at very large distances

If it is possible to slightly pressurize a vacuum system, one obvious method of locating large leaks is to listen for the sounds of the escaping gas. For the largest leaks, this is a very useful method. The use of special instruments, such as stethoscopes or microphones, can improve on what the unaided ear can accomplish using this technique. However, much higher levels of sensitivity can be achieved by detecting the sound in the ultrasonic, rather than the audio
frequency domain. Commercial devices are available for this purpose, which operate in the 20–100 kHz range. The unique feature of this method is its ability to sense and locate leaks from very large distances. For example, the ultrasound produced by air escaping from a hole of 0.25 mm diameter, with an overpressure of 35 kPa (5 psi), can be sensed 15 m away. In a quiet environment, it is possible to detect leaks as small as 10⁻³ Pa·m³·s⁻¹ by using the ultrasonic technique. The use of helium, rather than air, as the pressurizing gas provides the highest levels of sensitivity.

If the vacuum system cannot be pressurized, leaks can be hunted using an ultrasonic detector in combination with an electronic ultrasound generator, located inside the system at ambient pressure. The ultrasonic method of leak detection is discussed in Ref. [2]. An additional advantage of this method is that the same instrument (which is not particularly expensive) can also be used for other diagnostic purposes in the laboratory – such as revealing leaks in gas lines, and detecting corona and tracking in high-voltage apparatus [56].
6.10.1.6 Sources of information about leak detection

Two useful publications on leak detection and location, with particular emphasis on helium mass spectrometer techniques, are Refs. [1] and [4]. (A helpful step-by-step leak-testing procedure is provided in the former.) A general discussion of leak-testing topics can be found in Ref. [21]. Information on leak hunting in cryogenic equipment is provided in Ref. [50]. A wide variety of fundamentally different leak-testing methods are described in Ref. [2].
6.10.2 Methods of detecting and identifying contamination

A very simple method exists for determining whether surfaces to be exposed to a vacuum (e.g. following cleaning) are sufficiently clean to avoid problems. One takes a clean surgical-grade cotton wool swab, which has been dampened (not soaked) with isopropyl alcohol, and wipes it across the surface. If the swab is not discolored as a result, the surface is acceptable. This technique is useful even for ultrahigh-vacuum systems [4].

Vacuum-system contamination may be conveniently analyzed with a special type of mass spectrometer, known as a “residual gas analyzer” or “RGA.” The presence and character of any impurities, such as roughing-pump oil or solvent, can be determined by the positions of the peaks in the mass spectrum. These can be distinguished from other sources of gases in the vacuum system, such as leaks and permeation. Commercial residual gas analyzers are relatively compact devices, and easy to mount on most vacuum chambers. Details regarding the use of this method are provided in Refs. [4] and [9].
6.11 Leak repairs

Leaks are generally best handled by remaking connections or replacing the leaky modules or components. Some items, such as edge-welded bellows, are very difficult (if not impossible) to permanently repair. (Also, a leak in an old bellows is usually a portent of more leaks in the
near future [46].) In such cases, replacement is usually the only viable option. Sometimes, economics dictate the most effective strategy – one may find that the true cost of repairing an item is greater than that of replacing it.

Procedures that consist of patching leaks should be avoided, if possible. Repairs must not involve replacing a real leak with a virtual one – such as by laying down a second layer of weld metal on the outside of a welded connection. The possibility that the repair of an existing leak will cause a new one to open up (e.g. reflowing of braze joints causing overheating and degradation of nearby solder joints) should also be considered when the repair method is chosen. It is desirable to anticipate a procedure for the repair of leaks during the design of vacuum devices.

The best way of repairing welds is by completely removing the weld material (down to the base metal), and re-welding the joint. Welding over the top of an existing weld is seldom a satisfactory approach, since the stresses set up by the newly laid weld metal tend to produce cracks nearby [30]. Similarly, if possible, brazed joints should be completely remade. Leaky braze joints may or may not be repairable by simple measures such as reheating and melting the filler metal. Some types of filler undergo changes during the initial brazing process, so that re-melting is not easily done. Re-brazing using a lower-melting-point filler metal may be possible [21]. In some cases, the most practical alternative to remaking a braze joint is to apply soft solder over it. In such a case, of course, the resulting joint will have at least some of the vulnerabilities of an ordinary soldered connection. In the case of low-temperature apparatus, the use of (for example) ordinary tin–lead soft solder for this purpose may not be satisfactory, because thermal cycling causes the patch to fail eventually [57].
A method has been developed to patch brazed connections in cryogenic equipment using pure indium as a solder, for use in situations in which it is not feasible to effect a permanent repair [57]. One of the few advantages of soft-soldered connections is that they are relatively easy to repair, by heating and re-flowing the solder.

Substances such as varnishes, epoxies, and greases are often used to repair leaks by those who, pressed for time, dread the delay and effort that might be involved in implementing a permanent and reliable solution. Although one does hear and read about success stories, such improvised repairs stand a very good chance of becoming an unwanted legacy for future users of the apparatus, possibly including those who made them in the first place. Potential problems with these methods are widely recognized – see, e.g., Refs. [9], [21], [30], and [58].

The use of special commercial vacuum sealing compounds can permit repairs of acceptable reliability to be made in some vacuum applications, especially those that do not involve temperature extremes or extensive thermal cycling. However, use of the above substances should generally be considered a temporary measure. Hence, if these materials are used to repair leaks, an important consideration is that they should be removable without too much effort. Otherwise, it may be difficult or essentially impossible to repair the leak properly later on by, for example, welding or brazing. Another potential difficulty is that the application of new sealing compound over some that has been applied previously may open up already-repaired leaks. The use of vacuum-sealing compounds is most justified when leaks appear in materials that cannot be repaired by welding, brazing, or soldering – such as ceramics or glasses.
Most of these substances are incompatible with the ultrahigh-vacuum environment and with the need for bakeout in UHV equipment. However, some low-vapor-pressure sealants are available commercially that can be used under such conditions [24]. The use of a silicone resin for temporarily sealing leaks in UHV equipment (including leaks in edge-welded bellows) is discussed in Ref. [46].

In cryogenic equipment, the low-temperature embrittlement of organic materials in general, coupled with stresses caused by differential thermal contraction during cooling, reduces the reliability of patches made with them. However, in emergencies, the application of glycerol can serve as a form of first aid in cryogenic equipment, and is frequently successful [58]. Unlike many other substances that could be employed for this purpose, it has the advantage of being soluble in water, and can therefore be removed without great difficulty.

Only the minimum amount of sealing compound (or varnish, epoxy, etc.) should be applied to a leak [59]. A vacuum part that has been temporarily repaired in this way should be provided with a tag indicating the location of the leak, and the date on which the repair was carried out. Such parts should be permanently repaired as soon as possible. A discussion of various aspects of leak repair can be found in Ref. [21].
Summary of some important points

6.3 Common locations and circumstances of leaks

(a) Leaks are usually found at singular places in a vacuum system, such as solder joints, welds, insulator-to-metal seals, connections, seams, braces, sudden bends in pipes, etc.
(b) Whether leaks are present or not sometimes depends on the condition of the system – i.e. if it is hot or cold, subjected to particular mechanical stresses, etc.
(c) If vulnerable items (such as demountable seals, feedthroughs, etc.) are inaccessible (hard to see and/or reach), leaks are more likely than they would otherwise be.
6.4 Importance of modular construction

(a) Generally, and particularly in the case of vacuum apparatus where hard-to-find leaks are commonplace (e.g. cryogenic instruments), the use of a modular system design can be very beneficial.
(b) The use of modules can make the assembly of such apparatus, and the repair of leaks, much easier than they would be if a non-modular vacuum system design were employed.
6.5 Selection of materials for use in vacuum

(a) The porosity of materials may not be immediately evident, because the pores might be blocked by grease or other contamination. Nevertheless, leaks will occur later, e.g. during UHV bakeout or cryogenic cool-down.
(b) Porosity is often found in materials that have been cast in air, without subsequent deformation by rolling or forging. Notorious examples include air-cast brass and iron.
(c) Metals that have been processed by multi-axis forging and/or vacuum melting are preferred.
(d) Rolled materials should be incorporated into vacuum equipment in such a way that naturally occurring voids oriented in the direction of rolling do not become potential leak paths.
(e) Sintered materials, which are often porous, should generally be avoided. These include many ceramics, which should not be employed in systems sensitive to outgassing until it has been firmly established that they are safe for this application.
(f) Do not use anonymous materials in critical vacuum applications – they should have an established provenance from a reputable supplier.
(g) Leak test potentially problematic materials, such as thin-wall tubing and brass bars, upon receipt.
(h) Many 300 series austenitic stainless steels are susceptible to degradation during welding ("weld decay") that can lead to corrosion and cracks. This can be avoided by using only 304L, 316L, 321, 347, and 348 grades.
(i) Brass can often have cracks or pores – especially when extruded or cast in air. "Free machining brass," which contains lead, can be problematic – especially following hard soldering. "Cartridge brass" – a 70/30 Cu/Zn alloy – is preferred.
(j) If tubing is required which must be soldered, copper–nickel alloy (e.g. 70/30 Cu/Ni) is preferable to stainless steel.
(k) Copper (especially OFHC) and aluminum (especially 6000 series) are considered good vacuum materials, but are difficult to weld reliably. Copper is easy to solder and braze, but these operations cannot be done so easily on aluminum.
6.6 Some insidious sources of contamination and outgassing

(a) Cleaning agents themselves are often contaminants in vacuum systems.
(b) Chlorinated solvents can be difficult to remove from vacuum surfaces and, in the case of stainless-steel bellows, can cause stress corrosion cracking.
(c) Acid cleaning of vacuum components can lead to hydrogen embrittlement (e.g. of welds), unless the hydrogen is removed by a low-temperature anneal.
(d) Vacuum greases are rarely needed in room-temperature vacuum systems, except for moving seals, and should be avoided if possible.
(e) Special efforts should be made to avoid getting oils, greases, and other like contaminants on leak-prone parts of a system, such as solder joints. Such contaminants could temporarily block leaks, and thereby inhibit leak testing.
(f) Cadmium is a very troublesome contaminant in vacuum systems – especially in those that must be baked – and should be carefully avoided.
(g) Fingerprints are a common form of contamination that can cause considerable outgassing in UHV systems, and are difficult to remove by normal vacuum baking procedures.
6.7 Joining procedures: welding, brazing, and soldering

6.7.1 Worker qualifications and vacuum joint leak requirements

(a) The creation of permanent vacuum joints for critical vacuum equipment, by welding, brazing, or soldering, should be done by professionals specializing in these activities, if possible.
(b) It is very desirable that firms involved in the joining of vacuum components have the ability to do helium leak testing of the completed items in-house.
6.7.2 General points

(a) The most desirable methods of joining are (in order of preference): welding, hard soldering (or brazing), and soft soldering.
(b) Mating surfaces must be completely clean before any joining operation is carried out.
(c) The removal of heavy contamination from surfaces using abrasive grinding compounds, or by sand or bead blasting, should be avoided.
(d) Brazing and soft-soldering alloys should be eutectic compositions, if possible.
(e) In the joints of systems that must be heated or cooled (e.g. UHV or cryogenic systems) the materials to be joined should be identical, or chosen so that the filler metal is always under compression during the temperature excursions.
(f) Connections should be designed so that it is possible to visually inspect the completed joint.
6.7.3 Reduced joint-count designs and monolithic construction

(a) It generally pays to reduce the number of joints in a vacuum system (while taking into account the need for modularity).
(b) In certain cases, particularly those in which vacuum joints (welds, braze joints, etc.) are presenting severe leak problems, it may be worthwhile to machine a vacuum component out of a single piece of material.
(c) Such monolithic construction also results in greater strength and rigidity, and has other advantages.
6.7.4 Welding

(a) Welds are normally made by arc welding using the "tungsten inert gas" or TIG process.
(b) The training, experience, and skill of the welder is a crucial factor in the creation of leak-free weld joints.
(c) Automatic or semi-automatic welding methods should be used wherever possible.
(d) The details of the design of a welded joint are an important element in its success.
(e) If 300 series stainless steels are to be used in a vacuum system, the particular alloy should be selected so as to avoid degradation in the form of "weld decay."
(f) Special expertise is required in order to weld aluminum so that the resulting joints are free of porosity.
(g) In the case of particularly difficult, delicate, or high-performance welding tasks, consider the possibility of using the "electron-beam welding" method.
(h) Leaks in welded connections may be the result of impurities, incompatible welding alloys, and wrongly set welding parameters.
6.7.5 Brazing

(a) Brazing is a useful method for making very high-quality vacuum joints, as long as it is done in a furnace under vacuum or in hydrogen. The use of hand-held torch brazing should be avoided, if possible.
(b) Avoid using brazing alloys containing Cd, Zn, or Pb (all high-vapor-pressure elements), if the joint is to be used in a vacuum system at pressures of less than 10−4 Pa, if the system is to be baked, or if a vacuum furnace is used to braze.
(c) If it is necessary to braze with a torch, care must be taken to avoid overheating the filler metal, which can lead to porosity.
(d) If torch brazing must be used, steps should be taken to minimize the entrapment of flux – e.g. by avoiding joint designs with blind holes, crevices, or tortuous passageways.
6.7.6 Soldering

(a) Unlike weld and braze joints, soldered ones should be considered non-permanent. Solder joints are also relatively weak, and are prone to fatigue failure caused by thermal cycling and vibrations.
(b) A major problem with solder is in obtaining adequate adhesion to some common vacuum materials, such as stainless steel.
(c) Leaks in solder joints are most often caused by the presence of voids in the solder, and regions of the soldered surfaces that have not been wetted by the solder.
(d) Solder joints should be made in a way that does not require the use of corrosive fluxes – which are generally inorganic mixtures such as ZnCl2 in hydrochloric acid.
(e) Organic acid fluxes, such as glutamic acid–urea types, are to be preferred. Rosin fluxes should not be used.
(f) For applications in environments at or near room temperature, particularly if large stress-inducing vibrations are present, the 95/5 Sn/Ag solder alloy is a useful one.
(g) Solder should generally be avoided in cryogenic applications, if possible. Nevertheless, if it must be used, the 63/37 Sn/Pb eutectic is a good choice.
(h) Solder alloys should be used consistently throughout an experimental area, or records should be kept or a marking scheme employed – since inadvertent mixing of different solders can cause reliability problems.
(i) Solder joints should be designed so that the solder supports substantially no mechanical load. This is usually done by using a sleeve joint.
(j) Because of the almost inevitable corrosion and leaks, the soldering of thin-wall stainless-steel tubing with flux (which must be corrosive in order to work) should be avoided.
(k) In the case of materials that are not readily soldered, the task can be made easier (without the need to use corrosive fluxes) by (1) ultrasonic soldering, (2) electroplating or sputter-coating the materials with more easily soldered ones, and (3) enveloping the parts with nitrogen gas to reduce oxidation and improve the wetting and flow properties of the solder.
6.8 Use of guard vacuums to avoid chronic leak problems

It is often possible to solve particularly difficult leak problems by using the "guard vacuum" (or "differential pumping") technique, in which two of the potentially leaky elements (such as electrical feedthroughs) are made to straddle a space that is continuously pumped.
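The benefit of a guard vacuum can be estimated with a back-of-the-envelope calculation. The sketch below is illustrative only (it is not from the text): all numbers are hypothetical, and the assumed linear scaling of leak rate with upstream pressure strictly holds for molecular-flow leaks. The inner seal now faces the guard-space pressure rather than full atmosphere, so the leak it passes into the main vacuum falls by roughly the ratio of those pressures.

```python
# Hedged sketch: estimating the benefit of a guard vacuum.
# Assumption: leak rate through a seal scales roughly linearly with the
# upstream pressure driving it (molecular-flow approximation).
P_ATM = 101_325.0    # Pa, atmospheric pressure
p_guard = 0.1        # Pa, continuously pumped guard-space pressure (hypothetical)
Q_single = 1.0e-6    # Pa*m^3/s, leak rate of one seal facing atmosphere (hypothetical)

# The inner seal faces the guard space instead of the atmosphere, so the
# leak it admits to the main vacuum is smaller by the factor p_guard/P_ATM.
Q_inner = Q_single * (p_guard / P_ATM)
print(f"Leak into main vacuum: ~{Q_inner:.1e} Pa*m^3/s "
      f"(reduction factor ~{P_ATM / p_guard:.0e})")
```

Even with a modest guard pressure, the reduction is about six orders of magnitude in this example, which is why the technique can tame otherwise chronic leaks.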
6.9 Some particularly trouble-prone components

(a) Brittle items, such as UHV windows, are particularly susceptible to damage during large temperature changes, and steps should be taken to limit the rate of change of temperature, and the size of the temperature gradients across them.
(b) Electrical feedthroughs involving glass-to-metal seals should be avoided, if possible. Ceramic-to-metal seals are much more robust.
(c) Commercial electrical feedthroughs making use of ceramic-to-metal seals are greatly preferable to homemade types, which normally involve epoxy.
(d) Forces transmitted through cables and accidental impacts are a common source of damage to electrical feedthroughs – cable restraints and protective covers can be used to prevent this.
(e) Cooling water lines using flexible metal bellows in the vacuum should be avoided, if possible, unless a guard vacuum can be provided.
(f) Similarly, water-to-vacuum seals should not be used inside a vacuum system, unless these can be equipped with a guard vacuum.
(g) Metal bellows should not be used in vacuum systems indiscriminately.
(h) Rolled or hydroformed bellows tend to be more reliable than edge-welded types, unless the bellows are being used to accommodate large axial movements, in which case edge-welded bellows are often preferable.
(i) Vibrations in bellows, caused by liquid or gas flow through them, time-dependent electromagnetic forces, or transport, can be a common cause of failure due to fatigue.
(j) Fatigue failure in flexible metal hose often occurs because of mistreatment in the form of overbending during manual handling.
(k) Bellows undergoing repeated and predictable movements should be designed so that stresses within them do not exceed the fatigue limit for the material (if the material has a fatigue limit).
(l) Stainless steel bellows should be protected from chemicals that can cause stress corrosion cracking – particularly substances containing chlorides (e.g. active soldering fluxes and commercial cleaning agents).
6.10 Diagnostics

(a) Keeping logbooks for vacuum apparatus can be very helpful in solving leak problems.
(b) Items to be leak tested should be clean and dry.
(c) In order to avoid wasting time chasing tiny leaks that will not disable the apparatus, consider making an estimate of the maximum leak size that a vacuum system can tolerate.
(d) The helium mass spectrometer leak-detection technique is the most useful and least problematic one, in general.
(e) The permeation of O-rings in the vacuum system and the leak detector by helium will eventually cause the background leak level to rise – steps should be taken to minimize the unnecessary release of helium during testing.
(f) The overlooking of leaks during helium leak testing can be avoided by finding out the response time of the vacuum system–leak detector combination following the entry of helium into the system.
(g) The leak testing of cryogenic systems (i.e. to find "cold leaks") can be very difficult and time consuming, especially if leaks involving superfluid helium ("superleaks") are present. Special leak-detection methods may be needed.
(h) If the usual room-temperature leak-testing techniques for finding a cold leak have failed, one may try cooling the vacuum system in liquid nitrogen, and spraying helium gas just above the surface of the liquid.
(i) Another approach for revealing cold leaks, which can also be useful for superleaks, is to thermally cycle the apparatus, in order to cause small leaks to widen, and therefore become detectable at 77 K, or even room temperature.
(j) If much time has already been expended in hunting cold leaks or superleaks, the most efficient approach may be to repair all items that are likely to be a cause of problems (e.g. reflow all relevant solder joints), or replace them.
(k) Relatively large room-temperature leaks can be detected and located even from great distances (e.g. tens of meters) by using the ultrasonic method.
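Estimating the maximum tolerable leak rate and the system response time are both one-line calculations. A minimal sketch in Python, with all numbers hypothetical (they are not taken from the text):

```python
# Hedged sketch of two leak-hunting estimates (all values hypothetical).
S = 0.05         # m^3/s, effective pumping speed at the chamber
p_max = 1.0e-4   # Pa, highest pressure the experiment can tolerate
V = 0.02         # m^3, volume of the vacuum system

# In steady state p = Q/S, so the largest tolerable total leak rate is:
Q_max = p_max * S    # Pa*m^3/s; chasing leaks far below this wastes time

# After helium enters a leak, the detector signal builds up with a
# characteristic time constant of roughly V/S; wait several of these
# at each spray point, or small leaks will be overlooked.
tau = V / S          # seconds
print(f"Q_max ~ {Q_max:.1e} Pa*m^3/s, response time constant ~ {tau:.1f} s")
```

Here the tolerable leak is 5 × 10⁻⁶ Pa·m³/s and the time constant is a fraction of a second; in a large, slowly pumped system the time constant can be far longer, which is one reason leaks get overlooked.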
6.11 Leak repairs

(a) Leaks should be repaired either by remaking connections (e.g. removing existing weld metal and re-welding), or by replacing leaky items.
(b) Repairs should not be done by patching (e.g. welding over the top of an existing weld), if possible.
(c) The use of varnishes, epoxies, greases, etc., to repair leaks should be avoided, except possibly as a temporary measure, especially in systems subjected to temperature extremes and thermal cycling.
References

1. N. G. Wilson and L. C. Beavis, in Handbook of Vacuum Leak Detection, W. R. Bottoms (ed.), American Institute of Physics, 1979.
2. G. L. Anderson, in Metals Handbook, 9th edn, Volume 17: Nondestructive Evaluation and Quality Control, ASM International, 1989.
3. D. V. Osborne, in Experimental Cryophysics, F. E. Hoare, L. C. Jackson, and N. Kurti (eds.), Butterworths, 1961.
4. N. Harris, Modern Vacuum Practice, 3rd edn, Nigel S. Harris, 2005. www.modernvacuumpractice.com. (First edition published by McGraw-Hill, 1990.)
5. T. Winkel and J. Orchard, Leak evaluation in JET and its consequences for future fusion machines, Report No. JET-P(89)57, JET Joint Undertaking, Abingdon, UK.
6. D. Brandon and W. D. Kaplan, Joining Processes: an Introduction, John Wiley and Sons, 1997.
7. G. K. White and P. J. Meeson, Experimental Techniques in Low-Temperature Physics, 4th edn, Oxford, 2002.
8. E. N. Smith, in Experimental Techniques in Condensed Matter Physics at Low Temperatures, R. C. Richardson and E. N. Smith (eds.), Addison-Wesley, 1988.
9. J. F. O'Hanlon, A User's Guide to Vacuum Technology, 2nd edn, John Wiley and Sons, 1989.
10. R. B. Scott, Cryogenic Engineering, D. Van Nostrand Company, 1959.
11. A. J. Croft, in Experimental Cryophysics, F. E. Hoare, L. C. Jackson, and N. Kurti (eds.), Butterworths, 1961.
12. A. J. Croft, Cryogenic Laboratory Equipment, Plenum, 1970.
13. W. H. Kohl, Handbook of Materials and Techniques for Vacuum Devices, American Institute of Physics, 1995.
14. Metals Handbook, 9th edn, Volume 3: Properties and Selection: Stainless Steels, Tool Materials and Special-Purpose Metals, American Society for Metals, 1980.
15. V. A. Wright, Jr., Stainless steel for ultra-high vacuum applications, Varian report VR-39; Varian Inc., 3120 Hansen Way, Palo Alto, CA, USA. www.varianinc.com
16. G. McIntosh, in Handbook of Cryogenic Engineering, J. G. Weisend II (ed.), Taylor & Francis, 1998.
17. D. A. Wigley, Materials for Low-temperature Use, Oxford, 1978.
18. J. E. VanCleve, in Experimental Techniques in Condensed Matter Physics at Low Temperatures, R. C. Richardson and E. N. Smith (eds.), Addison-Wesley, 1988.
19. Y. Shapira and D. Lichtman, in Methods of Experimental Physics, Volume 14: Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, 1979.
20. J. H. Moore, C. C. Davis, M. A. Coplan, and S. C. Greer, Building Scientific Apparatus, 3rd edn, Westview Press, 2002.
21. L. T. Lamont, Jr., in Methods of Experimental Physics, Volume 14: Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, 1979.
22. R. J. Reid, in CERN Accelerator School: Vacuum Technology, Proceedings (CERN 99-05), S. Turner (ed.), CERN, 1999, pp. 139–54.
23. Y. Tito Sasaki, J. Vac. Sci. Technol. A 9, 2025 (1991).
24. G. F. Weston, Ultrahigh Vacuum Practice, Butterworths, 1985.
25. EWI, 1250 Arthur E. Adams Drive, Columbus, Ohio. www.ewi.org
26. TWI Ltd, Granta Park, Great Abington, Cambridge. www.twi.co.uk
27. E. A. Brandes, Smithells Metals Reference Book, 6th edn, Butterworths, 1983.
28. N. Milleron and R. C. Wolgast, in Methods of Experimental Physics, Volume 14: Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, 1979.
29. P. T. Vianco, Soldering Handbook, 3rd edn, American Welding Society, 1999.
30. A. Roth, Vacuum Sealing Techniques, American Institute of Physics, 1994.
31. R. D. Taylor, W. A. Steyert, and A. G. Fox, Rev. Sci. Instrum. 36, 563 (1965).
32. D. A. Wigley and P. Halford, in Cryogenic Fundamentals, G. G. Haselden (ed.), Academic Press, 1971.
33. Metals Handbook, 9th edn, Volume 6: Welding, Brazing and Soldering, American Society for Metals, 1983.
34. R. C. Thomas, Solid State Technology, September 1985, p. 153.
35. W. J. Tallis, in Cryogenic Engineering, B. A. Hands (ed.), Academic Press, 1986.
36. F. Pobell, Matter and Methods at Low Temperatures, 2nd edn, Springer, 2002.
37. T. Caulfield, S. Purushothaman, and D. P. Waldman, Adv. Cryo. Eng. 30, 311 (1984).
38. A. Nyilas and J. Zhang, 4 K tensile measurements of different solder alloys, Kernforschungszentrum Karlsruhe (now Forschungszentrum Karlsruhe) Internal Report, Contract 5183/90 of Project EW-293607, Oct. 1990.
39. R. K. Kirschman, W. M. Sokolowski, and E. A. Kolawa, J. Elect. Pack. 123, 105 (2001).
40. J. W. Ekin, Experimental Techniques for Low-Temperature Measurements: Cryostat Design, Material Properties, and Superconductor Critical-Current Testing, Oxford University Press, 2006.
41. M. Judd and K. Brindley, Soldering in Electronics Assembly, 2nd edn, Newnes, 1999.
42. H. J. Lewandowski, D. M. Harber, D. L. Whitaker, and E. A. Cornell, J. Low Temp. Phys. 132, 309 (2003).
43. R. S. Germain, in Experimental Techniques in Condensed Matter Physics at Low Temperatures, R. C. Richardson and E. N. Smith (eds.), Addison-Wesley, 1988.
44. A. Mosk, in Interactions in Ultracold Gases: From Atoms to Molecules, M. Weidemüller and C. Zimmermann (eds.), John Wiley & Sons, 2003.
45. C. A. Bailey and B. A. Hands, in Cryogenic Engineering, B. A. Hands (ed.), Academic Press, 1986.
46. W. F. Egelhoff, Jr., J. Vac. Sci. Technol. A 6(4), 2584 (1988).
47. G. L. Fowler, J. Vac. Sci. Technol. A 5(3), 390 (1987).
48. J. F. O'Hanlon, in Encyclopedia of Applied Physics, Volume 23, G. L. Trigg, E. S. Vera, and W. Greulich (eds.), Wiley-VCH, 1998.
49. A. M. Putnam and D. M. Lee, J. Low Temp. Phys. 101, 587 (1995).
50. N. H. Balshaw, Practical Cryogenics: An Introduction to Laboratory Cryogenics, Oxford Instruments Superconductivity Limited, Old Station Way, Eynsham, Oxon, England, 2001. www.oxford-instruments.com
51. E. Suaudeau and E. D. Adams, Cryogenics 30, 77 (1990).
52. G. Nunes, Jr., and K. A. Earle, in Experimental Techniques in Condensed Matter Physics at Low Temperatures, R. C. Richardson and E. N. Smith (eds.), Addison-Wesley, 1988.
53. M. G. Rao, Advances in Cryogenic Engineering 41, 1783 (1996).
54. G. D. Martin, in Proceedings, IEEE Thirteenth Symposium on Fusion Engineering, Cat. No. 89CH2820-9, M. S. Lubell, M. B. Nestor, and S. F. Vaughan (eds.), IEEE (1990), pp. 360–363.
55. E. S. Ensberg, J. C. Wesley, and T. H. Jensen, Rev. Sci. Instrum. 48, 357 (1977).
56. A. S. Bandes, AIPE Facilities 22, No. 1, 41 (1995).
57. R. L. Holtz and C. A. Swenson, Rev. Sci. Instrum. 56, 329 (1985).
58. A. J. Croft, Cryogenics 3, 65 (1963).
59. J. T. Yates, Jr., Experimental Innovations in Surface Science: a Guide to Practical Laboratory Methods and Instruments, Springer-Verlag, 1998.
7 Vacuum pumps and gauges, and other vacuum-system concerns
7.1 Introduction

Aside from leaks, which are discussed in Chapters 6 and 8, and outgassing, which is discussed in Chapter 6, other kinds of problem often afflict vacuum apparatus. For instance, vacuum pumps are frequently mechanical devices, and can suffer from the reliability difficulties that are common to machines in general. Contamination is often a concern in vacuum systems, and the pumps themselves are frequently sources of contaminants. These often take the form of pump oils and their vapors, and various types of particulate matter. Besides these general difficulties, there are numerous others that are peculiar to each particular type of pump. Vacuum gauges are generally affected by contamination, and are often degraded by substances released from the pumps. The vibrations produced by some vacuum pumps can, in addition to increasing the likelihood of leak problems, also damage delicate structures inside certain vacuum gauges. (These points are also relevant for mass spectrometers, which are common items in vacuum equipment.) These issues, and others pertaining to the use of vacuum equipment, are discussed in the following chapter.¹
7.2 Vacuum pump matters

7.2.1 Primary pumps

7.2.1.1 General issues concerning mechanical primary pumps

Perhaps the most serious problem pertaining to the use of mechanical vacuum pumps concerns not difficulties with the pumps themselves, but those that they may cause to the rest of the system in the form of contamination – especially by pump oil.

Mechanical pumps generally are sensitive to the presence of abrasive dust. In some cases (as with rotary vane pumps), it may be possible to provide sufficient protection
¹ NB: Any recommendations made for inspecting or maintaining such equipment should be checked by referring to documentation provided by the manufacturer.
for the device using purpose-made dust filters. These can be obtained from the pump manufacturers. Some pump designs, such as scroll and certain diaphragm-based devices, are intrinsically unsuited for use with abrasive dust-laden gases [1,2]. Dusty gases are often produced in certain vacuum-based materials-preparation processes, such as chemical vapor deposition (CVD).

The presence of belt drives in mechanical pumps is an inherent weakness, because of possible belt breakage and slippage, and the need for belt replacement. Most modern mechanical pump designs use direct-drive arrangements that allow one to avoid these potential failure modes. If a pump with a belt is being used, periodic adjustment of the belt tension may be necessary. Replace frayed belts [1].

Vibrations produced by mechanical pumps sometimes result in failures in the vacuum system. For example, they may lead to fatigue cracking and leaks in solder joints, bellows, or other items (see the discussions in Chapter 6). Another possibility is the loosening of threaded fasteners on demountable connections, which can ultimately result in their complete disengagement. Vibrations can also cause the release of troublesome particulate matter in some vacuum systems, such as those used for thin-film deposition. Interference with vibration-sensitive measurements is another potential problem. Vibration issues in general are discussed in Section 3.5.

Because of their asymmetric layout, rotary-piston pumps² are inherently prone to producing vibrations. The manufacturers of these devices may reduce vibration levels by incorporating suitable balancing arrangements into the design. Rotary-vane pumps (especially the newer kinds) are significantly better with regard to intrinsic vibration levels. If a pump (of any type) is driven with a V-belt, large amounts of vibration can result if the belt is poorly made, non-uniform, or unbalanced [3]. A loose belt can also have this effect.
Such belt-related problems can be avoided by using direct-drive pumps. Most modern oil-sealed mechanical pumps are designed to achieve comparatively low noise and vibration levels [1]. More information about pump vibrations is provided on page 80.
7.2.1.2 Oil-sealed mechanical primary pumps

Prevention of contamination from pump oil

The most common kind of primary pump is the rotary-vane type, in which the moving parts are lubricated and sealed by oil. In principle, avoiding the contamination of vacuum systems by the oil in mechanical pumps (to the extent that it can be avoided) just involves installing an activated-alumina "foreline trap" between the mechanical pump and the high-vacuum pump (e.g. a diffusion pump). (Activated alumina is a highly porous material that adsorbs the back-migrating oil.) Foreline traps are available from the pump manufacturers.

One must ensure that the activated alumina is replaced before it becomes saturated with oil. This is done by removing the basket containing the alumina from the trap, and checking for discoloration of the (normally white) material. If such discoloration extends more than
² Large oil-sealed mechanical primary pumps are often rotary-piston types. However, these are relatively uncommon compared with rotary-vane pumps.
a third of the way through the bed of alumina, then it should be replaced [1]. Some traps are available with transparent housings, which make it possible to inspect the alumina without having to dismantle the trap. Foreline traps generally last a considerable time before replacement of the alumina is necessary. Under conditions of normal use, saturation will usually take place after at least three months [4]. Nevertheless, the failure to periodically inspect the trap is a potential reliability problem for systems that must be kept clean. In fact, foreline traps are, in reality, almost never maintained, so that the protection that they provide is largely illusory [4].

Liquid-nitrogen traps can also be used to prevent oil contamination [1]. However, since these devices must be constantly filled with nitrogen, the probability of accidents due to error is even greater than with activated-alumina types.

A much better way of reducing the back-migration of mechanical pump oil is to leak dry nitrogen into the foreline. This is done at such a rate that the pressure in the foreline is in the viscous-flow regime. At this pressure, the mean free path of gas particles is determined by collisions with other particles (rather than collisions with the walls of the pumping line, as would be the situation for molecular-flow conditions). In this way, the nitrogen molecules act as a barrier that prevents oil vapor from moving upstream. A pressure of about 4 Pa has been used in one arrangement [5]. Although air can also serve this purpose (see page 172), dry nitrogen is preferred because it does not introduce oxygen or water vapor into the vacuum system. The dry nitrogen may be obtained from a gas cylinder. Alternatively, in some laboratories it is continuously available on tap – originating as boil-off from a central liquid-nitrogen storage tank. The leak rate of nitrogen into the foreline can be set either with an adjustable leak valve or a calibrated leak.
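The viscous-flow condition behind the nitrogen-bleed scheme can be checked with a back-of-the-envelope mean-free-path estimate. The sketch below is illustrative only: the foreline diameter and the N2 kinetic diameter are assumed values (only the ~4 Pa pressure comes from the text), and the standard kinetic-theory formula is used.

```python
import math

# Mean free path of N2 at the ~4 Pa foreline pressure cited above.
k_B = 1.380649e-23     # J/K, Boltzmann constant
T = 295.0              # K, room temperature
d_N2 = 3.7e-10         # m, approximate kinetic diameter of N2 (assumed)
p = 4.0                # Pa, foreline pressure
D_line = 0.025         # m, hypothetical foreline inner diameter

# Kinetic theory: lambda = k_B*T / (sqrt(2) * pi * d^2 * p)
lam = k_B * T / (math.sqrt(2) * math.pi * d_N2**2 * p)
print(f"mean free path ~ {lam * 1e3:.1f} mm, line diameter {D_line * 1e3:.0f} mm")
# lam comes out well below D_line, so gas-gas collisions dominate and
# the nitrogen bleed can sweep oil vapor back toward the pump.
```

With these numbers the mean free path is of order a millimeter, comfortably smaller than a typical foreline bore, which is consistent with the viscous-flow picture described above.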
If the roughing pump is being used in conjunction with a diffusion pump, the foreline pressure must be kept below the critical backing pressure of the latter (see page 196).

The nitrogen-leak method can be a reliable and low-maintenance way of reducing back-migration of oil. Unlike the situation when foreline traps are used, the nitrogen-leak arrangement can be made immune to operator error. For example, an automatic interlock device can be used to prevent the pump from operating unless the pressure in the foreline is sufficiently high. The nitrogen-leak method is described in more detail in Ref. [5].

Another genuinely reliable way of reducing the back-migration of mechanical pump oil is to use an oil with a low vapor pressure, such as "Convoil-20" or "Invoil-20" [4]. This approach can, of course, be combined with other methods to reduce the likelihood of back-migration.

Another potential cause of contamination from oil-sealed pumps is an effect known as "suck-back", which involves the bulk movement of oil from the pump into the rest of the vacuum system under the influence of atmospheric pressure. This can occur if the pump is switched off (or shuts off during a power failure) while the system to which it is connected is still under vacuum. While this behavior can be prevented by manually shutting off the valve connecting the pump to the rest of the system, or by venting the system, it is unwise to depend on such measures. Although most modern oil-sealed pumps contain an internal valve that automatically prevents this movement of oil [1], some pumps (especially older designs) may not have such a feature. In situations in which contamination is an important issue, it is essential to ensure that a "safety valve," "inlet isolation valve," or some other automatic
suck-back prevention arrangement is present within the pump or mounted externally, and that it is functioning properly. Externally mounted safety valves can be obtained for pumps that are not supplied with one.
Pump problems due to oil degradation

Nearly all internal problems with oil-sealed mechanical pumps are the result of degradation of the pump oil, and the failure to change it at regular intervals [4]. Contamination of the oil is usually the major issue [1]. The resulting difficulties include a failure of the pump to reach the normal base pressure, or low pumping speeds. It is normally possible to tell whether the oil is degraded, because it becomes discolored or contaminated with particles. Such discoloration typically involves changing from light brown to dark brown or black, or perhaps taking on a cloudy appearance. (The latter indicates the presence of a foreign liquid, such as water, in the oil.) At least one vacuum firm sells “oil change reminder tags,” which can be fixed to a vacuum system in order to prompt users to replace the oil when it takes on the hue illustrated on the tag. At any rate, under average conditions, the oil in oil-sealed pumps should be changed every six months. (If a pump is being used in a clean application, such as backing a diffusion pump used to keep a charged-particle beam instrument under vacuum, it can operate for years without needing an oil change [25].) When the device is used to pump very dirty systems, more frequent replacement may be necessary. This action involves draining the old oil, flushing the pump with a purpose-made flushing agent, and filling it with new oil. Most difficulties with oil-sealed pumps are solved by carrying out this procedure. Rotary pump oil levels should generally be checked regularly (daily or weekly), especially in the case of pumps that are frequently cycled to atmospheric pressure [1,4]. Oil is typically lost from pumps in the form of a mist of droplets that exit via their discharge ports. If oil loss is a problem, it can be prevented by installing an oil separation and return device (i.e. an oil-mist filter), which captures the droplets and sends the oil back to the pump.
Oil-mist filters also prevent pollution of laboratory air by pump oil. Special rotary pump oils are available that have a lower vapor pressure and considerably longer service lives than the ordinary mineral oils usually used in pumps. These include “doubly distilled” fluids, and “technical white” or “TW” oil. When used with reactive gases, the latter has a service life that is two to three times longer than that of doubly distilled fluid under similar conditions. Pump oils comprising perfluoropolyethers (PFPE) are almost completely inert, and are appropriate for pumping the most reactive gases, including pure oxygen [1].
Leaks

Oil leaks along the drive shaft are another potential trouble with mechanical pumps. Drive-shaft seals should be inspected for such problems every six months [1]. (Recommendations in this section are for pumps used under average conditions.) Leaks at exhaust valve seals are yet another source of troubles, and these should be examined annually. Other possible causes of leaks in these pumps are operators not properly closing the gas ballast valve
after a ballasting operation, and faulty gas ballast valves. (Ballasting involves allowing a small amount of air into the pump when moisture-laden gases are being pumped in order to prevent the condensation of moisture and its consequent emulsification with the oil.) Gas ballast valve seals should be examined annually.
Relative reliability

Oil-sealed mechanical pumps are highly reliable devices as long as they are used correctly and maintained – especially with regard to regular oil changes. Under such conditions, they very seldom fail. Ensuring that they are adequately maintained is often a problem, especially in a typical university physics laboratory environment. Nevertheless, of all the available positive-displacement primary pumps (including in particular oil-free “dry pumps”), the oil-sealed types are probably the most robust and trouble-free, even when maintenance is neglected. Also, if an oil-sealed pump is being used to evacuate a clean vacuum system, it can operate for years without maintenance. The oil in these pumps very effectively serves several functions that are conducive to good reliability. It acts as a self-repairing vacuum seal and lubricant, a heat-transfer medium, and as a flushing agent to remove particles from the pump [6]. The oil also protects pump materials from corrosion [1]. A potentially important advantage of oil-sealed pumps is that they tend to fail gradually (due to slow degradation of the oil) – not suddenly. Hence, maintenance does not normally have to be carried out at the first signs of trouble, but can often be delayed until a convenient moment.
7.2.1.3 Oil-free scroll and diaphragm pumps, and other “dry” positive-displacement primary pumps

In many applications where oil contamination must be absolutely avoided, various types of mechanical primary pumps which do not make use of oil seals have become popular. These include, for example, “scroll pumps,” “diaphragm pumps,” and multistage “claw” or “Roots + claws” designs. This class of pumps is mainly intended for applications in which oil must be completely eliminated, and not those in which reliability is important [1]. Even when regular maintenance is carried out, early failures can still be a problem. However, in those situations in which avoiding oil is a key requirement, these devices can be very useful. There is, of course, always the possibility of employing some kind of redundancy arrangement if the reliability of a single pump is not sufficient. The scroll design is a relatively common example of a dry pump, which is often used in place of an oil-sealed type in small pumping stations. Scroll pumps have had a reputation for poor reliability in the past (see, e.g., Ref. [7]). However, these devices are unforgiving of inadequate routine maintenance, and it is possible that at least some of the reported reliability problems have stemmed from this cause. Although it has been claimed that newer scroll pump designs are superior in this regard, they are probably not yet as reliable as oil-sealed pumps.
As mentioned above, scroll pumps will not tolerate abrasive dust [1]. Although they do not release oil, scroll pumps will generate a small amount of dust from the moving seal, and steps may have to be taken to prevent this from entering the working vacuum environment [8]. Unlike oil-sealed pumps, scroll pumps contain no liquids to carry away accumulated particulate matter (whether of internal or external origin). Hence, it is necessary to occasionally flush these devices by having them pump air at atmospheric pressure for a few minutes. This should be done on a regular basis. Normally, dynamic seals in scroll pumps must be replaced about once a year [1]. However, depending on the particles present in the gases being pumped, this interval may be reduced to a few months in practice [6]. Like modern rotary-vane vacuum pumps, scroll pumps are relatively quiet and low-vibration machines [1]. Diaphragm pumps are, like scroll types, often used for roughing or backing purposes. The main problem with these devices is fatigue failure of the diaphragm. It is essential to replace this item regularly [1]. The replacement interval is typically about 5000 h of continuous use [6]. A discussion of other dry pumps can be found in Ref. [1].
7.2.1.4 Sorption pumps

Materials with a high surface area, such as molecular sieve (e.g. aluminum calcium silicate), which have been cooled to liquid-nitrogen temperatures (77 K), have a high affinity for atmospheric gases. Such materials are often used in pumps for rough-pumping apparatus that can tolerate absolutely no oil contamination – such as ultrahigh-vacuum systems. These “sorption pumps” have no moving parts, and are, at least in principle, very reliable. However, unlike the pumps described so far in this chapter, they are not positive-displacement devices and have a limited pumping capacity. They operate on a cyclic basis, whereby they are cooled with liquid nitrogen and allowed to pump, and then warmed to room temperature in order to release their adsorbed gases. Furthermore, heating of the pumps (to about 300 °C) is occasionally necessary in order to drive off moisture. As described on pages 148–149, the molecular sieve tends to degenerate after many thermal cycles, and can represent a contamination problem in its own right [9]. The real-life reliability of sorption pumps is, of course, dependent on the availability of liquid nitrogen. In actual laboratory situations, being able to obtain this substance precisely when it is needed is sometimes difficult. In this sense, the reliability of sorption pumps can be poor.
7.2.2 High-vacuum pumps

7.2.2.1 Diffusion pumps

Introduction

Unlike the devices discussed up to now, the diffusion pump is a high-vacuum gas-transfer pump, which must always be used in combination with a mechanical backing pump. Generally, the latter will be an oil-sealed device. Although diffusion pumps are considered
to be intrinsically reliable, in reality their dependability cannot be considered in isolation from that of their backing pumps and other associated components, such as fans or cooling water systems, and any automatic protection devices.
Vacuum-system contamination

Since diffusion pumps use directed jets of oil vapor, produced by heating oil using an electric heater at the base of the pump, contamination of the vacuum chamber by this oil is often the greatest concern. In this regard, the most important requirement is to always ensure that the pressure maintained by the mechanical pump in the foreline (or “backing line”) is below a critical pressure – the “critical backing pressure” (CBP) [1]. Above this pressure, the diffusion pump will “stall,” and oil vapor will migrate into the vacuum chamber – usually causing severe contamination. (Migration of oil into the vacuum chamber is called “backstreaming.”) Harmful rises in foreline pressure can be caused by, e.g., leaks in the foreline, a failure in part of the heater, a loose belt on the mechanical pump, or a low mechanical pump oil level [4]. In order to condense the oil vapor, diffusion pumps must be cooled, either with water or forced air. If air-cooling is used, or if the vacuum chamber must be kept particularly clean, then it is usually necessary to have a liquid-nitrogen trap at the inlet of the diffusion pump, in order to ensure that no oil vapor enters the vacuum chamber. If the liquid-nitrogen trap on an air-cooled pump should run dry, or (especially) if either the cooling fan fails or the water supply for a water-cooled pump breaks down, then it is likely that the chamber will become contaminated. (This will occur unless, of course, automatic protection devices are in place to prevent it.) If the cooling efficiency is merely reduced, then a reduction of the pumping ability can be expected [1]. (Sections 8.4 and 11.8.3 contain discussions on water-cooling issues and fan problems.) Normally, liquid-nitrogen traps reduce oil contamination to negligible levels. However, under some conditions, severe contamination may result because of a curious phenomenon known as the “Herrick effect” [10].
This involves the fracture and fragmentation of the film of oil that has built up on the trap, when the film freezes following the addition of extra liquid nitrogen to the trap. The release of the elastic energy contained in the solidified film of oil causes the fragments to be propelled with considerable energy in all directions. Some of these fragments land upstream of the trap, where they melt and result in contamination. The use of appropriately designed traps will reduce this behavior [10]. One should also avoid placing a trap too close to an unprotected diffusion pump. Baffles should be installed between the pump and the trap in order to condense oil vapor and return it to the pump, before it has a chance to accumulate on the trap. If it is necessary to allow the trap to warm up in order to desorb it, one can also reduce risks by ensuring that the high-vacuum valve remains closed during and shortly after refilling the trap with nitrogen. This is done in order to lower the probability that fragments of oil, and products of desorption, will enter the main vacuum chamber (see Ref. [10]). In dealing with diffusion pump oil contamination issues, one must not ignore the risk of oil contamination from the mechanical foreline and roughing pump [4]. Another possible form of contamination from a diffusion pump is water vapor. It is not unknown for water in baffles and cold caps in a water-cooled diffusion pump to leak into the vacuum space [1].

(Footnote: The “vacuum chamber” is the part of the vacuum system which must be evacuated by the various pumps, and where the primary activity (e.g. thin-film deposition, experimental measurements, etc.) takes place.)
Some causes of harm to diffusion pumps

Although they can cause oil contamination difficulties, and can have problems relating to the working of ancillary items such as cooling fans, cooling water systems, and foreline pumps, diffusion pumps are themselves very reliable devices and hard to damage. They are very simple, have no moving parts, and can run for many years without maintenance. Unlike other pumps with similar capabilities, diffusion pumps are relatively tolerant of contamination such as dust. One of the few ways of harming a diffusion pump is to allow some kinds of pump oil (mainly “paraffinic” types) to be exposed to air at atmospheric pressure while hot. This can result in oxidation of the oil, which will cause carbonaceous deposits (tar-like substances) to build up in the pump. The formation of such deposits may necessitate that the pump be dismantled and cleaned. (The latter is not a trivial task, since manual abrasion and shot blasting are required – see the description in Ref. [1].) Certain types of diffusion pump oil, such as silicones, are much more resistant to oxidation, and are therefore preferable from this point of view. However, silicones can present contamination problems (see page 97). They are not suitable for use in vacuum systems that must be kept very clean. Silicones are also unsuited for use in systems which contain devices that make use of charged particle beams. This is because these compounds break down under charged-particle bombardment to form insulating films on electrode surfaces [10]. Other types of oil, such as polyphenyl ether, are also very resistant to oxidation, and do not have the contamination risks of silicones. Furthermore, polyphenyl-ether based diffusion-pump oils are the least troublesome of the various types when exposed to electron beams [11]. However, they are relatively expensive.
Decomposition of some types of oil, and the formation of carbonaceous deposits in the pump, can also take place if the oil temperature becomes excessive [4]. This can occur if the cooling water fails, or the level of oil in the diffusion pump is too low.
Automatic protection devices

Diffusion pumps can be very unforgiving of human error – usually not in the sense of becoming damaged, but by causing contamination to other parts of the vacuum system. Such errors include, for example, allowing cold traps to run dry, overloading the diffusion pump with gas for an excessive time, or (perhaps the worst possible mistake) bringing a system up to atmospheric pressure through the foreline when the diffusion pump is operating. Automated protection systems are recommended to prevent errors (as well as cooling water or mains power failures) from resulting in contamination disasters.
For example, a pressure gauge in the foreline can be used to actuate a valve and thereby isolate the diffusion pump, and to shut off the heater power, if the pressure rises excessively [10]. A thermal switch on the exterior of the pump can detect a condition of overheating (e.g. due to inadequate cooling water), and a signal from it can be used to shut off the heater power, and close the high vacuum and foreline valves. Other harmful conditions, such as a loss of liquid nitrogen in the cold trap, or a loss of diffusion pump fluid, can also be sensed and used to place the pump in a safe state. Loss of a.c. mains power can also result in backstreaming [4]. Hence, things should be arranged so that all valves in the vacuum system, with the exception of the roughing pump vent, automatically close upon loss of mains power or compressed air. (Compressed air is often used to actuate valves that are under automatic control.) It is possible to obtain gate valves that close automatically if the power fails. Discussions of such automated systems can be found in Refs. [4] and [11]. Operating procedures for diffusion pumps are provided in Refs. [1], [4], and [10].
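The protective logic described above can be summarized as simple decision rules. The Python fragment below is purely illustrative: the sensor names, threshold values, and action labels are invented for the example (a real installation would use dedicated interlock hardware), but it captures the fault-to-action mapping the text describes.

```python
def diffusion_pump_interlock(foreline_pressure_pa, pump_body_temp_c,
                             ln2_trap_ok, mains_ok,
                             critical_backing_pa=25.0, max_body_temp_c=80.0):
    """Return the protective actions for the given sensor readings.
    All thresholds here are illustrative placeholders."""
    actions = []
    if not mains_ok:
        # On loss of mains power, all valves except the roughing-pump
        # vent should close automatically.
        return ["close_all_valves_except_roughing_vent"]
    if foreline_pressure_pa > critical_backing_pa:
        actions += ["close_high_vacuum_valve", "close_foreline_valve",
                    "shut_off_heater"]
    if pump_body_temp_c > max_body_temp_c:   # e.g. cooling-water failure
        actions += ["shut_off_heater", "close_high_vacuum_valve",
                    "close_foreline_valve"]
    if not ln2_trap_ok:                      # cold trap running dry
        actions += ["close_high_vacuum_valve"]
    return sorted(set(actions))              # de-duplicate overlapping actions

# Example: foreline pressure above the critical backing pressure
print(diffusion_pump_interlock(40.0, 50.0, ln2_trap_ok=True, mains_ok=True))
```

The point of the sketch is that each fault condition maps to a fail-safe state independently, so that any combination of faults still leaves the system protected.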
Use of diffusion pumps in UHV systems

It is quite possible to use diffusion pumps to evacuate apparatus that must be kept very clean – even UHV systems [4]. Achieving this aim involves carefully designing and building the pumping system (including the use of special traps), and vigilantly following the correct procedures while running it. Freedom from hydrocarbon contamination originating from the pumps requires that these procedures be followed without lapses. Although the use of automatic protection devices is highly desirable, these are difficult (although by no means impossible) to construct for UHV systems, because bakable metal valves are usually manually operated [9]. Furthermore, an important principle in designing systems for high reliability is that it is better to use technologies that are intrinsically reliable, rather than compensating for inadequate reliability with added protection devices. Hence, most workers who are concerned about the cleanliness of their vacuum systems prefer to use high-vacuum pumps that are inherently incapable of generating oil contamination, such as turbomolecular, cryo-, and ion pumps.
(Footnote: Such a strategy is more easily implemented for systems operating at the high-pressure end of the UHV range, where it is possible to use O-ring sealed valves.)

7.2.2.2 Turbomolecular pumps

Introduction

A modern method of avoiding the potential contamination problems posed by diffusion pumps involves the use of “turbomolecular pumps,” or “turbopumps.” These devices, like diffusion pumps, are gas-transfer pumps, and hence can pump gases continuously, without the need for regeneration. Like diffusion pumps, turbopumps must be backed by mechanical backing pumps, and the need for the latter must be considered when the overall reliability of a turbopump system is assessed. However, unlike diffusion pumps, a turbopump has no need of a liquid-nitrogen trap, a high-vacuum valve, or even (if an oil-sealed forepump is used, unless the highest levels of cleanliness are needed) a foreline trap [1]. An ordinary turbopump cannot be directly backed by a scroll pump or a diaphragm pump (which would provide an intrinsically clean pumping system). However, a similar kind of high-vacuum pump, called a “turbomolecular drag” (or “turbo-drag”) pump, can be used in this way. Cooling water is not normally needed for turbopumps – passive or active air cooling is usually employed in all but the largest devices. Essentially, a turbopump consists of a kind of fan or turbine mounted on the end of a motor shaft. Unlike normal fans, turbopumps operate at pressures at which the movement of gases involves “molecular” rather than “continuum” flow – so that every atom and molecule that is pumped must be struck by a turbine blade directly. The speed of the blade must be roughly comparable with the room-temperature thermal velocities of the atoms and molecules in the vacuum. What this means in practice is that the turbopump fan must revolve at very high rotational speeds – 50 000 RPM is typical, and speeds of about 70 000 RPM are used in small pumps.
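The connection between rotation speed, blade tip speed, and thermal velocity can be checked with a quick estimate. In the sketch below, the 4 cm rotor radius is an assumed, illustrative figure: it is chosen so that a 50 000 RPM rotor gives a tip speed of the same order as the mean thermal speed of nitrogen at room temperature.

```python
import math

K_B = 1.381e-23   # Boltzmann constant, J/K
AMU = 1.661e-27   # atomic mass unit, kg

def mean_thermal_speed(mass_amu, t_k=293.0):
    """Mean Maxwell-Boltzmann speed, sqrt(8kT/(pi*m)), in m/s."""
    m = mass_amu * AMU
    return math.sqrt(8.0 * K_B * t_k / (math.pi * m))

def blade_tip_speed(rpm, radius_m):
    """Linear speed of a blade tip, in m/s."""
    return 2.0 * math.pi * (rpm / 60.0) * radius_m

v_n2 = mean_thermal_speed(28.0)          # nitrogen at room temperature
v_tip = blade_tip_speed(50_000, 0.04)    # 4 cm rotor radius (assumed)
print(f"N2 mean thermal speed ~ {v_n2:.0f} m/s")   # ~470 m/s
print(f"blade tip speed       ~ {v_tip:.0f} m/s")  # ~209 m/s
```

The tip speed is indeed within a factor of a few of the molecular thermal speed, which is why such extreme rotation rates are unavoidable in this type of pump.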
Vulnerability to damage

Because of this high rotational speed, and the resulting high linear speed of the blades (roughly 200 m·s⁻¹), turbopumps are very sensitive to damage by impact resulting from the entry of particles and small objects. This is their major weakness, and because of it, turbopumps (unlike diffusion pumps) cannot be considered to be rugged devices. One instance of such a failure, which occurred in the author’s laboratory, involved the implosion of a glass vessel that was being evacuated by a turbopump. This event caused glass debris to fly into the machine. The repair of a pump that has been damaged in this way can be very expensive. It is normal for turbopumps to be provided with a metal screen in order to prevent the largest objects from reaching the blades. However, the use of such a screen invariably diminishes the pumping speed, so that making the mesh size arbitrarily small is not an option. By using special blade materials, some types of turbopump can be made to cope with impacts by smaller particles, including those generated in certain materials preparation processes such as “chemical vapor deposition.” In the case of turbopumps with ball bearings, the entry of dust or corrosive substances into the pump has the potential not only to damage the turbine blades, but also to accelerate wear of the bearings [1]. Despite the drawback of potentially catastrophic damage due to particle or object impact, in practice, with reasonable care, modern turbopumps will generally run for a long time before breaking down. In this sense, even though they are not rugged devices, they are (in the absence of abuse) highly reliable ones. (This was not always the case – early turbopumps had a reputation for poor reliability.) Most modern turbopumps employ ball bearings with ceramic balls. The use of ceramics has considerably increased bearing reliability, compared with earlier designs based on metal balls.
Magnetic-bearing turbopumps

One class of turbopumps makes use of “magnetic bearings,” in which the rotor is levitated using magnetic fields. Since such pumps do not contain contacting surfaces in high-speed
relative motion, their reliability is very high, and their lifespan is limited only by the driving electronics. Turbopumps with ball bearings must employ some lubrication in the ball races. Usually this is some type of low vapor-pressure grease, and the turbopump is designed in such a way that the bearings are on the high-pressure side of the turbine, so that vacuumsystem contamination by lubricant vapors is normally acceptably small. However, for some applications, the potential for contamination by the lubricant may still be unacceptable (see below). Magnetically levitated turbopumps have an advantage in this regard, because absolutely no lubrication is needed. In a dirty vacuum environment, the uneven accumulation of contamination on the blades of turbopumps with ball bearings can cause vibration and other problems due to a loss of dynamic balance. However, magnetically levitated turbopumps with active radial bearings (as opposed to passive ones) can automatically compensate for the presence of asymmetric forces due to contaminants or large gas loads. Hence, they are well suited to pumping unclean vacuum systems. Yet another advantage of using turbopumps with magnetic bearings is that they have exceptionally low vibration levels, in comparison with those that contain ball bearings. In the latter case, large vibrations can result if the turbopump happens to drive some object, such as a support structure or pipework, at its resonant frequency. (A discussion of turbopump vibration difficulties is provided in Ref. [12].) For all the advantages of magnetically levitated turbopumps one pays a high price. Turbopumps in general are relatively expensive when compared with diffusion pumps, and magnetically levitated ones are much more so. The loss of mains power to a magnetic-bearing turbopump is not a disaster. Such an event causes the suspension-system electronics to switch over to internal backup batteries while the turbine comes to rest. 
Turbopumps with magnetic bearings are sensitive to mechanical shock while operating. Acceleration levels of as little as 1 g can cause problems [1].
Possible contamination from ball-bearing turbopumps

Although turbopumps with ball bearings can be considered to be very clean devices, there is a potential for contamination from the bearing lubricants if the turbine is allowed to come to rest while the system is still evacuated. This can be prevented by venting the pump from the high vacuum side (preferably with dry nitrogen) while the blades are still rotating [1]. It may be desirable to have an automatic venting arrangement that will do this if the system suffers a power failure while running unattended.
Consequences of improper venting

Rapid inrushes of air into a turbopump (perhaps caused by an injudicious sudden venting of the vacuum system) can be hard on these devices. Such events are dealt with differently depending on whether the pump bearings are ball-types, or based on magnetic levitation. In the former case, no damage of immediate consequence is likely to result – only undue wear that can decrease the long-term life of the bearings. In the latter case, the magnetic suspension may not be able to cope with the loads on the turbine caused by the incoming air,
and the rotor will be forced to touch down on special emergency ball bearings (with “dry” oil-free lubrication) while automatic protection devices remove the power. Because the emergency bearings come up to speed very quickly, their life is short. Replacement would be necessary after two events of this kind [1]. Generally, the venting of turbopumps should be done in a controlled way, in accordance with the manufacturer’s recommendations. Procedures for operating turbopumps are discussed in Ref. [1].
7.2.2.3 Cryopumps

Cryopumps are a capture-type of high-vacuum pump that remove gases by condensing and adsorbing them on very cold surfaces. The cryogenic conditions needed for the operation of these devices (ranging from about 70 K to 10 K) are created by a special refrigerator (or “cold head”) that uses gaseous helium as its working fluid. In the usual configuration, the helium is compressed in a separate compressor unit, and delivered to, and retrieved from, the cold head/pump module through flexible metal hoses. Refrigeration is achieved by allowing the helium to expand within the cold head, while a small piston (the “displacer”) in the cold head slides back and forth to control the movement of the helium, and capture and release thermal energy from and to this gas. The compressor is a self-contained oil-lubricated device that is similar to the ones used in ordinary domestic refrigerators. Oil that has been introduced into the helium gas stream by the compressor is removed by filters before it can reach the pump. The compressor is normally water-cooled. Because cryopumps are based on gas capture, they do not require a backing pump, although some form of pump (often a sorption type) is needed to pre-evacuate the system to the point where the cryopump can begin effective operation. Their main advantage over some other types of high-vacuum pump is their intrinsically high degree of cleanliness. They are inherently incapable of contaminating the vacuum system with hydrocarbons, because no oil or grease ever comes into contact with the vacuum environment. Cryopumps are also relatively rugged devices. If they are given adequate maintenance, they can be considered to be reliable. On the other hand, a major disadvantage with cryopumps is that they do require a considerable amount of maintenance, when compared with most other types of high-vacuum pump.
The filter in the compressor unit that is responsible for removing oil from the helium (the oil adsorber) must be replaced every 6000–9000 h [1]. If this is not done, oil can reach the cold head, where it will freeze and cause a malfunction. The seals in the cold head will eventually wear out, and so these must be replaced roughly every 10 000 h. Somewhat surprisingly, the compressor itself is a very long-lived machine, which requires no maintenance. Since cryopumps cannot continue to condense and adsorb gases indefinitely, they must be periodically “regenerated.” This procedure consists of allowing the refrigerator/pump module to warm to room temperature, and pumping off the devolving gases with some type of primary pump. Other potential sources of trouble include cooling water failure, and leaks in the compressed helium circuit. Helium leaks may occur at the connectors on the ends of the flexible hoses, or at pressure relief valves [1]. Cryopumps will not tolerate the presence of dust or dirt in the vacuum system, as might be created by, e.g., a vacuum furnace.
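For planning purposes, the service intervals quoted above are easily converted into calendar time for a continuously running pump; the snippet below simply does that arithmetic.

```python
HOURS_PER_MONTH = 24 * 365.25 / 12   # ~730.5 h in an average month

def hours_to_months(service_hours):
    """Calendar months corresponding to a number of continuous running hours."""
    return service_hours / HOURS_PER_MONTH

# Intervals quoted above for cryopump maintenance, assuming continuous operation
for item, hours in [("oil adsorber (min)", 6000),
                    ("oil adsorber (max)", 9000),
                    ("cold-head seals", 10_000)]:
    print(f"{item}: {hours} h ~ {hours_to_months(hours):.1f} months")
```

So the adsorber falls due roughly every eight months to a year of continuous running, and the cold-head seals a little over a year, which is indeed a heavier schedule than most other high-vacuum pumps demand.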
Pressure-relief valves are normally installed in cryopumps (between the high vacuum and the atmosphere), in order to make sure that gases released during regeneration cannot cause harmful pressure buildups within the pump. Such valves are susceptible to leaks – see page 252. Cryopumps are very vulnerable to brief interruptions in power, or even dips in the supply voltage – especially if helium is being pumped [4]. The gas that evolves during the power interruption effectively causes a thermal short-circuit between the cold parts and the room temperature pump casing, thereby raising internal temperatures, and preventing further pumping. A period of more than a few minutes without power may be enough to cause difficulties if air is being pumped. The only way to correct the problem is to regenerate the pump and cool it back down again, which can take several hours. In order to prevent the gases that devolve from a cryopump following a power cut from entering the rest of the vacuum system, a fast-acting automatic isolation valve may be required [13]. The use of an uninterruptible power supply to keep the cryopump running is another way of dealing with this problem (see Section 3.6.3.4). Because of the reciprocating displacer and the compressor, cryopumps tend to produce substantial vibrations. This can be a problem for vibration-sensitive devices within the vacuum system itself, or even for sensitive instruments in the surrounding laboratory area. Procedures for using and maintaining cryopumps are discussed in Refs. [1] and [4].
7.2.2.4 Ion pumps

Introduction

Sputter-ion pumps make use of a high-voltage electric discharge in a magnetic field to remove gas particles. In the case of active gases, such as oxygen, the pumping occurs because of chemical combination (i.e. “gettering”) by active electrode materials (e.g. titanium) that have been deposited on surfaces by sputtering in the discharge. Inert gases, such as argon, are pumped by being ionized, accelerated within the discharge, and buried in the cathode. Like cryopumps, ion pumps are a “capture” type of high-vacuum pump. Hence, they do not require a backing pump, but only some form of roughing arrangement to produce the vacuum needed to begin operation.
Advantages

With regard to reliability, ion pumps have many virtues. They are relatively clean – there is no possibility of oil contamination. Moreover, ion pumps have no moving parts (except for the cooling fan in the high-voltage power supply), and require no cooling water. For these reasons, ion pumps are highly reliable, and also completely free of vibration. They are capable of running for long periods without routine maintenance (e.g. about 6 years while pumping at 10⁻⁴ Pa), aside from occasional inspection and cleaning of the high-voltage cable assembly and the high-voltage feedthrough [14]. (High-voltage cables and connectors are discussed in Chapter 12.) The high-voltage power supplies should also be cleaned of dust about every two years [15].
7.2 Vacuum pump matters
Ion pumps are extremely simple to operate – a feature that greatly reduces the chances of failure due to human error [16]. For the most part, ion pumps are also very robust devices, and difficult to damage. (However, the high-voltage feedthrough and external cable assembly are vulnerable, and should be treated with care.5 ) One of the most endearing features of ion pumps is their intrinsic harmlessness in the event of a mains power failure. If the power is cut, essentially nothing happens except that the pumping of inert gases ceases. Unlike cryopumps, previously pumped gases are not released back into the vacuum system, and there is no loss of vacuum [16]. Generally, only a relatively small rise in pressure will take place [13]. Unlike diffusion pumps, there are no oil backstreaming problems to worry about. Because of the active titanium still present in the pump, pumping of active gases will continue for a considerable time following the loss of power, depending on the pressure. Furthermore, an ion pump can be restarted immediately following restoration of the power. Since the power demands of ion pumps are relatively small (especially when the pressure is very low), it is relatively straightforward to use a small array of batteries as a source of power for the ion pump for the duration of the blackout. A simple power supply that automatically switches to battery operation when the power fails, permitting operation in this mode for up to several days, is described in Ref. [17].
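The feasibility of battery backup can be checked with rough arithmetic. The sketch below estimates the ampere-hour capacity needed to ride out a multi-day blackout; all figures (pump power, supply efficiency, battery voltage) are hypothetical illustrations, not taken from the text or from any vendor.

```python
# Rough sizing of a battery bank for an ion-pump supply during a blackout.
# All numbers are hypothetical illustrations, not vendor specifications.

def battery_capacity_ah(pump_power_w, supply_efficiency, battery_v, hours):
    """Ampere-hours drawn from a battery bank powering the pump supply."""
    input_power_w = pump_power_w / supply_efficiency
    return input_power_w * hours / battery_v

# At low pressure an ion pump may draw only a few watts, so even three days
# of operation needs only a modest battery:
ah_needed = battery_capacity_ah(pump_power_w=5.0, supply_efficiency=0.7,
                                battery_v=24.0, hours=72.0)
```

With these illustrative figures the requirement comes to a little over 20 A h, i.e. a small lead-acid battery would suffice, which is why the battery-backup approach mentioned above is practical.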
Limitations Ion pumps could be considered to be almost ideal, if it were not for their somewhat limited performance in several areas. Since ion pumps are capture devices, they cannot (unlike turbopumps, for example) pump a substantially unlimited quantity of gas. Unlike another type of capture pump – the cryopump – they have no equivalent of the relatively quick and straightforward “regeneration” operation used to release pumped gases. When a sufficient amount of gas has been pumped, the anode and walls of the pump must be cleaned of loosely adhering sputtered titanium, electric insulators may need cleaning or replacing, and the electrode plates may need replacement. These operations can involve considerable cost and labor, and in many cases will require returning the pump to the factory. Furthermore, ion pumps have a relatively low pumping speed, for a given mass and volume, when compared with other high-vacuum pumps [18]. Also, pumping speeds are pressure dependent, and cannot be uniquely defined independently of the particular composition of the gases being pumped [4]. Some gases are pumped very slowly. Argon, for example, which is present in the atmosphere at a concentration of about 1 % [1], is pumped at only 1–2 % of the speed of active gases when the most common type of ion pump (a “diode pump”) is being used [4]. Other relatively inert gases, including light hydrocarbons such as methane, are also pumped slowly [18]. One peculiar characteristic of ion pumps that can sometimes lead to significant problems is their tendency to release previously pumped gases at a later time. This phenomenon, called the “memory effect,” is particularly prominent when a diode pump is used to remove noble gases. Argon in particular causes a pumping instability, which involves the periodic re-emission of this gas from the cathodes inside the pump [4]. This can be a particular problem if the device is being used to pump a system with a leak, because of the relatively high concentration of argon in the atmosphere [1]. For the same reason, such pumps are not suitable for use in vacuum systems that are frequently cycled to atmospheric pressure. It is possible to significantly reduce noble-gas pumping problems by using an ion pump with a different electrode configuration from that of the usual “diode” type. One such device, called a “triode pump,” is sometimes referred to as a “noble gas version” of the ion pump [18]. Other types of noble gas ion pump are also available.
5 High-voltage cables with protective armor are available from suppliers of ion pumps. Such armor can provide cables with a high abrasion and/or crush resistance, depending on the armor type.
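The practical consequence of the low argon pumping speed follows from the steady-state relation p = Q/S (partial pressure = gas load divided by pumping speed). The leak rate and nominal pump speed below are hypothetical, but the 1 % argon fraction and the 1–2 % relative argon speed follow the figures quoted above.

```python
# Steady-state partial pressure p = Q / S for a chamber with a constant leak.
# Leak rate and nominal pump speed are hypothetical illustrative values.

def equilibrium_pressure_pa(gas_load_pa_l_s, pump_speed_l_s):
    return gas_load_pa_l_s / pump_speed_l_s

q_air = 1.0e-4              # total air leak, Pa*L/s (hypothetical)
q_argon = 0.01 * q_air      # argon is about 1 % of atmospheric air
s_active = 100.0            # speed for active gases, L/s (hypothetical)
s_argon = 0.015 * s_active  # diode pump: ~1-2 % of the active-gas speed

p_active = equilibrium_pressure_pa(q_air, s_active)
p_argon = equilibrium_pressure_pa(q_argon, s_argon)
# Although argon is only 1 % of the leak, its equilibrium partial pressure
# is comparable to that of all the other gases combined.
```

This is why a small air leak can leave a diode-pumped system dominated by argon, and why such pumps sit poorly with frequently vented chambers.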
Deterioration and failure modes Ageing of ion pumps takes place because of the movement of cathode material to other parts of the pump – namely, the anode and the inner walls of the device. This degradation is directly proportional to the total amount of gas that has been pumped. Such deterioration normally shows up in the form of difficult starting of the pump, and not by a reduction in the ultimate vacuum that can be achieved [18]. Another potential problem with ion pumps that have been used for a long time is excessive leakage currents due to the formation of metal whiskers on the cathode, or conducting deposits on insulators. Conducting whiskers cause leakage currents because their edges can be very sharp, which allows field emission to take place. Although this does not affect the basic operation of the pump, it can affect the ability to use the pump current (as displayed on the power supply) as an indicator of the pressure. This behavior can usually be eliminated by the application of a very high voltage across the electrodes, in a process called “hi-potting”. The presence of such potentials results in current densities in the whiskers that are normally enough to melt the asperities and eliminate the field emission. The presence of leakage across insulators is a more serious matter, since it can reduce or eliminate the ability of the pump to function. The presence of such conducting deposits can be established by measuring the resistance across the electrodes with an ohmmeter. The removal of these deposits usually requires that the pump be dismantled and rebuilt [19]. Although ion pumps can be considered to be clean with regard to an absence of oil, they can present a different contamination risk. Over time, the thick deposits of sputtered titanium that have formed on the anode and inner wall of the pump will flake off. The flaking can cause pressure bursts, and titanium flakes sometimes produce short circuits between the pump electrodes [1]. 
Furthermore, if the pump has been positioned without taking this titanium buildup and flaking into account, falling flakes may migrate to other parts of the vacuum system [20]. These can cause problems by, for example: outgassing, contaminating the sealing surfaces of valves, or short-circuiting vacuum gauges. Flaking titanium films are pyrophoric – i.e. they ignite spontaneously in air (see Ref. [16]). This is a concern mainly when such films are removed during cleaning. The hi-potting method discussed above is one of a number of folk-remedies that are sometimes employed to quickly eliminate ion-pump malfunctions due to wayward titanium whiskers or flakes. (Others include: hitting the side of the pump with a mallet to dislodge
loose flakes, or in the event of a complete short circuit across the electrodes, burning away the titanium using a low-voltage, high-current power supply.) Unfortunately, such remedies generally have only a temporary effect. In most cases, if flaking titanium is causing problems, a better long-term solution is to have the pump rebuilt. One potential difficulty that can occur occasionally under conditions of high humidity, over a period of years, is corrosion of the high-voltage feedthrough. This may eventually result in vacuum leaks [15,20], and possibly electrical leakage, arcing, and binding of the feedthrough to the cable connector [19]. Like most other high-voltage devices, ion pumps and their cables and power supplies should not be operated under conditions of high humidity – especially if condensation is a possibility. In some cases, it has been found desirable to install low-power heating collars on ion pump feedthroughs in order to exclude moisture [15].
7.2.2.5 Titanium sublimation and non-evaporable getter pumps Sublimation pumps Titanium sublimation pumps (TSPs) are used as auxiliary high-vacuum pumps to remove reactive gases such as oxygen from a vacuum system. They are a form of capture pump based on gettering, and will not remove inert gases. Hence, these devices are almost never used by themselves. Often, they are employed in combination with ion pumps. TSPs, which are similar in many ways to ion pumps, have the virtue of being extremely simple. One common arrangement consists of a titanium–molybdenum alloy filament that is heated by an electric current. Titanium evaporates (by sublimation) from the hot filament and lands on a nearby vacuum system wall or interior surface, which is often cooled by using liquid nitrogen or water. The freshly evaporated titanium is very reactive, and combines with active gases in the system to form solid compounds. The power supply that provides current to the filament is normally furnished with a timer, or some means of control based on a signal from a pressure gauge, so that the filament is heated only intermittently, as is needed to satisfy the prevailing pumping requirements. TSPs can be considered to be reliable devices [21]. They are also relatively clean – in the sense that they cannot cause oil contamination. Furthermore, they are completely free of vibration. These pumps are not only intrinsically forgiving of power failures, but will continue to pump for a considerable time (depending on the pressure) following a loss of power. With sufficient use, the filament will be consumed and must be replaced. Usually the pump will be provided with several filaments, and some means of switching the current between them, in order to extend the replacement interval. The use of redundant filaments also makes it possible to replace them at opportune moments, and not as a result of immediate necessity. The installation of new filaments is very straightforward. 
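The intermittent filament control described above amounts to a simple threshold rule on the gauge reading. The sketch below is hypothetical (class name, set-points, and hysteresis values are invented for illustration); real controllers add timers, interlocks, and current ramping.

```python
# Hypothetical sketch of intermittent TSP filament control: heat the filament
# only while the chamber pressure calls for fresh titanium. A little
# hysteresis prevents rapid on/off cycling near the threshold.

class TspController:
    def __init__(self, on_threshold_pa=1e-5, off_threshold_pa=5e-6):
        self.on_threshold_pa = on_threshold_pa    # start subliming above this
        self.off_threshold_pa = off_threshold_pa  # stop once pressure recovers
        self.filament_on = False

    def update(self, pressure_pa):
        """Return the filament state for the latest gauge reading."""
        if self.filament_on:
            self.filament_on = pressure_pa > self.off_threshold_pa
        else:
            self.filament_on = pressure_pa > self.on_threshold_pa
        return self.filament_on
```

Fed a falling pressure sequence, this turns the filament on above 1 × 10−5 Pa and off again only once the pressure has dropped below 5 × 10−6 Pa, mimicking the gauge-driven duty cycling described in the text.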
After enough titanium has accumulated on the walls of the pump, it will begin to peel and flake off, and the walls will have to be cleaned. This can be a relatively simple task, depending on the design. (The flaking titanium may present a contamination problem which is similar to that encountered when using ion pumps.) During cleaning, it is important to keep in mind that the flaking titanium films are pyrophoric [22]. It has been claimed that
peeling occurs readily if the titanium is deposited on smooth surfaces, but is less likely if the surfaces have been roughened, e.g. by sand blasting [22]. TSPs should be oriented in such a way that no direct line of sight path exists between the filaments and sensitive items such as valves, pumps, and electric insulators. Sublimed titanium will coat surfaces directly exposed to the filaments and cause, e.g., sealing or pumping problems [23]. As discussed in Section 8.4, water cooling is often a source of problems. This may not be needed if the vacuum chamber pressure is sufficiently low during pumping (<10−4 Pa), and if the pump design is suitable [21].
Non-evaporable getter pumps Another auxiliary high-vacuum pump that uses chemical reactions to remove active gases from a vacuum space is the “non-evaporable getter” or “NEG” design. This device consists of a block of porous reactive metal, such as the 70/24.6/5.4 Zr/V/Fe alloy, which contains an electric heater. After the material is “activated” by warming it to an appropriate temperature for a specified period, it will continue to pump the vacuum space without any further intervention. Activation must be carried out only when the pressure in the vacuum chamber is sufficiently low (e.g. 10−1 –10−2 Pa). The getter material must be re-activated after it has been exposed to air at atmospheric pressure. Unlike titanium sublimation pumps, which consist of just a thin layer of active material, NEG pumps are active through a considerable volume. After striking the surface of the getter alloy, gas particles travel into its interior by diffusion. The use of liquid nitrogen or water cooling is not needed. These pumps are very widely used in, for example, particle accelerators and experimental fusion reactors. Non-evaporable getter pumps have reliability advantages that are comparable or superior to those of titanium sublimation pumps. They are very clean, contain no moving parts and generate no vibrations, are robust and difficult to damage, and require essentially no maintenance. Unlike titanium sublimation pumps, NEG pumps with sintered getter alloy do not shed material after a long period of use. (However, some types of getter, and in particular non-sintered types, have a tendency to produce dust, whether they are old or new [22].) One does not have to worry about evaporated material landing on sensitive items in the vacuum system. Nor is there a need to regularly remove built-up deposits of getter alloy. 
Pumps using certain getter materials, such as the 70/24.6/5.4 Zr/V/Fe alloy, are effectively immune to power failures since, once the getter has been activated, it will continue to pump for a long time before reactivation is needed. Such getters can last very much longer than the filaments used in titanium sublimation pumps [9]. Once the getter material is completely reacted, it must be replaced. Pumps using non-evaporable getters can be particularly vulnerable to damage during the activation period. Any fault or error that causes the pump to be exposed to air at a sufficiently high pressure while it is undergoing activation will cause the getter material to burn up, and may necessitate its replacement. For example, in some pump designs very large currents may be required to heat the getter to the required temperature. If excessive electrical heating of the high-current feedthroughs should occur during this period (possibly because of loose electrical contacts, or inadequate derating of the feedthrough) a leak may occur, which can
lead to this type of failure. Such problems can be avoided by using a residual-gas analyzer during activation to monitor the vacuum for an excess of argon, which is not removed by the pump. If such a condition is detected, a signal can be sent to switch off the heater power supply [15]. Some NEG pumps use heaters that do not require large currents. Furthermore, it is often possible to activate the getters using heaters on the outside of the vacuum system, thereby avoiding the need for any electric feedthroughs. Indeed, in the case of some UHV systems that contain NEG pumps, the bakeout process itself is used to activate the getter. Another potential cause of reliability problems with NEG pumps is halogens, and particularly fluorine, in the vacuum system, which may be present in the residues of brazing fluxes. The presence of this element may lead to failure of an NEG pump, since it causes poisoning by reacting with the getter material [24].
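The argon-watchdog interlock mentioned above can be expressed as a few lines of control logic. Everything here (function names, the threshold, and the reading sequence) is a hypothetical sketch, not a real RGA interface.

```python
# During NEG activation the getter removes active gases, so a rising argon
# signal on a residual-gas analyzer suggests an air leak. The heater must
# then be shut off before the hot getter burns up. Threshold is hypothetical.

ARGON_LIMIT_PA = 1e-4

def heater_permitted(argon_partial_pa, limit_pa=ARGON_LIMIT_PA):
    return argon_partial_pa < limit_pa

def run_activation(argon_readings):
    """Latch the heater off at the first excessive argon reading."""
    for p_ar in argon_readings:
        if not heater_permitted(p_ar):
            return False   # heater off; activation aborted
    return True            # activation completed with heater on

ok = run_activation([2e-6, 3e-6, 5e-4])   # a leak appears at the third reading
```

The essential design point is the latch: once an argon excursion is seen, the heater stays off until the fault is diagnosed, since re-exposing a hot getter to air destroys it.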
7.3 Vacuum gauges 7.3.1 General points Contamination, in the form of, e.g., dust, dirt, and oil, is an important problem for most types of vacuum gauge [1]. If such gauges are not oriented properly (i.e. vertically and open at the bottom), they may be vulnerable to contamination or damage from falling debris and substances, such as filings, metal flakes, small screws, or condensates [25]. The result could be intermittent operation or inaccurate readings. Also, the hot incandescent filaments of some devices can bend and cause short circuits if the gauge is not aligned in this way. In diffusion pumped systems, horizontally mounted gauges often collect liquid diffusion pump oil [26]. Although filters are available to protect gauges from particulates and small objects, these will have an adverse effect on the accuracy of the gauge. It is also essential to make sure that gauges are shielded from beams of contaminating substances, such as titanium emitted by titanium-sublimation pump filaments, or materials evaporated in a thin-film deposition system [1]. Delicate internal members in gauges (e.g. filaments in Bayard–Alpert devices) can be susceptible to fatigue failure. Hence, they should not be placed in a region of the vacuum system that is subject to vibration, if possible [25]. When a sensor is not attached to a vacuum system, it is important not to leave it lying unprotected on a work surface [1]. The internal surfaces of these devices will collect dust, swarf, and filings that can cause outgassing and erratic operation. This problem is especially significant in the case of Penning gauges, which contain magnets that attract ferromagnetic particles. Gauges that make use of an electrical discharge are vulnerable to a peculiar form of degradation, if they are used in diffusion-pumped systems that employ silicone oil as the pump fluid.
As discussed on page 98, when silicones are subjected to electron bombardment, they tend to break down and form insulating deposits on nearby electrodes [1]. Such deposits
can prevent the proper operation of the device. For example, Bayard–Alpert ionization gauges will give anomalously low-pressure readings in their presence [26].
7.3.2 Pirani and thermocouple gauges Vacuum sensors based on the conduction of heat by gases are often used to measure high pressures – above about 10−2 Pa and 1 Pa for Pirani and thermocouple types, respectively [1]. These gauges are unusually sensitive to contamination. For example, in the case of Pirani gauges, vapors of oil or other organic substances may cause pressure readings to be excessively high. Neither Pirani nor thermocouple sensors are suitable for use in contaminating environments [27]. Contaminated gauges can usually be cleaned with suitable solvents. This must be done carefully, so as not to damage their delicate filaments [1]. The manufacturer should be consulted about appropriate techniques. It is also important to keep devices that can raise the local ambient temperature, such as furnaces or stoves, away from thermal conduction gauges [25]. A potential problem with thermal conduction gauges concerns their sensitivity to gas composition. For heavy gases (such as argon) this may result in low pressure readings when the actual pressure is very high (this can lead to explosions due to overpressures in vacuum chambers). In the case of light gases (such as helium) the indicated pressure may be far above the actual one (this situation can lead to implosions) [27]. To prevent such accidents, it is desirable to use a pressure-relief valve or rupture disc to prevent the establishment of hazardous pressures in the vacuum system, and/or to employ a sensor that is not sensitive to gas composition, such as a capacitance manometer. Thermal conduction sensors can be considered to be relatively rugged with regard to sudden exposures to atmospheric pressure – as long as the inrushing air is not directed directly into the gauge head, which can cause damage to the sensing elements [1].
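To first order, the gas-composition dependence is handled with multiplicative correction factors applied to the indicated pressure. The factor values below are placeholders chosen only to show the direction of the errors described above (heavy gases under-read, light gases over-read); real values must come from the gauge manual.

```python
# First-order gas correction for a thermal-conductivity gauge reading.
# Factor values are hypothetical placeholders, not calibration data.

GAS_FACTOR = {
    "nitrogen": 1.0,   # gauges are usually calibrated for N2/air
    "argon": 1.6,      # heavy gas: gauge reads low (over-pressure hazard)
    "helium": 0.8,     # light gas: gauge reads high (implosion hazard)
}

def true_pressure_pa(indicated_pa, gas):
    """Scale the indicated pressure by the gas correction factor."""
    return indicated_pa * GAS_FACTOR[gas]
```

Note that at the upper end of the gauge's range the relationship becomes strongly nonlinear, so a single factor is only a rough guide; this is precisely why the text recommends a relief valve or a composition-independent sensor for safety.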
7.3.3 Capacitance manometers Very accurate pressure measurements in the range from atmospheric pressure to 10−4 Pa can be made using a “capacitance manometer” [1]. These devices employ an electro-mechanical method of pressure determination. A flexible diaphragm is sealed in place between a chamber containing a fixed vacuum on one side (perhaps created when the device is manufactured), and the vacuum to be measured on the other. The deflection of this diaphragm, which depends on the pressure difference between the two sides, is measured using a capacitive technique. These devices are relatively robust, long-lived, and are unusual in being almost completely insensitive to gas composition. Capacitance manometers are also relatively insensitive to contamination (except possibly heavy contaminants, such as particulates and oil droplets) and attack by corrosive substances [6]. Measurements can be affected by vibrations, and are especially sensitive to changes in ambient temperature [27]. Although ordinary “two-sided capacitance manometers” are difficult to clean, “single-sided” versions can be cleaned using chemical methods [1].
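The sensitivity of the capacitive readout can be illustrated with the ideal parallel-plate formula C = ε0·A/d. The geometry below is hypothetical; real manometers use tensioned diaphragms and more elaborate electrode arrangements.

```python
# Ideal parallel-plate model of a capacitance-manometer readout.
# Plate area and gap are hypothetical illustrative dimensions.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance_f(area_m2, gap_m):
    return EPS0 * area_m2 / gap_m

c_rest = plate_capacitance_f(1.0e-4, 100e-6)      # 1 cm^2 plate, 100 um gap
c_deflected = plate_capacitance_f(1.0e-4, 99e-6)  # diaphragm moves by 1 um
relative_change = (c_deflected - c_rest) / c_rest  # about 1 %
```

A micrometre of diaphragm motion produces a capacitance change of about one percent, which modern bridge circuits resolve easily; the same geometric sensitivity explains why vibration and thermal expansion of the housing disturb the reading.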
7.3.4 Penning gauges Penning gauges make use of a discharge from a cold cathode in a magnetic field to measure pressure in the range from 1 Pa to 10−7 Pa [28].6 Like other gauges based on vacuum electrical discharges, Penning gauges are sensitive to electrode surface condition, and therefore to contamination. Since contaminants on a Penning gauge’s electrodes will reduce the discharge current, these will cause the gauge to indicate a pressure that is too low [25]. For instance, if oil vapors enter a Penning gauge, the electric discharge will crack the oil molecules, and cause carbonaceous deposits to build up on the cathode [1]. However, unlike other gauges in this class (such as hot cathode ionization devices) Penning gauges are relatively rugged and long-lived. There is no filament to break or burn out, and the electrodes are very robust. Such gauges are resistant to damage from air inrushes and vibrations [25]. The electrodes can be easily cleaned by gentle abrasion with little fear of causing harm [1]. Penning gauges are reliant on initial ionization of the gas particles in the vacuum by natural background radiation to start the discharge. While at high pressures, the time delay between the switching-on of the gauge and the start of the discharge is normally only a fraction of a second, at low pressures it can take much longer – e.g. up to several minutes at 10−4 Pa, and hours or even days at 10−8 Pa [27]. While this delay falls into the “nuisance” category of reliability issues at higher pressures, it can be completely unacceptable at the lowest ones. Normally, one avoids the problem by switching the gauge on when the pressure is high – perhaps around 1 Pa. However, this is not always possible. In order to overcome this difficulty, some manufacturers make Penning gauges with a built-in ionizing radiation source. This usually takes the form of an insert with a very small quantity of radioactive 63 Ni, which emits high-energy electrons (beta particles) [1]. 
Another solution to the problem involves equipping the gauge with a small ultraviolet light source to provide the needed ionizing radiation [27]. If a Penning gauge is operated at high pressures (above about 1 Pa) for long periods, sputtering will take place inside the gauge, which will cause metal deposits to build up on insulating surfaces [1]. These deposits may affect operation of the gauge by allowing leakage currents to pass between the electrodes.
7.3.5 Bayard–Alpert ionization gauges Bayard–Alpert gauges are “hot cathode” ionization devices (containing a heated filament) that are used to measure pressures ranging from about 10−1 Pa to 10−8 Pa [1]. Like other ionization gauges, Bayard–Alpert types are susceptible to contamination (some of these issues have been discussed on page 207). In practice, contamination may not be as great a problem as might be supposed, since these gauges are often used in apparatus that must be kept scrupulously clean for other reasons (e.g. UHV systems). Nevertheless, the electrodes in Bayard–Alpert gauges are delicate and the filaments have a limited life. The latter are susceptible to damage if they are exposed to atmospheric pressure while operating. Filaments made of tungsten are liable to be destroyed instantly upon exposure to the atmosphere when they are hot. However, “non-burnout” filaments made of thoriated iridium are available if this might be a problem [4]. The difficulty with the latter type of filament is that, unlike tungsten, it is susceptible to poisoning when exposed to some hydrocarbon or silicone vapors, such as those resulting from the backstreaming of diffusion pump oil, and also to halogens [4,19,29]. (See also the general comments about silicone contamination on page 98.) Some ionization gauges are provided with redundant filaments, and their controllers have a provision for automatically switching over to the other filament if the active one breaks. Leakage currents through deposits of contamination on internal insulators are a common source of inaccurate readings [1]. These may be caused by external sources, or can be due to evaporation of material from the filament. Because of their fragility, Bayard–Alpert gauges cannot be easily cleaned using the straightforward mechanical method employed with Penning gauges. Chemical cleaning may or may not be possible or effective, depending on the composition of the filament. Sometimes replacement of the gauge is the only option if it has become contaminated. A method for partly removing mechanical-pump oil and silicone diffusion-pump fluid contamination from ionization gauges (enough to allow these to function to a limited degree in some cases) has been described [26]. (NB: Ultrasonic cleaning can cause ionization gauge filaments to break – see page 104.)
6 Reference [1] gives a range of 1 Pa to 10−5 Pa.
7.4 Other issues 7.4.1 Human error and manual valve operations In vacuum work, ill-considered changes to the state of a valve can cause major failures, which may require expensive repairs. For instance, vacuum systems that are evacuated by diffusion pumps are very vulnerable to such problems. It is usually a good idea to pause and think about the consequences of operating a valve in a high- or ultrahigh-vacuum system, unless this is done as part of a routine procedure. The use of graphic panels indicating the layout of the vacuum system, and the position and function of valves, can be very helpful in preventing mistakes. (See also the discussions on human error in the use of valves on page 251, and on human error generally in Chapter 1.) In some cases, the installation of an automatic control system for operating valves and pumps is worth considering. (As mentioned earlier, this is difficult to do in the case of UHV apparatus, because bakable metal valves are usually manually operated.) Programmable logic controllers (PLCs), rather than ordinary computers, are often used for this purpose (see page 491). Solenoid- or pneumatic-valves are generally employed instead of manually operated ones, although motor-operated valves can also be used. Large commercially made high-vacuum systems are frequently provided with automatic control arrangements. These issues are discussed in more detail in Refs. [9] and [10].
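The kind of rule a PLC enforces can be made concrete with a couple of interlock predicates. This is a hypothetical simplification of an interlock table for a diffusion-pumped system (function names, thresholds, and states are invented), not a design to be copied.

```python
# Hypothetical valve interlocks for a diffusion-pumped system. A PLC
# evaluates rules like these before honoring any valve request.

def may_open_hivac_valve(backing_pressure_pa, pump_at_temperature):
    """Open the main valve only with a rough backing vacuum and a hot pump."""
    return backing_pressure_pa < 10.0 and pump_at_temperature

def may_open_vent_valve(hivac_valve_open):
    """Never vent the chamber while it is open to the diffusion pump."""
    return not hivac_valve_open
```

Encoding the rules this way means the "pause and think" step is performed by the controller on every request, which is exactly the protection against ill-considered manual valve operations that the text describes.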
7.4.2 Selection of bakeout temperatures for UHV systems In the early years of ultrahigh-vacuum work, relatively high temperatures (e.g. 450 °C) were usually used during bakeout procedures [16]. This was done because such temperatures were already employed to outgas electronic vacuum devices (e.g. microwave tubes), which were expected to maintain a good vacuum throughout their working life in the absence of pumping. However, over the years it was realized that vacuum systems that were continuously pumped could be baked at much lower temperatures (e.g. 150 °C) without loss of function, as long as temperatures were uniform over the entire system. Hence, in recent times, devices and parts that are intended for use in UHV systems have often been designed for the lower bakeout temperatures now considered as standard. Nevertheless, one can still find books and articles on vacuum technique that recommend the use of the higher values. Because of the potential for such conflicts to cause errors, it is essential to make sure that one knows exactly what is in a UHV system before proceeding with the bakeout [30]. Making a mistake at this point could cause major damage to some vacuum components. For example, certain wire insulations may be limited to maximum temperatures that can vary from 150 °C to 220 °C. Some recommended maximum baking temperatures for various devices are as follows (from Ref. [1] unless otherwise noted).
(a) Copper Conflat seals: 450 °C
(b) Ion pump without magnet or high voltage lead: 450 °C
(c) Glass viewports: 400 °C [31]
(d) Glass ion gauges: 400 °C
(e) All metal valves with copper pad seal: 300–450 °C
(f) Most UHV electrical feedthroughs: 300 °C [32]
(g) Ion pump with magnets: 250 °C
(h) Ion pump with PTFE high voltage lead: 250 °C
(i) UHV gate valve with Viton O-ring: 200 °C or 150 °C (depending on whether the valve is respectively open or closed [4])
(j) Purpose-made UHV stepper motor: 150 °C [33]
(k) Turbomolecular pump: 100–140 °C [34]
(l) Cryopump: 100 °C [4]
(m) Diffusion pump: (cannot be baked) [4]
It is, of course, prudent to confirm the maximum bakeout temperature of any vacuum component with the manufacturer. In cases where items cannot be baked to a sufficiently high temperature, some benefit may be obtained by using a special ultraviolet lamp, which is placed inside the vacuum chamber [11]. This device emits radiation that causes water molecules to desorb from surfaces. It is not as effective as a normal high-temperature bakeout, however. If a vacuum system has been contaminated with organic substances, and cleaning with wet chemicals is not feasible, baking at temperatures of 400–450 °C is often recommended (see, e.g., Ref. [1]). However, in-situ “dry” cleaning methods are available to remove
these substances, which do not involve the use of such high temperatures [35]. One such method, developed for use in particle accelerators and experimental fusion reactors, involves backfilling the vacuum chamber with an argon/oxygen mixture at low pressure. An electric “glow discharge” is then produced in the vacuum chamber between a central anode and the walls of the chamber, which act as the cathode. The temperatures involved in this process (which includes a preliminary vacuum bake of the normal kind) do not exceed 200 °C. Another technique, called “reactive gas cleaning”, involves no electric discharge. This procedure consists of backfilling the chamber with a reactive gas such as nitric oxide (NO), and heating the system to 200 °C. Extensive discussions of UHV bakeout techniques can be found in Refs. [16] and [35].
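The rule implied by the list of component limits, namely that the whole system can only be baked as hot as its most fragile component allows, reduces to taking a minimum over the installed parts. A hedged sketch (limits follow the figures quoted in the text; always confirm each one with the manufacturer):

```python
# The safe bakeout temperature of a UHV system is the minimum of the
# individual component limits. Values follow the list in the text;
# confirm each one with the manufacturer before baking.

COMPONENT_LIMIT_C = {
    "copper Conflat seals": 450,
    "glass viewports": 400,
    "ion pump with magnets": 250,
    "Viton-sealed gate valve (closed)": 150,
}

def max_bakeout_c(installed_components):
    return min(COMPONENT_LIMIT_C[c] for c in installed_components)
```

Adding a single Viton-sealed valve, for instance, drops the permissible bakeout of the entire system from 250 °C to 150 °C, which is why an accurate inventory of the system must precede any bakeout.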
7.4.3 Cooling of electronics in a vacuum Usually, the best place to put electronic devices is outside a vacuum system, in the ambient environment. However, sometimes it is desirable to locate electronic circuits, or other electrical devices such as electric motors, within the vacuum. One problem with this approach is that the main process by which heat is normally removed from electronics – air convection – is non-existent in a vacuum. Since the escape of heat by radiation is also generally very small at typical operating temperatures, the only remaining path for heat to escape is by conduction through solid objects such as component leads. Unless the devices in question are designed specifically with this in mind, such thermal conduction will also generally be of very limited effectiveness in removing heat. As a result, electronic circuits that are directly exposed to the vacuum tend to overheat. Active devices, such as power transistors and integrated circuits, dissipating more than about 2 W, exhibit particularly poor reliability under these conditions [11]. Several methods can be used to eliminate such problems. Perhaps the simplest is to enclose the circuit in a vacuum-tight gas-filled container, and to thermally anchor the latter to the wall of the vacuum system [11]. Helium is a useful filler for this purpose, because of its very high thermal conductivity when compared with other gases. Thermal anchoring may be done by directly attaching the container to the wall, by using a high-thermal-conductivity member (such as a copper rod) to connect the two, or by using a “heat pipe” to transport the thermal energy. (The latter is a very simple device, with no moving parts, which can have a very much larger effective thermal conductivity than any solid material.) One problem with placing the circuit in a completely sealed enclosure is that a leak can occur which may go undiscovered, thereby allowing the escape of the filler gas.
This possibility can be avoided by connecting the interior of the container to the outside of the vacuum system with a sealed tube, which allows the filler gas to be conveniently inserted, and the container pressure to be monitored afterwards. A difficulty with relying on thermal conduction between two solid surfaces in a vacuum that have been clamped together is that heat transport across the interface will generally be very poor [16]. This will be the situation unless the clamping forces are large enough to cause distortion of the surfaces, or there is a soft medium between the surfaces to maximize the true area of solid contact. (See also the discussion on thermal contact in cryogenic systems on page 296). Another approach that is sometimes employed is to use a flowing liquid coolant such as water to carry the heat away from the enclosure. However, from many points of view, a better
way of removing heat from electronics in a vacuum is to use a variation on the standard forced-air cooling technique. The circuit is placed inside a vacuum-sealed enclosure, as above, and two small-diameter metal tubes are connected between the inside of the enclosure and the outside of the vacuum system. A flow of gas from a compressed gas source (e.g. dry and filtered air from a compressor) is then passed through the enclosure via the tubes. This can provide enough cooling for many purposes, without the serious reliability problems that may be associated with other methods (e.g. water leaks in the vacuum system). It also has the advantage of conveniently bringing all the components directly in contact with the cooling medium. This would generally not be practical with water cooling, unless a heat sink was employed. An example of the use of this method in a non-electronic application can be found in Ref. [36], and references therein.
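The cooling capacity of such a gas flow can be estimated from the mass flow rate and the specific heat of the gas, via Q = ṁ·c_p·ΔT. The sketch below illustrates the arithmetic; the flow rate and allowed temperature rise are illustrative assumptions, not values from the text.

```python
# Rough estimate of the heat that a gas flow can carry away from an
# enclosure in a vacuum, using Q = m_dot * c_p * delta_T.  The flow
# rate and temperature rise below are illustrative assumptions.

RHO_AIR = 1.2    # kg/m^3, density of air near room conditions
CP_AIR = 1005.0  # J/(kg K), specific heat of air at constant pressure

def gas_cooling_power(flow_lpm, delta_t_kelvin):
    """Heat removed (W) by an air stream of `flow_lpm` litres/minute
    that warms by `delta_t_kelvin` while passing through the enclosure."""
    m_dot = RHO_AIR * (flow_lpm / 1000.0) / 60.0  # mass flow rate, kg/s
    return m_dot * CP_AIR * delta_t_kelvin

# A modest flow of 10 L/min warming by 20 K removes a few watts --
# comparable to the ~2 W dissipation limit quoted for bare devices.
print(round(gas_cooling_power(10.0, 20.0), 2))
```

The estimate scales linearly with flow rate, so a larger compressor or a gas with a higher heat capacity extends the method to higher dissipations.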
Further reading

Reference [1] is a particularly good source of practical information on the selection, operation, troubleshooting, and maintenance of vacuum devices and equipment, and is recommended for laboratories that make significant use of vacuum technique. Other helpful sources of information on these subjects are Refs. [4] and [37]. Some useful guidelines on purchasing commercial vacuum systems, including a list of mistakes commonly made when designing and specifying them, are provided in Ref. [38].
Summary of some important points

7.2.1 Primary pumps

(1) A major concern with vacuum pumps in general (particularly mechanical roughing pumps) is the possibility of contamination to the rest of the vacuum system – especially oil contamination.
(2) In practice, the foreline traps that are often used to protect vacuum systems from oil contamination from oil-sealed mechanical pumps are rarely inspected or replaced. A more dependable method of preventing such contamination is to leak dry nitrogen into the foreline.
(3) It is essential to ensure that oil-sealed mechanical pumps are equipped with safety valves to prevent accidental oil movement, or "suck-back", into the rest of the vacuum system following a power cut.
(4) Almost all problems with oil-sealed roughing pumps (e.g. low pumping speeds) are caused by oil degradation, and the failure to change the oil at regular intervals.
(5) The oil in an oil-sealed mechanical pump should be changed if it becomes discolored (going from light-brown to dark-brown or black, or perhaps taking on a milky appearance).
(6) Oil-sealed mechanical pumps can be considered to be very robust and reliable devices, which usually degrade gracefully and are forgiving of human error.
(7) Dry mechanical pumps are very useful in situations where it is essential to completely avoid oil contamination, but are generally not particularly robust or reliable in comparison with oil-sealed types.
(8) Scroll-type dry pumps are very vulnerable to damage by abrasive particulates.
(9) Neither scroll nor diaphragm dry pumps tolerate a lack of routine maintenance.
7.2.2 High-vacuum pumps

(1) Diffusion pumps can be considered to be intrinsically very robust and reliable. However, they are dependent on auxiliary devices and services (such as backing pumps, cooling water, and liquid nitrogen) during operation that may not be reliable.
(2) In the absence of proper operating procedures or automatic protection devices, diffusion pumps pose severe contamination risks. Simple operator errors or system malfunctions can result in the complete contamination of a vacuum system with pump oil.
(3) When using a diffusion pump, the foreline pressure must never be allowed to rise above the "critical backing pressure" – otherwise diffusion pump oil will migrate into the vacuum system.
(4) The formation of hard-to-remove carbonaceous deposits inside a diffusion pump can occur if certain types of pump oil are exposed to air while hot, or if the oil is overheated. The latter can take place if the cooling-water supply fails, or if the oil level is too low.
(5) Turbopumps are very susceptible to catastrophic damage to the rotating blades caused by the entry of objects or debris. They are not robust devices.
(6) However, if treated with reasonable care, turbopumps are (in practice) very reliable, will last a long time, and present virtually no risk of contamination.
(7) As with diffusion pumps, turbopumps are dependent on a backing pump during operation.
(8) Cryopumps are generally robust and reliable devices, but must be regularly regenerated and periodically maintained.
(9) Cryopumps do not need backing pumps, but are dependent on water cooling, and are vulnerable to even brief power outages or dips in the supply voltage.
(10) Ion pumps are the most reliable type of high-vacuum pump. They are robust, simple to operate, will last a long time without maintenance, and produce no oil contamination. Furthermore, they do not require a backing pump, and need no services other than electric power.
(11) Diode ion pumps, which are the most common type, are vulnerable to pumping instabilities when used with inert gases – especially argon. Special ion pumps, such as those with triode configurations, are able to cope with these gases.
7.3 Vacuum gauges

(1) The greatest threat to vacuum gauges in general is posed by contamination and debris, such as dust, filings, small loose parts, and pump oil. To minimize the risk, gauges should always be mounted vertically in a vacuum system, and open at the bottom.
(2) Thermal conductivity gauges such as the Pirani and thermocouple types are used to measure pressures above about 10−2 Pa and 1 Pa, respectively.
(3) Such gauges are particularly sensitive to contaminants, and are not suitable for use in vacuum environments containing these.
(4) Capacitance manometers are very accurate gauges, which are capable of measuring pressures from atmospheric to 10−4 Pa.
(5) These gauges are relatively robust, insensitive to light contamination, and resistant to attack by corrosive substances.
(6) Penning gauges are a robust and long-lived type of high-vacuum gauge, which are based on an electric discharge from a cold cathode.
(7) Like all gauges based on discharges, Penning gauges are sensitive to contamination, but are relatively easy to clean.
(8) Bayard–Alpert ionization gauges measure pressures in the high- and ultrahigh-vacuum ranges by making use of an electric discharge involving a "hot cathode" (a heated filament).
(9) These gauges are not particularly robust, and are sensitive to contamination. Furthermore, the filaments have short lives, and are sensitive to damage by vibrations and physical impact.
7.4 Other issues

(1) The incorrect operation of manual valves in high- and ultrahigh-vacuum systems can have very harmful and expensive consequences. One should generally pause and think before operating such devices, unless this is part of a routine procedure.
(2) There is a wide variation in the maximum temperatures that can be withstood by UHV devices during bakeout. It is essential to know the highest temperature that can be tolerated by the most sensitive items before baking a UHV system.
(3) The use of special in-situ cleaning methods to remove organic contaminants can reduce the maximum temperatures that are needed to prepare UHV systems for operation.
(4) Electronic circuits in a vacuum environment tend to overheat. Active devices that dissipate more than about 2 W are especially problematic.
(5) A number of methods are available to improve the cooling of in-vacuum electronic devices.
(6) In making provisions for cooling, it should be kept in mind that the transport of heat between two rigid objects that are clamped together in a vacuum will generally be very poor, unless provisions are made to maximize the true area of solid contact.
References

1. N. S. Harris, Modern Vacuum Practice, 3rd edn, N. S. Harris, 2005. www.modernvacuumpractice.com (First edition published by McGraw-Hill, 1989).
2. BOC Edwards Vacuum Products 2000 (catalog), BOC Edwards, Manor Royal, Crawley, West Sussex RH10 2LW, UK. www.bocedwards.com
3. R. H. Alderson, Design of the Electron Microscope Laboratory, North-Holland, 1975.
4. J. F. O’Hanlon, A User’s Guide to Vacuum Technology, 2nd edn, John Wiley and Sons, 1989.
5. M. T. Postek, Scanning 18, 269 (1996).
6. M. H. Hablanian, High-Vacuum Technology: a Practical Guide, 2nd edn, Marcel Dekker, 1997.
7. M. Böhnert, O. Hensler, D. Hoppe, and K. Zapfe, Oil-free pump stations for pumping of the superconducting cavities of the TESLA Test Facility, Proceedings of the Tenth Workshop on RF Superconductivity, September 6–11, 2001 (SRF2001). http://conference.kek.jp/SRF2001/
8. N. T. M. Dennis, in Foundations of Vacuum Science and Technology, J. M. Lafferty (ed.), John Wiley & Sons, Inc., 1998.
9. G. F. Weston, Ultrahigh Vacuum Practice, Butterworths, 1985.
10. M. H. Hablanian, in Methods of Experimental Physics – Volume 14: Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, 1979.
11. J. H. Moore, C. C. Davis, M. A. Coplan, and S. C. Greer, Building Scientific Apparatus, 3rd edn, Westview Press, 2002.
12. G. Osterstrom, in Methods of Experimental Physics – Volume 14: Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, 1979.
13. J. H. Singleton, in Handbook of Vacuum Science and Technology, D. M. Hoffman, B. Singh, and J. H. Thomas, III (eds.), Academic Press, 1998.
14. N. S. Harris, Modern Vacuum Practice, 1st edn, McGraw-Hill, 1989.
15. P. M. Strubin and J.-P. Bojon, Proceedings of the 1995 Particle Accelerator Conference (Cat. No. 95CH35843), IEEE, 3, 2014 (1995).
16. L. T. Lamont, Jr., in Methods of Experimental Physics – Volume 14: Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, 1979.
17. E. Taylor, Rev. Sci. Instrum. 49, 1494 (1978).
18. W. M. Brubaker, in Methods of Experimental Physics – Volume 14: Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, 1979.
19. Duniway Stockroom Corp., 1305 Space Park Way, Mountain View, CA, USA. www.duniway.com
20. H. Winick, Synchrotron Radiation Sources: a Primer, World Scientific, 1995.
21. D. J. Harra, in Methods of Experimental Physics – Volume 14: Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, 1979.
22. B. Ferrario, in Foundations of Vacuum Science and Technology, J. M. Lafferty (ed.), John Wiley & Sons, Inc., 1998.
23. H. J. Lewandowski, D. M. Harber, D. L. Whitaker, and E. A. Cornell, J. Low Temp. Phys. 132, 309 (2003).
24. C. Liu and J. Noonan, Advanced Photon Source accelerator ultrahigh vacuum guide – ANL/APS/TB-16, Advanced Photon Source, Argonne National Laboratory, 9700 South Cass Ave., Argonne, IL, USA. www.aps.anl.gov/Facility/Technical Publications/techbulletins/index.html
25. Leybold Vacuum Products and Reference Book 2003/2004, Leybold Vakuum GmbH, Bonner Strasse 498, D-50968 Cologne, Germany. www.leyboldvac.de
26. P. E. Siska, Rev. Sci. Instrum. 68, 1902 (1997).
27. R. N. Peacock, in Foundations of Vacuum Science and Technology, J. M. Lafferty (ed.), John Wiley & Sons, Inc., 1998.
28. E. Bopp, Solid State Technol. 43, No. 2, 51 (2000).
29. D. J. Hucknall, Vacuum Technology and Applications, Butterworth-Heinemann, 1991.
30. J. A. Venables, Introduction to Surface and Thin Film Processes, Cambridge University Press, 2000.
31. C. Kraft, in Handbook of Vacuum Science and Technology, D. M. Hoffman, B. Singh, and J. H. Thomas, III (eds.), Academic Press, 1998.
32. N. Milleron and R. C. Wolgast, in Methods of Experimental Physics – Volume 14: Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, 1979.
33. Princeton Research Instruments, Inc., 42 Cherry Valley Road, Princeton, NJ, USA. www.prileeduhv.com
34. J. Henning, in Foundations of Vacuum Science and Technology, J. M. Lafferty (ed.), John Wiley & Sons, Inc., 1998.
35. Y. T. Sasaki, J. Vac. Sci. Technol. A 9, 2025 (1991).
36. R. König, K. Grosser, D. Hildebrandt, O. V. Ogorodnikova, C. von Sehren, and T. Klinger, Fusion Engineering and Design 74, 751 (2005).
37. Methods of Experimental Physics – Volume 14: Vacuum Physics and Technology, G. L. Weissler and R. W. Carlson (eds.), Academic Press, 1979.
38. D. M. Mattox, Handbook of Physical Vapor Deposition (PVD) Processing, Noyes, 1998.
8
Mechanical devices and systems
8.1 Introduction

Mechanisms in general are a notorious source of reliability problems. This chapter deals with mechanical devices and systems (broadly construed) that are of particular concern in experimental work. The former include precision mechanisms, static and dynamic vacuum seals, and valves. A brief discussion of devices for preventing mechanical overtravel and overload is also included. Mechanical devices that are used under extreme conditions (e.g. in ultrahigh-vacuum and cryogenic environments) often suffer from friction and wear problems due to a lack of lubrication. A section of the chapter is devoted to lubricants (and especially dry lubricants and self-lubricating solids) that are useful under such conditions. Water-cooling systems are a common source of trouble in apparatus ranging from small-scale tabletop equipment (e.g. lasers) to large particle accelerators. Such problems include leaks, cooling-line blockages, corrosion, and condensation. These issues, and others, are discussed below. Mechanical vacuum pumps are dealt with in Chapter 7, while electrical connectors and cables (which are partly mechanical devices) are discussed in Chapter 12.
8.2 Mechanical devices

8.2.1 Overview of conditions that reduce reliability

Some agents and environmental conditions that tend to diminish the reliability of mechanical parts and systems are as follows.
(a) Particulate matter such as dust and grit can be very harmful to mechanical devices. It leads to accelerated wear and damage of moving parts (especially antifriction bearings, such as ball bearings), leaks in sealing devices, a decrease in the accuracy of precision mechanisms, and instabilities in mechanical assemblies requiring accurate registration of surfaces.
(b) Moisture (including high humidity) causes corrosion, promotes biological growth (i.e. mold and fungi, which can release corrosive chemicals), degrades lubricants, and holds on surfaces particulate matter that would otherwise escape.
(c) Large-amplitude vibrations lead to high-cycle fatigue failure, loosening or shifting of parts, wear of contacting surfaces by fretting, and damage to ball bearings in the form of false brinelling.
(d) High temperatures give rise to errors in precision mechanical measuring devices, owing to thermal expansion of materials. They can also cause jamming of mechanisms, due to differences in thermal expansion coefficients (see the comments below concerning cryogenic environments). Spring materials that are under stress at elevated temperatures can undergo a form of degradation called "stress relaxation" (see page 435).
(e) Vacuum environments can lead to damage and seizure of mechanisms, owing to problems with lubrication (see page 227).

Problems with mechanisms operating in cryogenic environments are usually the result of either mismatches in thermal expansion coefficients, or special difficulties concerning friction and wear under such conditions [1]. Difficulties are often encountered with small-clearance bearings (particularly antifriction types) that have been mounted in aluminum structures. This is because aluminum has a larger coefficient of thermal expansion than the steels typically used to make antifriction bearings. Thermal contraction of the materials must be taken into account during the design. Lubrication problems in a cryogenic environment are discussed on page 228. Cryogenic mechanisms can become immobilized due to the presence of traces of air or water, which solidify upon cooling. Even thin residual films of air in a mechanism can cause this sort of problem [2]. (Solid air is very sticky.) Particles of solid air floating around in a liquid-helium dewar can also cause problems. For this reason, it is preferable to operate cryogenic mechanisms in a vacuum, rather than in liquid helium. Naturally, it is usually best to place the most trouble-prone parts of a mechanism outside any regions in which environmental stresses are particularly great.
For instance, although it is possible to obtain special motors from commercial sources that are designed to operate in ultrahigh vacuum (UHV), it is generally preferable to get an ordinary motor and put it in the normal room environment. In this location, reliability problems are likely to be minimal, and any necessary troubleshooting and repair will be easy to carry out. The motion can be transmitted from the motor into the UHV chamber using a suitable motion feedthrough device (see page 246).
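The differential-contraction problem described above is easy to quantify. The sketch below uses representative handbook values for the total integrated contraction from room temperature to 4 K (these numbers are assumptions for illustration, not values from the text):

```python
# Differential thermal contraction of an aluminium housing around a
# steel bearing on cooling from room temperature to 4 K.  The total
# integrated contractions below are representative handbook values,
# assumed here for illustration.

CONTRACTION_300K_TO_4K = {       # fractional change in length, dL/L
    "aluminium": 4.2e-3,         # ~0.42 %
    "stainless_steel": 3.0e-3,   # ~0.30 %
}

def clearance_change(bore_mm):
    """Reduction (mm) in diametral clearance when an aluminium bore
    holding a steel bearing of diameter `bore_mm` is cooled to 4 K."""
    dl = (CONTRACTION_300K_TO_4K["aluminium"]
          - CONTRACTION_300K_TO_4K["stainless_steel"])
    return bore_mm * dl

# A 20 mm aluminium bore closes in on a steel bearing by roughly
# 0.02 mm -- easily enough to seize a small-clearance bearing if the
# contraction is not allowed for in the design.
print(round(clearance_change(20.0), 4))
```

Even tens of micrometres of lost clearance can jam a precision bearing, which is why the design allowance mentioned above is essential.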
8.2.2 Some design approaches for improving mechanism reliability

8.2.2.1 Use of flexural devices for small-amplitude and/or high-precision motion

Advantages and disadvantages

Mechanisms that employ sliding or rolling action (e.g. bearings, linear slides, screws, and gears) will behave erratically, at least to some extent, owing to the slip-stick character of sliding motion. Play between parts in mechanisms (i.e. backlash) is also frequently a problem. These phenomena limit the accuracy of such devices. For instance, the slip-stick
Fig. 8.1 Simple linear flexure. In this design, the upper surface largely retains its original orientation during movement. Slight non-ideal behavior, such as tilting of the upper surface, and the small displacement in the vertical direction (as shown), can be prevented with more sophisticated designs.
action of dovetail-slide translation stages is generally too large to allow these devices to be used in high-precision applications (see also page 224). The significance of slip-stick behavior is very much less in rolling element bearings than it is in plain (sliding-contact) ones. However, even rolling element bearings undergo some sliding motion, and hence are not immune to such effects. Furthermore, mechanisms based on sliding or rolling are also prone to degradation arising from wear. In many cases, particularly where small motions are involved, such difficulties can be avoided by using flexures (also known as spring- or elastic-movements). These devices, which make use of the high reliability of the elastic properties of metals in comparison with their frictional ones [3], make it possible to have rotational or translational movements, and even motion magnification or reduction (i.e. lever actions), involving only the elastic bending of metals. The basic elements in such mechanisms are referred to as: “linear flexures” or “parallel spring movements” (for linear motion – see Fig. 8.1), “flexural pivots” or “angle-spring hinges” (for rotary motion), and “elastic levers.” These devices are very useful in high-precision instruments making use of very tiny movements. Motions produced with flexures are smooth and continuous, over scales ranging from macroscopic distances (>10−2 m) down to those corresponding to atomic dimensions (<10−10 m) [4]. There is no backlash in a flexural component. A certain amount of
hysteresis will be present in such a device, but if the stresses on its bending members are not too large this will be very small. (Hysteresis is a non-ideal behavior in mechanisms, caused by inelastic deformation, that makes the motion dependent on its recent history.) Flexures are free of wear. They require no lubrication, or other maintenance [5]. A well-designed and well-made flexural mechanism can last for a virtually unlimited time without attention. Since they do not require lubrication, flexures are also well suited for operation in a vacuum, and other extreme environments. Furthermore, although flexures are commercially available, they are generally very simple and easy to make. The main reliability disadvantage of flexural mechanisms, in comparison with ones based on rolling motion, is that they are very susceptible to being disturbed by vibrations. This is because of the low vibrational resonance frequencies of flexures, relative to those of the rolling-element bearings that are often used in precision translation stages. However, flexures are considerably less susceptible than rolling-element bearings to being damaged by large vibrations, or small-amplitude oscillatory motions along their normal directions of movement. (See the discussion on page 225.) Flexural mechanisms that are properly designed are also less susceptible to being damaged by shocks. If flexures are involved in a large number of movement cycles (e.g. if they are used in a scanning device), fatigue failure may be a problem if they have not been designed correctly.

Methods for operating flexural mechanisms

Every mechanism must be driven by some means. In cases where manual operation of a flexural mechanism is desirable, and low speed is acceptable, actuation is often achieved by using a micrometer drive. Since these drives are based on the sliding motion of screw threads, slip-stick behavior can be a problem. Also, in vibration-sensitive apparatus, this scheme has the disadvantage of permitting the creation of vibrations when the micrometer thimble is touched. Alternative methods that do not have these shortcomings normally involve electrical actuation devices, although pneumatic or hydraulic schemes are also employed (e.g. in micromanipulators). Such arrangements allow manual adjustments to take place using a remote control, with the driving signals taking the form of electric currents or voltages, or pressures in a fluid. With a suitable actuator, slip-stick motion can be avoided, and the flexure can be isolated from vibrational disturbances produced by the operator. One useful type of drive is the "voice coil actuator." This essentially comprises a copper coil that has been mounted on a flexure and suspended in a magnetic field. Inexpensive versions of these can be made from ordinary loudspeakers, although purpose-made devices may be obtained from commercial sources. Both linear and rotary voice coil actuators are available. These devices are simple and reliable. Electric drive schemes making use of the piezoelectric effect are also useful in certain applications involving very precise positioning. The main problem with piezoelectric actuators is that they exhibit pronounced hysteresis and creep effects. (Creep is a slow unwanted drift in the position of the actuator tip.) Also, piezoelectric actuators often require high voltages (up to about 1000 V in some cases), which can lead to reliability problems (see page 382). On the other hand, piezoelectric devices (unlike voice-coil actuators) are not sensitive to magnetic fields. Piezoelectric actuators are also intrinsically very clean, and suited for operation in ultrahigh vacuum. Yet another type of actuator is available, which makes use of electrostriction. Such electrostrictive actuators are similar in some ways to piezoelectric ones, but do not have their hysteresis and creep problems, and do not require high voltages. However, electrostrictive actuators are relatively sensitive to changes in ambient temperature.
8.2.2.2 Direct versus indirect drive mechanisms

Devices used to transmit or transform (i.e. speed up or slow down) motions within an apparatus are frequently troublesome. Backlash, stiction, elasticity, vibrations, wear, and mechanical failure are some of the potential problems. This is especially true in the case of devices with a large number of independently moving and rubbing parts, such as rollers, linkages, gear mechanisms and drive belts. Very long drive shafts (used, for instance, to orient samples in low-temperature apparatus), can produce erratic motion because of their large elasticity (low stiffness), and stiction in the drivetrain. It is often desirable and possible to avoid the use of the aforementioned devices where it is necessary to transmit or transform motion, in favor of direct-drive arrangements. With these, a motor or actuator is directly and closely coupled to the item to be moved, without any intermediate gears, drive belts or other mechanisms. For example, stepper motors are often useful in situations where highly controlled motion is needed at a distance. Although stepper motors are normally prone to generating vibrations, the technique of microstepping allows them to undergo relatively smooth and vibration-free movements. Very low rotation speeds are possible, without the need for gears or belts. Linear stepper motors can also be obtained. These devices make it possible to produce controlled linear motion directly, rather than having to convert rotary to linear motion using, for instance, screw, or rack and pinion drives. For extremely high resolution (10−12 m) and stability (10−9 m) positioning over a small travel range (10−2 m), piezoelectric linear stepper motors can be used. (These piezoelectric devices require relatively small actuation voltages.) Direct-drive mechanisms potentially have greater mechanical simplicity, higher stiffness, lower vibrational noise, and fewer mechanical failures, than indirect-drive ones.
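The microstepping technique mentioned above works by driving the motor's two phase windings with sine/cosine current pairs, so that each full step is subdivided into many small, smooth increments. The following sketch (the step subdivision and current scale are illustrative assumptions) shows the idea:

```python
# Sketch of sine/cosine microstepping: the two phase currents of a
# stepper motor are set to points on a circle, subdividing each full
# step into many smaller increments for smoother, lower-vibration
# motion.  The subdivision and current amplitude are illustrative.

import math

def microstep_currents(step, microsteps_per_step=16, i_max=1.0):
    """Phase currents (A-phase, B-phase) for microstep index `step`."""
    # One full step corresponds to 90 electrical degrees.
    theta = (math.pi / 2) * step / microsteps_per_step
    return i_max * math.cos(theta), i_max * math.sin(theta)

# Over one full step the current vector rotates smoothly from
# (I, 0) toward (0, I) in 16 small increments:
for s in (0, 8, 16):
    ia, ib = microstep_currents(s)
    print(s, round(ia, 3), round(ib, 3))
```

Because the torque vector rotates in small angular increments rather than jumping a full step at a time, the rotor is excited far less impulsively, which is the source of the smoothness noted in the text.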
In the case of positioning mechanisms, errors due to backlash, hysteresis and stiction can be reduced or eliminated through the use of direct-drive schemes, without the need for potentially expensive and complicated encoders (position sensors) and closed-loop position control systems. (Conversely, if a positioning mechanism does exhibit errors due to backlash and other effects, a closed-loop electronic control system can greatly reduce these – see Ref. [6].) An example of where the transition from indirect- to direct-drive has resulted in improvements can be found in the control of monochromators used in spectroscopy. If several such monochromators must be scanned in synchrony, the indirect mechanical method tends to involve a multiplicity of cams, linkages, and clutches, which are prone to errors arising
from backlash and hysteresis. The use of some stepper motors for this purpose, directly turning the diffraction gratings under computer control, largely removes this problem [7]. Direct drive systems are also favored over alternatives such as gear- or roller-drives for orienting optical telescopes, owing to the aforementioned advantages [8].
8.2.2.3 Transferring complexity from mechanisms to electronics

It is often possible to improve the reliability of a given apparatus by simplifying its mechanical parts, and by carrying out the complicated operations by means of electronics. An example of this approach was given in the previous section, in which a relatively complex mechanical system, forming part of a spectrometer, is replaced by a few stepper motors running under the direction of a computer. A further simplification of spectroscopic apparatus can be achieved by eliminating the mechanical scanning altogether, and replacing the single optical detector with an array of detectors. In this way, all moving parts of the apparatus are eliminated, since the grating is stationary. It is the detector array that is scanned, by purely electronic means. The resulting spectrometer is very stable, and measurements made with it are highly reproducible [7].
8.2.3 Precision positioning devices in optical systems

The screw-drive-based translation stages and adjustable optical mounts used to alter the position and orientation of components in optical systems can be a source of trouble. There is a temptation to have at one’s disposal many adjustments, so as to take into account uncertainties in the configuration of the system. This is generally bad practice, and is likely to introduce troublesome mechanical instabilities into a sensitive optical setup [9,10]. Adjustable devices generally lack rigidity, and have other stability problems. (See also the discussion on page 81.) A better approach is to install adjustments only where they are truly necessary, and use high-quality fixed mounts in all other locations. Translation stages can drift over time due to various effects, such as the slow migration of lubricants on lead-screw threads [6], or vibrations. Also, the unauthorized alteration of adjustment settings is a common and serious problem with optical setups [10]. For these reasons, position- or angle-adjusting devices should always be positively locked after being set. Translation stages with linear ball bearings are very vulnerable to permanent damage from mechanical shocks (due to brinelling – see page 225). Such an event can result in a very large reduction in the repeatability of the stage [9]. Translation stages that are equipped with roller bearings are less susceptible to shocks, and can withstand higher static loads than ones containing ball bearings, but are more prone to problems due to the presence of particles on the bearing-ways [11]. The greater vulnerability of ball-bearing devices to shock is due to the very small contact area of the balls on the bearing-ways. On the other hand, cylindrical rollers are less able than balls to move out of the way of particles in their path.
The painting of precision mechanisms, or items attached to them, should be avoided, because the paint will flake off eventually and get into the moving parts [6]. Other surface finishes are preferable (see page 321).
8.2.4 Prevention of damage due to exceeding mechanical limits

Unless a mechanism contains components that move along a periodic path (as in the case of rotation), there will inevitably be a limit to the extent of its motion. (One example might be a tabletop instrument that is mounted on a carriage and moves along a track.) Consequently, some devices are usually needed to prevent such limits from being exceeded. Motor-driven apparatus is often controlled using computers. Hence, there is a temptation to use the software to determine whether a mechanical position limit has been reached, and to prevent further motion. However, computer bugs and human error can seldom be ruled out, and therefore one should never trust software alone to prevent damage. It is essential to always have simple and basic backup devices, such as limit switches or hard stops, which are built directly into the mechanism, and capable of preventing failure under any condition. The possibility also exists that mechanisms will break due to excessive forces or torques acting upon their members. This may happen if a mechanism becomes jammed, or it accidentally runs up against an object. Again, relying on software alone to prevent damage can lead to trouble. Simple protection devices, such as shear pins or mechanical torque limiters, should be employed to prevent forces or torques from becoming excessive. Shear pins are mechanical analogs of fuses, and generally take the form of a small cylinder that is interposed between two mechanical members (such as mechanical links or drive shafts) used to transmit a force or torque. The pin is designed to break if the shear stress created in it by the two members exceeds a certain amount. A torque limiter is a simple mechanism that allows a torque to be transmitted through it if the magnitude is sufficiently small, but turns freely if a given torque level is exceeded. An example of such a device can be found on the thimble of a micrometer.
Spring-loaded clutches and friction-style slip clutches are two devices that are often used as torque limiters – the former being the more reliable. Designs of various mechanisms for limiting torque, controlling speed, preventing reverse rotation, limiting shaft rotation, and preventing overloads are presented in Ref. [12]. Devices for actuating limit switches, and correct and incorrect ways of using limit switches in mechanisms, are also described therein. (See also the discussion on alternatives to mechanical switches on page 399.) Protection devices such as limit switches and torque limiters are available commercially.
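The shear-pin criterion described above (the pin fails when the shear stress in its cross-section reaches the material's ultimate shear strength) can be sketched as a short sizing calculation. The force and the strength value for the pin material below are illustrative assumptions, not recommendations; actual strengths depend on the alloy and temper.

```python
import math

def shear_pin_diameter(breaking_force: float, tau_ultimate: float,
                       shear_planes: int = 1) -> float:
    """Diameter (m) of a cylindrical pin that shears at 'breaking_force' (N),
    given the material's ultimate shear strength 'tau_ultimate' (Pa).
    Failure criterion: F = tau_ultimate * (pi/4) * d**2 per shear plane."""
    area = breaking_force / (tau_ultimate * shear_planes)
    return math.sqrt(4.0 * area / math.pi)

# Assumed figures: protect a drive against forces above 500 N, using a
# free-machining brass pin with tau_ultimate ~ 250 MPa (illustrative value).
d = shear_pin_diameter(breaking_force=500.0, tau_ultimate=250e6)
print(f"pin diameter ≈ {d * 1000:.2f} mm")  # ≈ 1.60 mm
```

A pin loaded in double shear (two shear planes) carries twice the force for a given diameter, which the `shear_planes` parameter accounts for.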
225
8.2 Mechanical devices

8.2.5 Bearings

8.2.5.1 Plain bearings

Plain (or sliding contact) bearings, in which the load is supported by sliding surfaces, are restricted to operation at relatively low speeds. They have high ratios of static to dynamic friction when compared with other types of bearing, resulting in a tendency to undergo slip-stick motion. Also, the level of friction depends on the speed. These behaviors limit the accuracy of precision positioning devices, such as linear stages with dovetail slides, which make use of plain bearings [6].

Of the most common bearing types, plain bearings are the most robust. As long as the load can be distributed over a large surface area, plain bearings are very resistant to damage by vibrations and shocks. They are also relatively tolerant of particles of dust and grit, because these tend to become embedded in the softer of the two sliding surfaces, and are thereby prevented from causing further damage [13]. Plain bearings also tend to damp vibrations effectively (in comparison with rolling-element ones) and are also quieter.

In high- and ultrahigh-vacuum environments, where liquid lubricants cannot be used, rolling-element bearings are to be preferred over plain types. The latter are suitable only if short lifespans (measured in number of revolutions, or sliding cycles) are acceptable, and substantial clearances can be provided between the sliding parts [3]. In such environments, self-lubricating solids (see page 233) are useful as bushing materials. Hard metals, such as AISI 440C stainless steel, are suitable for revolving shafts.
8.2.5.2 Rolling-element bearings

Bearings in which the load is supported by rolling members (such as balls, as in "ball bearings," or cylinders, as in "roller bearings") are called "rolling-element" or "antifriction" bearings. The latter term is used because the static and dynamic friction of such devices can be very low compared with that of plain bearings. Slip-stick effects are also very small, and hence such bearings are well suited for use in precision positioning devices. Rolling-element bearings are available for both rotary and linear motion. They are capable of operating at very high speeds (>100 000 RPM for rotating types).

Antifriction bearings are much more susceptible to damage from shock than plain bearings. The effect of impacts is to force the rolling elements against their raceways (the structures in which they roll), producing indentations in the latter. This mode of damage is referred to as "true brinelling" [14]. It tends to occur during transport, mishandling of items containing the bearings, or as a result of improper treatment during installation (e.g. using a hammer to tap a bearing into place). Packaging for transport is discussed on page 93.

Vibrations can also degrade antifriction bearings. This is an especially serious problem when the bearing is not rotating, so that the balls or cylinders repeatedly rub against particular spots on the raceways. The result of the rubbing caused by vibration is the local removal of lubricants from points of contact, allowing direct metal-to-metal interaction. The consequence of this is a type of wear referred to as friction oxidation, or fretting. This mode of bearing damage is called "false brinelling." Vibration-induced damage is more of a problem in the case of roller bearings than ball bearings: whereas a ball bearing can roll in any direction in the presence of arbitrary vibrations, roller bearings can roll in only one direction, and must skid in the others.
False brinelling due to external vibrations is mainly of concern for bearings in large machines. A related form of damage can occur when antifriction bearings (on equipment large and small) are required to undergo small-amplitude oscillatory motion. For instance, this type of motion may occur in instruments that carry out some kind of scanning operation. In such cases, the net result is that lubricant is excluded from the space between the rolling
element and the raceways, and fretting occurs. Steel antifriction bearings are therefore not suitable for this type of application, although hybrid types (with ceramic rolling elements) are acceptable [13]. The use of solid lubricants rather than liquid ones can also be helpful in some situations [1]. Often the best approach in such cases is to use a flexural pivot or a linear flexure, rather than an antifriction bearing (see page 219).

Unlike plain bearings, rolling-element types are extremely sensitive to the presence of dirt. Even small quantities of abrasive particulates will rapidly destroy them. This is especially true of the precision bearings that are often found in instruments, which quickly lose their repeatability and accuracy [13]. High standards of cleanliness should be maintained while handling and installing antifriction bearings. Such activities should not be carried out in areas where grinding operations are done.

An important method of reducing the entry of dust into bearings is to use ones with a built-in environmental seal. Bearing "side shields" incorporating labyrinth-type seals are particularly useful, and can be provided on most types of antifriction bearings. In most laboratory work, bearings with built-in side shields should be used as a matter of course. This is desirable not only to limit the entry of dust, but also to reduce the exit of any oil lubrication onto adjacent parts of an instrument [15]. Grease-lubricated antifriction bearings are generally preferable to oil-lubricated ones (except for very high rotation speeds), because grease tends to stay in place in the bearings, and also inhibits the entry of particulate matter.

Excessive heat will greatly reduce the lifetime of a bearing, by causing evaporation of its lubricant [16]. Since the vapor pressure (which determines the evaporation rate) is an exponential function of temperature, even relatively small temperature rises can greatly diminish bearing lifetimes.
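The exponential temperature dependence just mentioned can be illustrated with the Clausius–Clapeyron relation, p(T) ∝ exp(−ΔH_vap/RT). The enthalpy of vaporization used below is an assumed, order-of-magnitude figure for a heavy lubricating oil, chosen only to show the scale of the effect.

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def vapor_pressure_ratio(t1_c: float, t2_c: float, dh_vap: float) -> float:
    """Ratio p(T2)/p(T1) from the Clausius-Clapeyron relation,
    p(T) ∝ exp(-ΔH_vap/(R·T)), treating ΔH_vap (J/mol) as constant."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return math.exp((dh_vap / R) * (1.0 / t1 - 1.0 / t2))

# Assumed value: ΔH_vap ≈ 100 kJ/mol for a heavy oil (illustrative only).
ratio = vapor_pressure_ratio(25.0, 45.0, 100e3)
print(f"a 20 °C rise multiplies the evaporation rate by ~{ratio:.0f}x")
```

Even this modest temperature rise increases the evaporation rate by roughly an order of magnitude, which is why bearing lubricant life falls so sharply with temperature.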
Degradation of an antifriction bearing can also take place if an electric current is allowed to pass across the rolling element and raceway surfaces. This can cause arcing and pitting of these surfaces, and ultimately failure of the bearing. Sometimes currents can pass through bearings inadvertently, possibly because of current leakage due to faulty insulation in nearby wiring, electromagnetic induction, or stray ground currents. If electric currents must be deliberately passed through a rotating structure, this should be done by using purpose-made "slip rings," not by means of the bearings. In cases where electric-current-related damage is occurring, and it is not possible to insulate a bearing from surrounding metallic structures, a hybrid bearing with ceramic rolling elements may be useful.

As discussed on page 227, the use of bearings in ultrahigh-vacuum environments can be problematic. Antifriction bearings with AISI 440C stainless-steel balls and raceways are often employed in this application. Although difficulties can usually be avoided by using suitable solid lubricants, bearing lifetimes may still be insufficient. In such cases, the use of hybrid bearings with ceramic rolling elements and 440C stainless-steel raceways may be the best approach. The adhesive wear that normally results from the rolling of metal balls on metal raceways (and which is increased in a vacuum) is greatly reduced by making use of a ceramic. For applications involving slow movement, hybrid bearings with sapphire balls are often constructed in laboratory workshops [15]. Commercial hybrid bearings normally contain silicon nitride rolling elements. Bearings of this type, which have been coated with WS2 lubricant for use in ultrahigh vacuum, are available [17]. (NB: It is generally best
to lubricate all bearings, including even ceramic ones.) Ceramic bearings are discussed in Ref. [18].
8.2.6 Gears in vacuum environments

Because sliding surfaces generally exhibit high levels of adhesive wear under vacuum conditions (see Section 8.2.7.1), gears that must operate in such environments should preferably undergo rolling motion. That is, the gear teeth should roll, rather than slide, across one another. Although this is normally the case for spur gears and rack-and-pinion devices, worm-gear surfaces are subjected to sliding motion. Even under normal atmospheric-pressure conditions, this can lead to troublesome wear problems. In ultrahigh-vacuum environments, cold welding of the gear teeth can be a very serious issue [19]. However, worm-gear drives cannot be easily dispensed with, since they provide a simple means of producing slow, accurate, and stable rotation. For this reason, they are often used in goniometers and the like, for orienting material samples (e.g. in a magnetic field). In situations where worm gears must be used in high or ultrahigh vacuum, special attention should be given to the selection of materials, and the provision of lubrication (see below). (No matter what type of gear mechanism is involved, lubrication is important.) In some cases, the application of hard ceramic coatings (such as TiC or TiN) to the surfaces by vacuum-deposition methods can help to reduce cold-welding and wear problems. Some guidelines for designing worm-gear mechanisms for use in vacuum environments are provided in Ref. [1].
8.2.7 Lubrication and wear under extreme conditions

8.2.7.1 Introduction

It is sometimes necessary to use mechanisms, or other devices containing rubbing parts, in situations where normal lubricants cannot function, disturb the environment, or degrade. The most common example of such an environment is high and ultrahigh vacuum (UHV). Ordinary liquid lubricants (oils and greases) deteriorate under vacuum, and release vapors that can cause unacceptable contamination. UHV systems are particularly troublesome in this regard, since they are generally extremely sensitive to the latter. Furthermore, the high-temperature bakeout process that is needed in order to achieve the low pressures destroys most ordinary lubricants.

Leaving the rubbing parts unlubricated is often not an option, even with very small loads, since clean metal parts that slide against each other in vacuum tend to adhere and weld [15, 20]. This behavior is especially troublesome at pressures of less than about 10⁻³–10⁻⁶ Pa, at which point surface oxide layers on a metal (which tend to inhibit welding) are not readily replenished after being rubbed off [21]. One consequence is that high levels of friction are much more of a problem than they would be under normal atmospheric-pressure conditions. Also, even a small amount of use under vacuum frequently causes unlubricated bearing surfaces to become very rough [15]. Other unlubricated mechanical parts, such as gears and cams, can also undergo surface damage when operated in vacuum.
The high-temperature bake that is needed to achieve UHV conditions makes the situation significantly worse. Under vacuum conditions, clean surfaces under high contact pressures and high temperatures tend to diffusion-weld to each other even in the absence of sliding motion [22]. Threaded fasteners (i.e. bolts, nuts, screws, etc.) in UHV systems tend to weld together (seize) during bakeout. (The welding of stainless-steel bolts or screws to stainless-steel nuts or tapped holes is a particularly common problem.) Such difficulties can be overcome by coating the fasteners with a suitable dry lubricant, as discussed below.

Cryogenic environments can also be troublesome for mechanical devices, since oils and greases freeze at low temperatures. Unlubricated low-speed mechanisms with very small loads can operate satisfactorily at such temperatures. These are frequently used in applications involving intermittent operation in low-temperature instruments. However, high friction and its attendant effects (such as slip-stick motion) are still an issue. Large friction coefficients can lead to excessive heating of the low-temperature regions while movement is taking place.

In high-radiation environments, ordinary hydrocarbon greases and oils decompose into glue-like substances [23]. Highly reactive conditions are also troublesome for normal lubricants. For example, ordinary oils and greases react violently if exposed to pure oxygen at high pressures (e.g. inside an oxygen pressure-regulator – see page 260).

The reliable lubrication of mechanisms in extreme environments is of great importance in space applications, and much effort has been spent on developing lubrication techniques for this purpose. Hence, books and journal articles that deal with spacecraft mechanisms are often very useful sources of information on the subject.
8.2.7.2 Selection of materials for sliding contact

As a general rule, identical metals should not be used for pairs of surfaces in sliding contact. For example, the rotating shaft in a plain bearing should not be made of the same material as the bushing. A useful rule of thumb is as follows [24]. Metals with different metallurgical properties (leading to a low formation of solid solutions with each other), and with different crystal structures, tend to have low wear rates when used together in a wear couple. When this is not the case, high wear rates tend to result. This arises from the welding and breaking apart of asperities on the surfaces of the metals: an effect that is known as adhesive wear.

Some metals behave particularly badly while sliding either against themselves or against dissimilar metals. These include aluminum alloys (in general), austenitic (300 series) stainless steels, and titanium alloys. Their wear properties can be significantly improved through the use of suitable surface treatments. For example, aluminum can be hard anodized, and titanium can be vacuum-coated with titanium nitride (TiN). Nevertheless, it is generally best to avoid using such metals for bearing surfaces, or in other applications in which sliding contact takes place.²
² NB: Neither anodized aluminum nor TiN is a lubricant, and hence lubrication must be supplied separately. Anodized aluminum is unsuitable for use in high-vacuum environments, because of its large outgassing rate.
While the above points are relevant in most environmental conditions (including air at normal temperatures and pressures), they are especially significant when the sliding takes place in a vacuum.
8.2.7.3 Hydrodynamic and boundary lubrication

It is helpful to make a distinction between two modes in which lubricants function, and thereby reduce friction and wear. The first of these is "hydrodynamic lubrication." This is important for a shaft turning at high speed within an oil-lubricated plain bearing. The oil is acted upon by the rotating shaft in such a way that it piles up at the smallest gap between the shaft and the bearing, thereby producing a force that keeps the shaft and the bearing separated. This force is purely hydrodynamic in origin – depending solely on the viscosity of the oil and the speed of relative motion of the surfaces. When the rotation ceases, the force disappears, and the shaft and bearing can then come into contact.

Since shafts are sometimes exposed to large radial loads that overwhelm hydrodynamic forces, and since they must rotate slowly at least at some points during their operation (when they start and stop), a second mode of lubrication must generally be available. This is effected by using lubricants that are capable of forming thin tenacious films on the sliding surfaces, which keep them apart and reduce direct metal-to-metal contact. Such films must be "soft" in the sense of being easy to shear. This type of lubrication is called "boundary lubrication." It is also important when one is dealing with reciprocating or slow sliding motion, in which conditions do not permit effective hydrodynamic lubrication to take place.

Wear rates are lowest when hydrodynamic lubrication is in effect. The most effective lubricants have both good hydrodynamic and good boundary lubricating properties.
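The dependence of hydrodynamic friction on viscosity and speed can be made concrete with Petroff's classical equation for a lightly loaded, concentric journal bearing. This is a standard textbook idealization, not a result from this text, and all numerical values below are assumed for illustration.

```python
import math

def petroff_friction_torque(mu: float, n_rev_s: float, r: float,
                            length: float, clearance: float) -> float:
    """Frictional torque (N·m) on a lightly loaded journal bearing from
    Petroff's idealization: T = 4*pi^2 * mu * N * r^3 * L / c, where mu is
    the oil viscosity (Pa·s), N the speed (rev/s), r the shaft radius (m),
    L the bearing length (m), and c the radial clearance (m). The torque
    scales linearly with both viscosity and speed."""
    return 4.0 * math.pi**2 * mu * n_rev_s * r**3 * length / clearance

# Illustrative numbers (assumed): 25 mm diameter shaft, 25 mm long bearing,
# 25 µm radial clearance, a light machine oil (~0.03 Pa·s), 3000 RPM.
T = petroff_friction_torque(mu=0.03, n_rev_s=50.0, r=0.0125,
                            length=0.025, clearance=25e-6)
print(f"friction torque ≈ {T * 1000:.0f} mN·m")
```

Because the torque vanishes as the speed goes to zero, the model also shows why boundary lubrication must take over during starting and stopping.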
8.2.7.4 Liquid lubricants for harsh conditions

Several useful extreme-condition liquid lubricants are listed in Table 8.1.

A potentially important problem with liquid lubricants in general is the migration (or "creep") of their base oils across surfaces. This movement can be especially rapid if the surfaces are very clean. The possible harmful consequences of creep are the depletion of lubricant from bearing surfaces, and the contamination of nearby sensitive areas, such as optics or experimental samples. Surface migration is particularly severe in the case of silicone and perfluoropolyether lubricants, but can also be problematic for hydrocarbon types. Commercial brush-on barrier films are available that can limit the creep of some types of lubricant.

The low vapor pressure of perfluoropolyether (PFPE) oils and greases, and their ability to withstand high temperatures, would suggest that they should be very useful in UHV applications. However, creep is always a possibility, and the vapor pressure of a PFPE-based vacuum grease (Krytox® LVP) at a typical bakeout temperature is much higher than the room-temperature value (10⁻³ Pa at 200 °C vs. 10⁻¹¹ Pa at 20 °C) [25]. It can be difficult to remove PFPEs from surfaces where they are not wanted (see page 98). Multiply alkylated cyclopentanes (MACs) are significantly better than PFPEs with regard
Table 8.1 Some extreme-condition lubricants and self-lubricating solids

Polyorganosiloxane (silicone)-based oils and greases
  Examples: Dow Corning® "high-vacuum grease"
  Uses: Lubricating dynamic O-ring seals (better than hydrocarbon lubricants for this)
  Advantages: Grease temp. range: −40 to 204 °C; inert; vapor pressure 10⁻⁷ Pa at 25 °C
  Disadvantages: Not good for metal-on-metal lubrication; prone to creep; can cause harmful contamination (see page 97)
  Other points: Forms insulating deposits in e-discharges; avoid using, except for dynamic O-ring seals

Perfluoropolyether (PFPE)-based oils and greases
  Examples: Krytox® oils and greases
  Uses: Antifriction bearings in vacuum apps.; high-temperature apps.; lubrication in pure-oxygen environments
  Advantages: Temp. range: −15 to 300 °C (continuous use); vapor pressure 10⁻¹¹ Pa at 20 °C; highly inert
  Disadvantages: Boundary lubrication not as good as ordinary lubricants (degrades eventually under shear); prone to creep
  Other points: Probably the most stable and inert of all liquid lubricants; preferable to silicones in e-discharge devices

Multiply alkylated cyclopentane (MAC)-based oils and greases
  Examples: Pennzane®
  Uses: High-vacuum and precision instrumentation bearing apps. requiring low vapor pressure
  Advantages: Temp. range: −50 to 125 °C; vapor pressure 10⁻¹⁰ Pa at 25 °C; good boundary lubricant
  Disadvantages: Not as inert and high-temp. stable as PFPE and silicone lubricants
  Other points: Low vapor pressure is the only outstanding quality; similar in other ways to ordinary lubricants

Dry lubricants
  Examples: MoS2; WS2; PTFE powder; soft metal films (e.g. silver and gold); graphite (but not in vacuum)
  Uses: Lubrication under extreme conditions (cryogenic, high temp., high radiation); ultrahigh vacuum (UHV) antifriction bearings; low-load, low duty-cycle plain bearings and gears
  Advantages: No creep; extremely low vapor pressure; very wide temp. ranges; clean (except for dust); good boundary lubrication
  Disadvantages: No hydrodynamic lubrication; short-lived lubrication compared to oils or greases (<10⁸ bearing revolutions); not as reliable as oil or grease lubricants
  Other points: Often vacuum deposited by sputtering; dry-film lubricant spray coatings (aerosols) are available; graphite does not lubricate in a vacuum (moisture is necessary)

Self-lubricating solids (construction materials, not lubricants)
  Examples: Bulk PTFE; composites (e.g. PTFE/glass/MoS2 and polyimide/MoS2)
  Uses: Bushings in plain bearings; ball-cages in antifriction bearings; gears
  Advantages: Self-lubricating under extreme conditions; advantages similar to dry lubricants
  Disadvantages: Disadvantages similar to dry lubricants; PTFE (Teflon) tends to cold-flow under loads; permeation and outgassing may be an issue in UHV systems (e.g. with PTFE)
to creep, and also have low vapor pressures. However, they cannot be subjected to UHV bakeout temperatures. Liquid lubricants in general should be avoided in situations where contamination is a major concern, or applied only in very small amounts. Dry lubricants and self-lubricating solids are preferred under these circumstances, including in particular UHV work. On the other hand, liquid lubrication is generally necessary when high-speed rotary motion is required. Also, liquid lubricants tend to produce comparatively low and predictable levels of friction. The only alternative is to use magnetic bearings, which do not require lubrication and produce no friction, but tend to be bulky and very expensive. A method of applying PFPE oil to antifriction bearings (ball bearings), for use in vacuum, is described in Ref. [15].
8.2.7.5 Dry lubricants

Solid materials that form coherent films, and have low shear strengths under loads, can often function as "dry lubricants" (or "solid lubricants"). These include sputtered or electroplated films of soft metals (such as silver and gold), layered materials (e.g. MoS2), as well as soft plastics (e.g. PTFE), and other materials [21]. Dry lubrication is purely a type of boundary lubrication, since hydrodynamics cannot be involved.

Dry lubricants can generally be characterized as being good boundary lubricants, very stable (although air and moisture can cause problems for some types), and clean. They are often used in situations in which liquid lubricants would be unsuitable, such as:

(a) at very low or high temperatures (e.g. sputtered MoS2 can be used between −260 °C and 400 °C [26]),
(b) in high-radiation environments (e.g. dosages of >10⁷–10⁸ Gy do not affect MoS2 resin-bonded films [23]),
(c) in places where liquid lubricants cannot be used because they would attract particles,
(d) in ultrahigh-vacuum or optical equipment, where possible contamination by liquid lubricants or their vapors is a concern (e.g. the vapor pressure of MoS2 is ≈10⁻¹² Pa at 25 °C [27]).

Furthermore, unlike liquid lubricants, dry lubricants do not undergo surface migration, or "creep."

The disadvantages of dry lubricants in comparison with liquid ones include the complete lack of hydrodynamic lubrication ability (meaning greater wear of the sliding surfaces – especially at high speeds), and the absence of the self-healing capacity that is characteristic of oils and greases. With regard to the latter, dry lubricants are prone to forming local defects, which are not repaired by natural processes of migration, as occurs in a liquid-lubricated bearing system. Hence, the lifetime of dry-lubricated bearings is more limited and not as predictable as that of liquid-lubricated types [28]. In fact, dry-lubricated antifriction bearings (i.e.
ball bearings) are seldom used in applications requiring more than 10⁸ revolutions in vacuum [29]. Liquid-lubricated bearings, in contrast, may be expected to last for 10¹⁰ revolutions or more. In applications involving sliding under vacuum, the upper limit for dry lubricant films is about 10⁶ sliding cycles [30].
Mechanisms with certain dry lubricants in the form of vacuum-deposited films, such as MoS2, should not be run in air [29]. Also, MoS2 coatings tend to be sensitive to humidity, and should preferably be stored in dry conditions.

Layered compounds such as MoS2 and WS2, and soft metals, are often deposited on antifriction bearings as thin films by vacuum deposition. This takes the form of sputtering or ion plating; MoS2 is generally sputtered, while metals are often ion plated. Vacuum deposition is an expensive method of applying dry lubricants. However, it does tend to give the best results. This process can be carried out by companies that provide coating services.

It is also possible to mix a lubricating compound (e.g. MoS2) with an organic or inorganic binder, and spray it onto a bearing surface as a kind of lubricating paint (called a "bonded film"). Such mixtures (also referred to as "dry-film lubricant spray coatings" or "solid-film lubricants") are available from commercial sources. These solid-film lubricants are best suited for lubricating sliding surfaces operating at low speeds [26]. They are generally not appropriate for use in antifriction bearings. The lubrication and wear properties of solid-film lubricants in vacuum are not as good as those of sputtered MoS2 [29]. However, solid-film lubricants are inexpensive and readily available. The durability of bonded films depends very strongly on the proper preparation of substrate surfaces prior to deposition. Lubricants that contain an inorganic binder (e.g. sodium silicate) are UHV-compatible [19]. (See also Ref. [31].) Techniques for applying bonded MoS2 films are discussed in Ref. [23]. Their use as lubricants for in-vacuum gears is discussed in Refs. [19] and [32].

The burnishing of powdered MoS2 into a surface (using no binder) is sometimes suggested as a method of creating a lubricating film.
However, this technique produces coatings with poor adherence and very unpredictable properties, and should generally be avoided in applications involving bearings [26]. (Although it may be suitable for lubricating threaded fasteners.) PTFE lubrication can be applied to surfaces as a loose powder (often using a spray can), and similar concerns also apply to this method.

It is also possible to apply MoS2 and WS2 to surfaces using a high-velocity pressure-spray technique. The coatings thereby produced are better than those created by burnishing, but are not nearly as long lasting as sputtered or bonded films [26]. However, the process is relatively inexpensive compared with sputtering. Like sputtered films, but unlike bonded ones, high-velocity pressure-spray deposited films are very thin (<0.5 µm in thickness). This means that they can be used in antifriction bearings, which have relatively small clearances. Bearings that have been pressure-spray coated with WS2 (sold as "Dicronite®") are available commercially for use in ultrahigh vacuum [17]. One can also get components custom-coated with WS2 using this technique by various suppliers. The friction coefficient and lifespan of such films have been found to vary considerably from one supplier to another [33]. In the worst cases, the lifespan is likely to be unacceptably small for many purposes.

In less demanding applications, soft metals are applied by electroplating. For example, stainless-steel threaded fasteners used inside UHV equipment can be electroplated with gold or silver to prevent welding of their threads during bakeout [34]. Silver layers should be no more than 10 µm thick [35]. Coatings of 5 µm thickness are typical. Normally, only the bolt or screw threads (not the female ones) are coated. If both male and female surfaces are coated, they will tend to weld together. The use of gold, rather than silver, makes it easy
to check that the film has been properly applied to the stainless steel. Electroplating can (and should) be done by a commercial organization.
8.2.7.6 Self-lubricating solids

Solid substances that supply their own dry lubricants during operation as bearing materials are referred to as "self-lubricating solids." These include bulk PTFE, and various composites, such as PTFE/glass fiber/MoS2, and polyimide/MoS2. The composites contain a dry lubricant (such as PTFE or MoS2), which has been incorporated into another material (such as glass fiber or polyimide) that provides structural support. Composite materials are generally preferred to bulk PTFE, partly because they are more resistant to slow ongoing deformation ("cold flow") under stress.

Self-lubricating solids are often used to make bushings in plain bearings, or ball cages in antifriction bearings. In the latter capacity, they act as "sacrificial cages," which continuously supply the balls and raceways with lubricant as they wear [3]. Bearings with this kind of cage are suitable for use only under light loads [30]. Antifriction bearings of this type, which are intended to operate in vacuum and high-temperature environments, are available from commercial sources. Some UHV applications of antifriction bearings with sacrificial cages are described in Refs. [36] and [37]. The polyimide/MoS2 material Vespel® SP-3 can be used to make self-lubricating gears [3].
8.2.8 Static demountable seals

8.2.8.1 General considerations

Introduction

Static demountable seals include such things as O-ring joints and ConFlat® (or CF) copper-gasket seals. The discussions in this section are primarily concerned with vacuum seals, but are relevant to other types as well. The most common causes of static demountable seal failures are:
(a) damaged or deteriorated seals and sealing surfaces,
(b) the presence of contaminants on these surfaces,
(c) unevenly tightened and/or misaligned flanges, and
(d) seals that have not been seated properly.
Damage to sealing surfaces and seals

Scratches on sealing surfaces are probably the most common cause of demountable seal leaks in many systems. Even the minutest scratch can produce a remarkably large leak. In those cases where a scratch crosses a sealing surface at close to a right angle to the orientation of the seal, leaks are much more likely than would be the case if the scratch
followed the seal [38]. Hence, in the case of circular flanges, scratches that cross the sealing surface in a radial direction are particularly harmful. Circular flanges are normally machined on a lathe, and as it happens this produces machining marks that are also circular (or nearly circular), and hence well orientated in this regard [15]. In fact, attempts to "improve" lathe-turned surface finishes by grinding or polishing them in random directions can be counterproductive. Mirror-smooth surfaces actually have less reliable sealing properties than lathe-turned ones, with their fine concentric grooves [39].

Generally, the use of abrasives to finish circular seal surfaces should be avoided, because smearing of the material is likely when this is done, and abrasive particles can become embedded in the surface. It is also more difficult to control the surface marks with this method than it is by turning on a lathe. The types of surface marks produced by milling machines (which must be used to make non-circular O-ring grooves on flanges) are also more likely to lead to leaks than those resulting from turning. These marks should normally be removed after machining using an abrasive such as emery paper – taking steps to ensure that the movements of the paper follow the orientation of the seal. If an attempt is to be made to repair a scratch on any type of flange by using an abrasive, once again, every effort should be made to ensure that the resulting marks follow the seal. At a large particle-accelerator facility (Fermilab), scratches on sealing surfaces are removed by rubbing them down with a #600 polishing grit [40].

Flanges that are often handled should, if possible, be designed so that their sealing surfaces are recessed, or protected in some other way against damage [22]. From the point of view of scratch resistance, stainless steel is a better flange material than softer metals such as aluminum.
If a metal sealing surface has been oxidized (possibly as a result of heating from a welding or brazing operation), the effect on leaks is equivalent to that of having a rough surface finish [38]. Such surfaces should be machined after being exposed to high temperatures to restore a smooth finish, or protected from oxidation using an inert atmosphere. It is often necessary to use a knife to remove O-rings and components with sealing surfaces from their packaging. These sealing items should be examined afterwards for cuts, abrasion or other damage [41].
Leaks due to contaminants on seals and sealing surfaces
The presence of foreign matter of any kind on seals and sealing surfaces is usually undesirable. However, two particularly harmful and common types are hairs and lint, which form leak paths if they happen to be resting across a seal. Very roughly, depending on numerous factors, a hair lying across an O-ring can produce leaks with magnitudes of between 10⁻⁷ and 10⁻⁴ Pa·m³·s⁻¹ [42]. Hard particles on sealing surfaces can lead to scratches. In vertical flanges that are clamped together with bolts, the bolts themselves can be a significant source of debris (e.g. molybdenum disulfide thread lubricant) [43]. It is a good practice to clean metal sealing surfaces and seals with solvent before initially assembling them. O-rings should be wiped with a dry lint-free cloth [15]. Because of the potential for outgassing problems, solvents should not be used to clean O-rings.
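To put these leak magnitudes in perspective, the pressure rise they produce in an isolated (valved-off) chamber follows directly from dP/dt = Q/V. The sketch below is illustrative only: the leak rates are the range quoted above for a hair across an O-ring, while the 50-litre chamber volume is a made-up example.

```python
def pressure_rise_rate(leak_rate_pa_m3_s: float, volume_m3: float) -> float:
    """Return dP/dt in Pa/s for an isolated chamber: dP/dt = Q / V."""
    return leak_rate_pa_m3_s / volume_m3

V = 0.05  # 50-litre chamber (hypothetical example)
for Q in (1e-7, 1e-4):  # leak range quoted above for a hair across an O-ring
    rate = pressure_rise_rate(Q, V)
    print(f"Q = {Q:.0e} Pa·m³/s -> dP/dt = {rate:.1e} Pa/s ({rate * 3600:.2g} Pa/h)")
```

Even the small end of the quoted range produces a pressure rise of several pascals per hour in such a chamber, which makes clear why a single hair can ruin a high-vacuum seal.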
8.2 Mechanical devices
Fig. 8.2  Recommended initial bolt-tightening sequences for two flange configurations (see Ref. [45]). The final tightening sequence involves moving from one bolt to its nearest neighbor around the flange (following a circular pattern, in the case of circular flanges).
Protection of seals and sealing surfaces
Seals and sealing surfaces must be actively protected from damage. One should not place a sealing face down on any surface without protecting it with a clean, lint-free cloth, or something similar [41]. The practice of sliding flanges across one another as they are being mated should be avoided [44].
Tightening of threaded fasteners on flanges
The use of improper fastener (bolt or screw) tightening sequences on vacuum flanges can lead to leaks, because the sealing surfaces may be misaligned as a result. This is mainly a problem when metal seals, such as the copper gaskets used in UHV equipment, are involved. Improper tightening sequences can also cause other damage, such as cracking or warping, of the items being joined. (This is an especially important consideration when working with viewports.) Decapitation or stripping of the fastener heads sometimes takes place when these are over-tightened, in a (usually hopeless) effort to cure a leak that has arisen because mating flange surfaces are not sufficiently parallel. The correct installation procedure involves systematically moving back and forth across the flange, while gradually increasing the torque on the bolts. This is continued up to, but not including, the very last tightening stage. In the case of circular flanges, these early tightening sequences should form a star-pattern, as shown in Fig. 8.2 [45]. The complete set of bolts is initially tightened at a low level of torque. Further tightening of the entire set is repeated several times (perhaps three or four) at successively higher torques, until the correct final values are reached. During this process, the flange surfaces should be periodically inspected to ensure that they are parallel. The final tightening sequence involves moving from one bolt to its nearest neighbor in a circular pattern. In the case of the CF flanges used in UHV equipment (see page 239), the two flanges in a mating pair should touch when the tightening is complete, so that there is no visible gap between them. This should be verified with a flashlight. Otherwise, the seal may not function properly when it is baked. (Other flange types may have different requirements.)
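The incremental star-pattern procedure described above can be sketched programmatically. The following is a rough illustration, not a substitute for the published sequences in Ref. [45]: it generates a criss-cross order by stepping around the bolt circle with a stride near half the bolt count (chosen coprime to the bolt count so every bolt is visited), and schedules a few passes at increasing torque. The bolt numbering and the four-pass schedule are assumptions for the example.

```python
from math import gcd

def star_sequence(n_bolts: int) -> list[int]:
    """Generate a criss-cross ('star') tightening order for a circular
    flange by stepping around the bolt circle with a stride close to
    half the bolt count. Bolts are numbered 1..n around the flange."""
    stride = n_bolts // 2 + 1
    while gcd(stride, n_bolts) != 1:  # ensure every bolt gets visited
        stride += 1
    return [(i * stride) % n_bolts + 1 for i in range(n_bolts)]

def tightening_passes(n_bolts: int, final_torque: float, n_passes: int = 4):
    """Schedule several star-pattern passes at increasing torque (the text
    suggests perhaps three or four), finishing at the full value. The very
    last pass would then run bolt-to-bolt around the flange instead."""
    seq = star_sequence(n_bolts)
    return [(round(final_torque * (p + 1) / n_passes, 1), seq)
            for p in range(n_passes)]

for torque, order in tightening_passes(8, final_torque=20.0):
    print(f"{torque:5.1f} N·m : {order}")
```

For an 8-bolt flange this yields the order 1, 6, 3, 8, 5, 2, 7, 4, so consecutive bolts are always roughly opposite one another, keeping the flange faces parallel as the load builds.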
Fig. 8.3  Cross-sectional view of a flange-tube joint before and after welding (see Ref. [22]). A small amount of distortion can occur if the tube is not welded on the neutral plane of the flange.

A torque wrench should be used to tighten the fasteners on vacuum viewports, and preferably also on large metal-sealed flanges in general. (See also page 166.) It is important to keep in mind that over-tightening clamps or bolts will not fix leaks, and is likely to result in damage.
Leaks caused by distortion of flanges during welding
Leaks sometimes occur at the junction between mating flanges on welded vacuum components, due to distortion of one or both of the flanges. This distortion can take place if a flange has been improperly welded onto a tube, owing to shrinkage of the weld metal (see Fig. 8.3). Distortion is an especially troublesome problem with metal-sealed flanges. Its prevention is discussed in Ref. [22].
A good habit in the event of a leak
When a leak suddenly occurs in a vacuum system, it is often a good idea to check the last seal that was disturbed. Such leaks are frequently the direct result of some human action (e.g. removing and replacing an O-ring) [46].
8.2.8.2 O-rings
Introduction
Circular elastomeric seals with a circular cross-section (i.e. having a toroidal overall shape) are commonly referred to as “O-rings.” They have the advantage, over many other types of seal, of being highly resilient, and therefore more easily able to accommodate irregularities of the hard sealing faces. An O-ring can be compressed to about 70% of its initial section diameter without being overstressed [22]. Furthermore, unlike gaskets with other (e.g. rectangular) cross-sections, they are able to seal in a multitude of different directions across any diameter. Hence, as long as environmental conditions (mainly the temperature) are not too extreme, O-rings tend to be the seals of choice.
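The ~30% limit on compression mentioned above (compression to about 70% of the section diameter) translates directly into a groove depth for a given cord diameter. The helper below is a simplified sketch: the 25% default squeeze and the example cord size are illustrative assumptions, and real groove design must also consider groove width and volume fill.

```python
def groove_depth(cord_diameter_mm: float, squeeze_fraction: float = 0.25) -> float:
    """Depth of a rectangular O-ring groove for a chosen squeeze.
    The text notes an O-ring can be compressed to about 70% of its
    section diameter (~30% squeeze) without being overstressed; a
    moderate squeeze of ~25% is used here as an illustrative default."""
    if not 0.0 < squeeze_fraction <= 0.30:
        raise ValueError("squeeze outside the safe range suggested by the text")
    return cord_diameter_mm * (1.0 - squeeze_fraction)

# Example: a nominal 3.53 mm cord at the default 25% squeeze
print(f"{groove_depth(3.53):.2f} mm")
```

Keeping the squeeze well inside the 30% limit leaves margin for tolerances on the cord diameter and the machined groove.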
Damage and deterioration
Damaged O-rings are often a cause of leaks [42]. O-rings can be cut while they are being fitted to a sealing surface, as a result of being pulled over sharp corners of O-ring grooves or steps. Such corners should be rounded off, to a radius of about 0.13 mm [22]. Screw threads with burrs are another potential hazard. Deterioration of the O-ring material takes the form of cracking and loss of resilience (hardening). This degradation is accelerated by certain environmental factors, including the presence of ozone or other pollutants in the air, ultraviolet light, and elevated temperatures. Ozone attack is often manifested by the formation of small surface cracks. Stress in the O-ring promotes this phenomenon, and cracks are usually oriented perpendicular to the direction of the stress. (In the case of circular O-rings, this means that some of the cracks are likely to be oriented radially, and therefore liable to cause leaks.) Elevated temperatures can cause hardening and cracking or pitting of the O-ring material. Environments containing high levels of nuclear radiation (such as gamma rays) also cause the deterioration of O-ring materials [28, 47]. Unless special precautions are taken, most common O-rings (except ones made of Viton and other inert materials) will degrade over time, even under what appear to be benign conditions. If O-rings are compressed between flanges over long periods of time, permanent deformation, known as “compression set,” can take place. This behavior is worsened by elevated temperatures (especially), excessive initial levels of compression, and the presence of nuclear radiation [47]. Compression set causes an O-ring to lose its circular cross-section, and acquire a flattened shape. O-rings that have undergone a large amount of compression set lose their ability to exert a sealing force, and may start to leak.
Materials properties and selection
One O-ring elastomer that is very vulnerable to degradation is “nitrile” or “Buna N” rubber. Nitrile is generally the most common O-ring material. Its installed life is typically in the range 1–5 years [39]. High-voltage equipment can be a prolific source of ozone, and nitrile O-rings should not be used near such equipment or in the presence of ultraviolet light. Nitrile rubber that is under stress can undergo cracking in less than a year even in the presence of the relatively small concentrations of ozone normally present in the atmosphere [39]. Ozone is more prevalent at high altitudes, and in polluted air [47]. Some common O-ring elastomers that provide much better durability than nitrile under such conditions are fluorocarbon types, such as Viton (e.g. Viton-A or -E60C). This material is resistant to damage by ozone, ultraviolet light, many harsh gases and chemicals, and elevated temperatures (150 °C, vs. 85 °C for nitrile), and has a long installed life (15–20 years) [39]. In fact, the performance and reliability advantages of Viton O-rings so much outweigh their higher cost that they should generally be used without hesitation in laboratory applications. Even more extreme conditions can be withstood by perfluorocarbon elastomers such as Kalrez®. However, these are very expensive.
With regard to chemical compatibility, there is no “universal” O-ring elastomer, and each material must be matched to the conditions to which it will be exposed. (Nevertheless, Kalrez® does come close to being such a universal material.) The selection of elastomers for use in vacuum applications is discussed in Ref. [47]. If special elastomers such as Viton are to be used in a laboratory along with ordinary nitrile ones, it is recommended that the former be obtained in some distinctive non-black color, to make identification straightforward. Resistance to damage by mechanical abrasion is a characteristic that should be given some consideration during the selection of O-rings. (Although others, such as heat resistance, usually take precedence.) There is considerable variation in this property amongst the different O-ring materials. As is often the case, there may be a tradeoff between abrasion resistance and other durability characteristics, such as the ability to withstand high temperatures and harsh chemicals. Among the more common materials, polyurethane and perfluorocarbon rubbers exhibit excellent resistance to abrasion and tearing. Nitrile, Viton, and ethylene propylene rubbers are good in this regard. Silicone rubber has poor abrasion resistance and a low tear-strength [39]. Consequently, silicone is not normally considered to be an adequate vacuum O-ring material, despite its advantages in other respects (e.g. long life and high temperature resistance). One advantage of nitrile rubber is its tendency to resist compression set. Viton is not as good in this regard [39]. However, of the two commonly available compositions (Viton-A and Viton-E60C), the latter is much better. It is possible to obtain O-rings made of solid PTFE, or Teflon. This material is, however, not an elastomer but a somewhat ductile plastic, and has a very low resilience compared to other O-ring materials.
Nevertheless, it can be helpful in some situations, because it is highly inert and able to function over a very wide temperature range. In particular, PTFE O-rings can be useful in certain cryogenic applications (but generally not as vacuum seals), because PTFE retains some ductility even at the lowest temperatures, at which normal O-ring elastomers become hard and brittle. The use of elastomer O-rings that are provided with a thin jacket of PTFE (intended to provide resistance against harsh chemicals) should be avoided, since the jacket will eventually crack and expose the core of the O-ring to rapid attack in the presence of such chemicals [41]. In obtaining O-ring vacuum fittings, such as “tees,” “elbows,” etc., one often has a choice between aluminum and stainless steel as the materials. While aluminum fittings are cheaper than stainless steel ones, the latter are much more resistant to the formation of scratches on their sealing surfaces.
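As an illustration of the selection trade-offs above, the figures quoted in this section (temperature limits, installed life, ozone/UV resistance) can be collected into a small lookup table. This is a toy sketch only; real selection must also weigh chemical compatibility, abrasion resistance, and compression set, as discussed.

```python
# Property figures below are those quoted in the text for nitrile and
# Viton; the table structure and helper function are illustrative.
ELASTOMERS = {
    # name: (max service temp in °C, installed life in years, ozone/UV resistant)
    "nitrile (Buna-N)": (85, (1, 5), False),
    "Viton":            (150, (15, 20), True),
}

def suitable(max_temp_c: float, ozone_or_uv: bool) -> list[str]:
    """Return the elastomers from the (small) table that meet the
    stated temperature and ozone/UV exposure conditions."""
    return [name for name, (tmax, _life, ozone_ok) in ELASTOMERS.items()
            if max_temp_c <= tmax and (ozone_ok or not ozone_or_uv)]

print(suitable(max_temp_c=100, ozone_or_uv=True))   # ['Viton']
print(suitable(max_temp_c=60, ozone_or_uv=False))   # both materials qualify
```

A real version of such a table would include many more materials (ethylene propylene, polyurethane, silicone, Kalrez®) and more properties, but even this toy form shows why Viton is the default recommendation near high-voltage equipment.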
Installation and removal
When O-rings are being installed in O-ring grooves, they should not be allowed to become twisted or kinked. They should not be rolled, but gently pulled, into position [41]. It is not advisable to use an O-ring with a major diameter that is too small, so that it is necessary to stretch it over the inside wall of the O-ring groove [38]. This approach (which is the most common mistake in using O-rings) decreases the minor (cross-sectional) diameter of the O-ring below what is acceptable for effective sealing. It also tends to open up irregularities
in the O-ring, which then act as leak channels. The excessive stress in the O-ring also makes it more vulnerable to attack by ozone. Lubrication of O-rings with vacuum grease is generally not necessary if these are to be used in static seals [22]. In fact, grease tends to attract debris, and therefore is more of a liability than an asset. Also, the grease itself, and any gases trapped within it, can act as contaminants in a vacuum system. If grease must be employed in order to allow an O-ring to be installed in a difficult location, only a very thin coating should be used. Ethanol may be suitable as a temporary lubricant for this purpose. The application of grease to O-rings is sometimes done in an emergency to allow these to seal effectively against surfaces that have become scratched. However, this is not a good long-term solution to such problems [38]. Some lubricants can damage O-ring materials. For example, the silicone oil in silicone grease will attack silicone rubber, and mineral oil attacks ethylene propylene rubber [41]. As discussed in more detail on page 97, there are numerous other reasons for avoiding the use of silicone lubricants in vacuum systems. O-rings should not be removed from their grooves with metal tools, because of the risk of damage to the sealing surface and the O-ring. Plastic or wooden objects, such as a piece of plastic card or a toothpick, should be used for this purpose [44].
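The undersized-O-ring problem described above can be checked numerically: the fractional stretch follows from the ring and groove diameters, and volume conservation gives a rough estimate of how much the cross-section thins (d' = d/sqrt(1 + stretch)). The ~5% acceptance limit used here is a commonly cited rule of thumb rather than a figure from this text, and the example dimensions are hypothetical.

```python
import math

def stretch_report(ring_id_mm: float, groove_id_mm: float, cord_mm: float):
    """Fractional stretch needed to seat an O-ring on a groove's inner
    wall, the estimated thinned cord diameter (volume conservation),
    and whether the stretch is within a ~5% rule-of-thumb limit."""
    stretch = (groove_id_mm - ring_id_mm) / ring_id_mm
    thinned = cord_mm / math.sqrt(1.0 + stretch) if stretch > 0 else cord_mm
    acceptable = stretch <= 0.05
    return stretch, thinned, acceptable

# Hypothetical example: a 94 mm ID ring forced onto a 100 mm groove wall
s, d, ok = stretch_report(ring_id_mm=94.0, groove_id_mm=100.0, cord_mm=3.53)
print(f"stretch = {s:.1%}, cord thins to {d:.2f} mm, acceptable: {ok}")
```

In this example the ~6% stretch thins the cord noticeably, illustrating why an O-ring of the correct major diameter should be chosen instead of stretching a smaller one into place.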
Inspection
It is a good idea periodically to inspect O-rings that are subjected to potentially damaging mechanical treatment, or exposed to harsh environments. The O-ring should be examined for signs of surface damage, such as cracks or tears, in a slightly stretched state. This is done by gently pulling the O-ring between the thumb and forefinger. Evidence of permanent deformation should also be noted. If there is any surface degradation, or if the O-ring no longer has a circular cross-section (i.e. it has edges), then it should be replaced [38].
8.2.8.3 Flat metal gasket seals of the “ConFlat®” or “CF” design
A method of forming a seal in an annular copper gasket, by squeezing it between two stainless steel flanges equipped with knife edges, has become almost universally adopted for use in ultrahigh-vacuum systems. When the flanges have been correctly assembled and bolted together, this configuration allows very high pressures to be exerted on the copper, thereby forcing it to flow and fill surface imperfections. As a result, CF seals tend to be unaffected by scratches in the gasket or the flange surface that would cause leaks in other sealing devices. For this and other reasons (e.g. the ability to operate over a very large temperature range: −196 °C to 500 °C), CF seals have acquired a reputation for being highly reliable [28]. The main disadvantages of these devices are their inherent bulk, the non-reusability of the copper gaskets, and the need to fit and tighten a large number of bolts when the flanges are assembled. (It is actually possible to reuse the copper gaskets in some cases. However, this practice is not recommended if high levels of sealing reliability are desired [44].) The above comments about CF flange reliability should not be taken as an indication that they can be subjected to abuse. They should be given the same amount of care as other
Fig. 8.4  Cross-sectional view of a metal-gasket face-sealed fitting, showing the stainless-steel flanges, copper gasket, and beaded edge. A pair of special threaded fasteners (not shown) is used to hold the two flanges together.
sealing devices, as discussed above. It is particularly important to protect knife-edges from nicks or other damage. In order to prevent the lowering of sealing forces during bakeout, which may possibly result in leaks, the rate of temperature rise of a CF flange assembly should not exceed 150 °C per hour [44]. An additional point of interest concerns an unusual mode of sensitivity to the presence of contamination. If contaminants are present on the surfaces of gaskets that are subjected to repeated bakeouts, these can lead to oxidation of the copper, which can cause tiny leak paths to open up [28]. Also, if copper gaskets are subjected to baking for long periods, their surfaces tend to form heavy, but loosely adhering, oxide layers. These oxides can flake off when a gasket is removed from its flange, and cause contamination of the inside of the vacuum system and its flange sealing surfaces [44]. This problem can be prevented by using silver-plated copper gaskets.
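The 150 °C-per-hour limit quoted above implies a minimum warm-up time for any given bakeout temperature, as in this small sketch (the 250 °C bakeout target in the example is hypothetical):

```python
def min_ramp_time_h(start_c: float, bake_c: float,
                    max_rate_c_per_h: float = 150.0) -> float:
    """Minimum time to bring a CF flange assembly to bakeout temperature
    without exceeding the ~150 °C per hour rise quoted in the text."""
    return max(0.0, (bake_c - start_c) / max_rate_c_per_h)

# Example: from 20 °C room temperature to a 250 °C bakeout
print(f"{min_ramp_time_h(20.0, 250.0):.2f} h")  # at least ~1.53 h
```

A bakeout controller would normally be programmed with such a ramp, rather than switching the heaters on at full power.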
8.2.8.4 Metal-gasket face-sealed fittings for small-diameter tubing
In systems involving small-diameter tubes that are used for supplying high-purity gases in a laboratory, connections are often made with flat copper gaskets sandwiched between stainless-steel flanges. These devices often employ a “beaded-edge” or “radiused knife-edge” configuration to make the seal (see Fig. 8.4). These beaded-edge devices are UHV compatible, and capable of providing seals that are helium leak-tight to 4 × 10⁻¹⁰ Pa·m³·s⁻¹. While they are superficially similar to CF flanges in their use of a knife-edge, beaded-edge seals lack crucial features that make CF flanges highly reliable. On the contrary, beaded-edge seals do not tolerate scratches or other damage on the gaskets or flange sealing surfaces [39, 48]. Furthermore, they should be kept substantially free of contaminants such as particulate matter. However, like CF flanges, they can withstand relatively harsh environmental conditions such as extremes in temperature.
Fig. 8.5  Cross-sectional view of a Helicoflex® seal, in position between a pair of flanges, showing the outer jacket, helical spring, and inner lining. The axis of the helical spring is perpendicular to the surface of the page.
8.2.8.5 Helicoflex® metal O-ring seals
A number of all-metal seals are available which mimic (at least to some extent) the behavior of ordinary rubber O-rings. One such device, called a “Helicoflex® seal,” employs a helical metal spring that forms a ring, which exerts pressure on a C-shaped layer of soft metal (such as aluminum or silver) on the outer surface. Helicoflex® seals are compressed between two flat flange surfaces (see Fig. 8.5). Owing to their all-metal construction, they are UHV compatible.

The main advantages of Helicoflex® seals over CF ones are: (a) they can be used in flanges similar to those that accept O-rings (i.e. they do not require knife-edges), (b) the flanges do not have to be circular, (c) comparatively reliable seals with very large dimensions can be created, and (d) the flanges do not need to be nearly so bulky as CF types. Also, Helicoflex® seals can withstand very large pressure differentials.

Like CF seals, these devices can withstand extreme environmental conditions. However, they are in practice not nearly as reliable as CF seals, and are susceptible to sealing-surface damage and contamination. In this regard, they are also not so reliable as ordinary rubber O-rings. The surface finish on the flanges makes a difference – this must be neither too rough nor too smooth. Selection of the material for the outer surface of the seal (the outer jacket) is also important. For example, in one application, silver was found to be significantly superior to aluminum with regard to leak resistance [49].

Improved leak resistance can be achieved by using Helicoflex® seals with a slightly different shape from the standard type illustrated in Fig. 8.5. These devices, called “Helicoflex Delta® seals,” have two small Δ-shaped ridges on the sealing surfaces, which result in higher contact pressures between the seal and its mating flanges. They can be helium leak-tight to a level of 10⁻¹⁴ Pa·m³·s⁻¹.
8.2.8.6 Indium seals for cryogenic applications
In low-temperature work, seals with rubber O-rings must generally be avoided because of the complete lack of resilience of elastomers at such temperatures. One alternative is to use CF flanges. However, the considerable bulk of these devices makes them incompatible with most types of cryogenic equipment, where space is usually restricted, and the need to minimize mass in order to reduce cooling requirements is important. One method that satisfies virtually all the requirements for low-temperature work involves the use of extruded indium wire as the sealing material. A length of such wire is formed into a loop on a flange, and its ends are crossed over one another. When the wire is compressed between opposing flanges, it flows into imperfections in their sealing surfaces in much the same way as the copper in a CF seal does. In the case of indium seals, however, the indium wire is sufficiently soft that it is not necessary to use knife-edges. These devices are capable of providing vacuum-tight (and even superfluid-tight) seals with excellent reliability – even in the presence of moderate amounts of damage to the flange surfaces. Of the various gasket materials that could be used for cryogenic sealing, indium in the form of wire has proved to be the most consistently successful [50].³ (Nevertheless, as in the case of other reliable seal designs, it is generally not advisable to tempt fate by allowing sealing surfaces to become damaged.) A frequent cause of failures of indium seals (when they do occur) is the insufficient tightening of the flange screws and compression of the indium [42]. Other factors, which reduce the reliability of seals in general (as discussed above), can also sometimes be troublesome. Techniques for making and unmaking indium seals are described in Ref. [42].
Problems can also arise from ignoring differences in thermal contraction of the various components of the seal (flanges, screws, and indium wire), which can be very important when these devices are cooled from room temperature to that of liquid helium (−269 °C).⁴ Unlike O-ring rubber, indium cannot function as a spring material. As with CF seals, this property must normally be provided by the screws that hold the flanges together. If the thermal contraction of the screws is too small relative to that of the flanges, gaps may open up when the setup is cooled. For example, this can happen if carbon-steel screws are used, when the flanges themselves are made of stainless steel. Generally, the flanges and the screws in an indium joint should be made of the same materials. If the use of such problematic material combinations is unavoidable, steps can be taken to reduce the effects of thermal contraction. One method is to place a washer with a small thermal expansion coefficient (made of, e.g., titanium) between the screw head and the flange. Another (probably superior) technique that is used in industrial cryogenic applications is to use a stack of Belleville spring washers to ensure that the indium is always loaded, despite changes in the dimensions of the various parts of the seal [50]. Slow cooling of the apparatus is desirable in order to avoid leaks due to differential contraction. An example has been provided in which two mating flanges were made of
³ Cryogenic-seal gaskets made of various plastics are sometimes described in the literature. These are often not very reliable, and are probably best avoided.
⁴ Most thermal contraction takes place between room temperature and that of liquid nitrogen. If an indium seal works reliably at the temperature of liquid nitrogen, it will probably do so at that of liquid helium.
dissimilar materials (brass and stainless steel) [51]. A leak-tight seal could always be maintained if the apparatus was cooled slowly (over half an hour) to liquid-nitrogen temperature, but never if this was done suddenly. On the other hand, if both flanges were made of brass, the apparatus could be cooled rapidly without causing any leaks. One of the reasons for the success of the classical indium seal design is that the indium wire forms a metallurgical bond with the flange surfaces [52]. In this sense, the technique is really a form of soldering at ambient temperatures without the use of flux, and with mechanical support provided by the flanges and their screws. In order to allow effective bonding to take place, the flange surfaces should normally be completely free of oils, greases, or other contaminants that could otherwise inhibit it. (However, exceptions are sometimes made in the interests of convenience – see below.) One disadvantage of this bonding behavior is that the flanges must often be pulled apart with considerable force. This is usually done with the aid of special “jacking screws” that are built into them. Also, it is generally necessary to scrape the remains of the indium off the flange surfaces with a soft (non-metallic) tool before they can be reused. This procedure can be somewhat time consuming. In order to make it easier to separate flanges, in situations where this must be done frequently, some workers intentionally coat the flange surfaces with a thin layer of grease before mounting the indium wire. This also makes it easier to remove the indium afterwards. Silicone grease is normally used for this purpose [53], although Apiezon-N grease is also suitable [54]. The resulting seals seem to be reasonably reliable. (NB: If silicone grease is applied to a flange, it can be extremely difficult to completely remove later on – see page 97.) 
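The screw-versus-flange contraction mismatch discussed above can be estimated from integrated thermal contractions. In the sketch below, the ΔL/L values are representative figures from the cryogenic literature (not from this text), and the 10 mm clamped length is a hypothetical example; for design work, tabulated contraction data for the actual alloys should be used.

```python
# Illustrative comparison of screw vs. flange thermal contraction for an
# indium-sealed joint cooled from room temperature to liquid-helium
# temperature. The integrated contractions (ΔL/L, 293 K -> 4 K) below are
# representative literature values, assumed for this example.
CONTRACTION = {            # ΔL/L, dimensionless
    "stainless steel": 0.0030,
    "brass":           0.0038,
    "carbon steel":    0.0020,
    "titanium":        0.0015,
}

def clamp_gap_um(flange_mat: str, screw_mat: str, clamped_length_mm: float) -> float:
    """Positive result: the flange stack shrinks more than the screws, so
    clamping force is lost (a gap tends to open). Negative result: the
    screws shrink more, and the joint tightens on cooling."""
    dl = (CONTRACTION[flange_mat] - CONTRACTION[screw_mat]) * clamped_length_mm
    return dl * 1000.0  # mm -> µm

# Stainless-steel flanges with carbon-steel screws, 10 mm clamped length:
print(f"{clamp_gap_um('stainless steel', 'carbon steel', 10.0):+.0f} µm")
```

Even a few micrometres of lost compression can be significant for a thin indium layer, which is why matched materials, low-expansion washers, or Belleville spring stacks are recommended above.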
If indium seals of the highest reliability are desired, consideration should be given to the use of a flange design in which the indium is “captured” or “trapped.” That is, the flanges are configured so that it is difficult for the indium to escape the clamping pressure by flowing away from the sealing area. (This principle is also used to good effect in CF flanges.) One example of an indium sealing arrangement that incorporates this approach is the “ring and groove seal” [52]. In this design, the indium wire is placed in a groove in one flange (similar to an O-ring groove), and is compressed by a corresponding protruding ring on the opposite flange. Such a configuration is known as a “fully trapped” seal, in contrast with other flange designs, in which the indium is only partially trapped, or completely untrapped (see Fig. 8.6). One disadvantage with the use of a fully trapped seal design is that it may be difficult to remove indium from the sealing surfaces after the flanges have been separated, because of the convoluted form of these surfaces. In practice, the use of a “partially trapped” flange design may often be preferable, because it provides for easier removal of the indium, and because the reliability of seals made using such flanges is usually adequate. The resulting improved ease of use is of particular importance in flanges that have to be joined and unjoined repeatedly. The softness of indium, and therefore its ability to deform under pressure, increases with its level of purity. As a result, it is possible to make reliable seals at lower sealing pressures by using indium with smaller impurity concentrations. Changing from 99.9% to 99.999% pure material can make a significant difference to seal quality [55]. In most applications,
Fig. 8.6  Some indium seal configurations: untrapped, partially trapped, and fully trapped (“ring and groove”). The nomenclature is that of Ref. [52]. Indium thicknesses have been greatly exaggerated for clarity.
99.99% pure indium should be adequate. Indium can be softened by gently warming it. This may be useful if the vacuum flanges are made of a soft material, such as copper, which may otherwise be prone to distortion when the flange screws are tightened. Owing to the relatively high cost of indium, efforts are often made to recycle it within the laboratory. This involves melting down old seals, casting the molten indium in a suitable die, and then extruding it to form wire. (A procedure for doing this is described in Ref. [56].) While such a process usually results in perfectly satisfactory seal material if done carefully, there is a possibility that foreign matter (dust, grease, etc.) will be accidentally embedded in the indium. Also, the purity of the indium is likely to diminish. For these reasons, new indium is preferred if seals of the highest reliability are required. Indium has a tendency to creep (i.e. deform slowly but continuously under stress) at room temperature [22]. For this reason, if an untrapped flange design is being used, and spring washers are not present to maintain pressure as the indium flows, it may be necessary to retighten the flange bolts at some point. If a circular seal with a diameter of greater than
about 50 mm is being used, the bolts should be retightened about an hour after the initial tightening process and prior to cooling the apparatus [42]. Loose and leaking indium seals are often found in certain cryogenic instruments after they have been transported from the factory [57]. The tendency for pulsating loads on the sealing faces to reduce the thickness of the indium [50] could at least partly account for this.
8.2.8.7 Conical taper joints for cryogenic applications
Conical taper joints (or greased cone seals) are used in some cryogenic instruments which require a vacuum seal that must be made and unmade on a regular basis. Such a seal involves two surfaces, each having the shape of a truncated cone with a shallow taper angle (5–15°), which fit snugly together when joined [53]. One of the surfaces is coated with a layer of grease, and this provides the seal when the two are brought into contact. In comparison with indium seals, conical taper joints have the advantages of being compact, and (in some cases) not requiring screws or bolts to hold the mating surfaces together. Also, they are nearly (but perhaps not quite) as reliable as a well-designed indium seal [58]. A potential problem with conical taper joints is that the grease (which is usually a silicone high-vacuum type) tends to spread over everything in the vicinity of the seal [58]. Silicone grease is extremely difficult to remove completely from most surfaces. Moreover, its presence on leak-prone items like solder joints can hamper any necessary leak-detection operations (see page 171). Silicone grease, as with liquid silicones in general, can also cause other problems (see page 97). The general view appears to be that, as a sealant, silicone high-vacuum grease is reliable. However, some workers have found that a 50:50 mixture of glycerol and soap makes a much more reliable sealant than either silicone or Apiezon N vacuum grease [59]. Another researcher notes that Apiezon greases of any type are highly unreliable in comparison with Dow-Corning silicone grease [53]. The reliability of glycerol–soap mixtures depends on certain details of their production (these are discussed in Ref. [60]). Such mixtures can be removed from sealing surfaces using alcohol.
8.2.8.8 Weld lip connections
The weld lip vacuum seal combines the high reliability associated with permanent (non-demountable) welded stainless-steel joints with (to a limited extent) the convenience of demountable connections such as CF flanges. Weld lip seals (also called “cut-weld joints”) consist of an annular plate, which is TIG (tungsten inert gas) welded on its inner radius to the end of one of the two connecting tubes, and on its outer radius to its mating counterpart on the opposing tube. The connection is unmade by cutting away the weld metal joining the two plates using a special tool, or a hand grinder. Such joints can be made and unmade many times (e.g. 15 [22]) before the replacement of the assembly is needed. Unlike other demountable seals, weld lip connections do not suffer from leaks due to damaged sealing surfaces or damaged seals, and are considered to be very reliable [43]. They are used in large
Mechanical devices and systems
vacuum equipment, such as nuclear-fusion reactors and particle accelerators, which contain numerous demountable connections that might otherwise be troublesome. An extensive discussion can be found in Ref. [22].
8.2.8.9 Pipe threads Pipe thread fittings are widely used sealing devices in certain items of plumbing, such as hydraulic and compressed air lines, in which tiny leaks are of little or no consequence. They consist of tapered threads on the mating members that tighten to produce a sealing action when they are screwed together. The sealing element is not a well-defined gasket as such, but a malleable material such as PTFE tape, or a hardenable liquid (known as “pipe sealant”). These substances are respectively either wrapped around or smeared on the threads before the fittings are mated. Because of the long and convoluted geometry of their sealing surfaces, pipe thread fittings are perhaps the worst type of sealing device, with regards to freedom from leaks [61]. As such, they have no place in high-vacuum systems or equipment for handling high-purity gases. Even in liquid- or gas-handling systems for which the leak requirements are not stringent, it is often better to avoid pipe thread fittings in favor of devices that use O-rings, or some alternative, more reliable, sealing arrangement [13]. One O-ring device that is useful in such situations is the SAE (Society of Automotive Engineers) straight thread fitting. In cases where pipe threads must be employed, it is best not to use PTFE tape as a sealing medium. Anaerobic pipe sealants are easier to use, and provide greater assurance against leaks [62]. Also, pieces of PTFE tape can detach from a fitting and clog gas or fluid line components – see page 262.
8.2.9 Dynamic seals and motion feedthroughs 8.2.9.1 Devices employing sliding seals The transmission of mechanical motion into a hermetically sealed environment, such as a vacuum chamber, is a frequent requirement in experimental apparatus. Perhaps the most common example of a device that has such a function is a valve. The most elementary method of transmitting rotary or linear motion into a closed chamber involves the use of sliding (or dynamic) seals. For example, a circular shaft can rotate or translate within an O-ring seal, with air at atmospheric pressure on one side of the seal, and a vacuum on the other. In general, such sliding seals are not very reliable devices, with regards to freedom from leaks. This is especially true of those used for linear translation, which are useful only at pressures above 10⁻⁴ Pa [48]. The failure modes of dynamic seals (due to particulates, scratches on sealing surfaces, etc.) are similar to those of static ones. However, one also has to contend with the possibility of damage to the seals and sealing surfaces arising during the operation of these devices. In order to avoid rapid degradation of the seal from abrasion, the moving shaft must be polished, and kept in this condition. Sliding seal motion feedthroughs should be operated intermittently, not continuously.
8.2 Mechanical devices
Unlike static seals, dynamic O-ring seals must generally be lubricated with grease. The exceptions are those that use PTFE (which is self-lubricating) as the sealing material. Silicone and perfluoropolyether5 greases, rather than ones based on hydrocarbons, are the preferred types, because with the former there is a reduced tendency for the shaft to stick to the O-ring after long periods without movement [44]. Otherwise, both silicone and perfluorocarbon lubricants have serious disadvantages, which are discussed on page 97. In the case of sliding seals undergoing linear motion, a potential problem with lubricants is that they can act as contaminants after they pass through the seal. At any rate, greases will carry small amounts of gas with them into the vacuum system in the form of bubbles [48]. The use of multiple O-rings can help solve some of these problems to a limited extent. A very effective way of improving the leak characteristics of sliding seals is to use a guard vacuum, whereby two seals are used, and the space between them is pumped (see the discussion on page 164).
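The payoff of a guard vacuum can be illustrated with a back-of-envelope estimate: to first approximation the leak rate through a seal scales with the pressure difference across it, so pumping the interspace between two seals to rough vacuum cuts the driving pressure on the inner seal by several orders of magnitude. A minimal sketch, assuming a linear pressure dependence and using illustrative numbers (neither is from the text):

```python
# Illustrative guard-vacuum estimate. Assumes the leak rate through a
# seal scales linearly with the pressure difference across it (a first
# approximation); all numbers are illustrative, not from the text.

P_ATM = 1.0e5  # Pa, atmospheric pressure

def inner_seal_leak(q_single, p_guard, p_atm=P_ATM):
    """Leak rate into the chamber when the interspace between two
    identical seals is pumped to p_guard, given the leak rate q_single
    of one such seal exposed to the full atmospheric pressure."""
    return q_single * (p_guard / p_atm)

# A single seal leaking 1e-6 Pa*m^3/s against atmosphere, with the
# guard space pumped to 1 Pa by a rough pump:
q = inner_seal_leak(1.0e-6, p_guard=1.0)  # ~1e-11 Pa*m^3/s
```

On this simple model, a modest rotary pump on the interspace improves the effective leak tightness of the inner seal by a factor of about 10⁵.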
8.2.9.2 Metal bellows motion-feedthroughs In devices and systems in which even small leaks cannot be tolerated, and in which the use of guard vacuums is an undesirable complication (e.g. high-vacuum valves), metal bellows can be used to transmit movement without the need for sliding seals. Both linear and (surprisingly, with a special design) rotary motion can be passed into the hermetically sealed environment by such means – see Fig. 8.7 [63]. Bellows-based motion feedthroughs are really only suitable for low speed motion. They are particularly useful in ultrahigh-vacuum equipment, where the need for bakeout limits the utility of some of the alternative methods. However, because they are easy to use, bellows feedthroughs are often employed generally in vacuum equipment, even when a sliding-seal arrangement could be made to work. Although the leak properties of bellows-based mechanical feedthroughs are generally greatly superior to those of feedthroughs based on sliding seals (without a guard vacuum), bellows have some disadvantages that may limit their usefulness in some cases. For example (and especially in the case of edge-welded bellows) they can be relatively fragile, and are among the most leak-prone vacuum system components. In situations in which the bellows may be subjected to abuse and cannot be protected, feedthroughs based on other principles may be a better option. Bellows reliability problems are discussed on page 167.
8.2.9.3 Magnetic fluid seals One particularly reliable method of feeding rotary motion into a sealed environment involves the use of a “magnetic fluid seal.” In such a device, a fluid consisting of a suspension of fine magnetic particles in a low-vapor-pressure liquid is suspended in the gap between the rotating shaft and a collar by means of permanent magnets. This fluid is in intimate contact with the shaft and the collar, and is prevented from moving away by the magnetic field. Hence, it acts as a barrier that prevents the passage of air through the gap between these parts. This arrangement is much less prone to leaks than those involving sliding seals. 5
PFPE – e.g. Fomblin®.
Fig. 8.7 Schematic diagrams of feedthroughs, for: (a) linear motion, using edge-welded bellows (labeled parts: flange, linear bearing); (b) rotary motion, using rolled bellows (labeled parts: flange, shaft, bearing). (See Ref. [68].)
Furthermore, since there is no contact between the rotating and stationary parts, magnetic fluid seals do not undergo wear, and consequently have a very long lifespan. Unlike bellows feedthroughs, magnetic fluid seals can be used to transmit very high-speed rotary motion. On the downside, the sealing ability of these devices is liable to disruption in the presence of large external magnetic fields [43]. Magnetic fluid seals are also vulnerable to the entry of particulate matter into the gap between the shaft and the collar. More information on magnetic fluid seals can be found in Ref. [64].
8.2.9.4 Magnetic drives Another motion feedthrough arrangement, called a “magnetic drive” or “magnetic feedthrough,” uses a magnetic field to provide a coupling between moving magnetic members outside and inside the hermetically sealed environment [22]. This is the most reliable
of the various motion feedthrough approaches, with regards to freedom from leaks. Both rotary and linear motion can be transmitted – at very high speeds, if necessary. One version, which is often used for in-vacuum sample manipulation, consists of a moveable permanent magnet that is mounted in air on the outside of a non-magnetic stainless-steel tube. This tube is closed at one end and connected at the other to the vacuum apparatus. Inside the tube, within the vacuum environment, is another piece of magnetic material that is connected to the item to be moved. This type of device is very popular in ultrahigh-vacuum work, since it is simple, and lacks the vulnerabilities of other motion transmission systems. If high-speed rotary motion is required, or electrical control of the positioning is needed, the external permanent magnet can be replaced by electric coils, so that the resulting device is a form of electric motor (see e.g. Ref. [37]). The main disadvantage of these magnetic coupling arrangements is that they lack rigidity (i.e. the coupling is very soft) and the means to transmit very large forces or torques into the vacuum. One possible failure mode is the decoupling of the external permanent magnet or electric coils from the internal magnetic member due to unexpectedly high forces acting on the latter. If the device is being used to vertically position an object within the vacuum system, this event may result in the object falling – with potentially disastrous results.
8.2.9.5 Use of electric motors in the sealed environment One potentially very useful way of generating motion in a sealed environment is to use electric motors (or other electric actuators) located within it to produce the motion. This would appear to be contrary to the principle that trouble-prone things should be kept in the ambient environment. However, the advantages of direct drive, in situations that might otherwise require complicated mechanisms to take the motion where it is needed, may more than compensate for this drawback in some cases. (See the discussion on direct versus indirect drive mechanisms on page 222.) Also, by putting motors inside the system, one eliminates the need for mechanical feedthroughs. For example, in low-temperature experiments one sometimes needs to rotate the crystalline axes of material samples with respect to an applied magnetic field. A standard method of doing this might involve the use of a driveshaft, passing through a room temperature feedthrough at the top of the cryostat, and turning a rotation mechanism at the bottom. Because of the need to limit the passage of heat into the cryogenic environment, such driveshafts tend to be long and spindly. Hence, the rotation system as a whole may have considerable windup, stiction and backlash problems. In such situations, an electric motor, placed near the sample, can sometimes be used to advantage. In certain cases, there may be no space in the cryostat for driveshafts, or any other motion-transmitting device, and the use of a motor then becomes essential. The major problem with using electric motors in a vacuum environment is that of removing the heat produced during operation. If this is not adequately taken care of, the result is likely to be burnout of the motor windings. If the motor is being used under cryogenic conditions, a more likely result of not allowing for heat generation is that the required low-temperature conditions will be lost. However, in the case of experiments
operating at liquid helium temperatures, this difficulty can sometimes be avoided by using motors with superconducting windings. Small motors that are capable of operating in vacuum and/or cryogenic environments are available commercially. These can either be the usual types that make use of electromagnetic forces, or ones that work on the basis of piezoelectric effects. In the former case, “stepper motors” are generally preferred, because they lack the moving electric contacts (such as commutator brushes or slip rings) that can cause reliability problems. Furthermore, they make it possible to produce slow and well-controlled motions without the added complexity and potential reliability issues associated with a speed reduction system, such as a gearbox. Methods of modifying ordinary commercial motors so that they can operate in the aforementioned environments have also been discussed in the scientific instrumentation literature (see, e.g. Ref. [65]).
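Because there is no convective cooling in vacuum, a rough idea of how hot an unheatsunk motor runs can be had by balancing its dissipation against thermal radiation alone, P = εσA(T⁴ − T_wall⁴). The sketch below works under that simplifying assumption; all numbers are illustrative, and conduction through the mounting (which often dominates in practice) is ignored:

```python
# Steady-state temperature of a motor in vacuum when cooled by thermal
# radiation alone: P = eps * sigma * A * (T^4 - T_wall^4). Conduction
# through the mounting is neglected; all values are illustrative.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_equilibrium_temp(power_w, area_m2, emissivity, t_wall_k):
    """Temperature at which radiated power balances the dissipation."""
    t4 = power_w / (emissivity * SIGMA * area_m2) + t_wall_k ** 4
    return t4 ** 0.25

# A small motor dissipating 2 W over 50 cm^2 of surface with
# emissivity 0.5, facing 300 K chamber walls:
T = radiative_equilibrium_temp(2.0, 50e-4, 0.5, 300.0)  # roughly 386 K
```

Even a couple of watts lifts the windings well above room temperature on this estimate, which is why deliberate conductive heat-sinking (or, at liquid-helium temperatures, superconducting windings) matters.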
8.2.10 Valves 8.2.10.1 Introduction Valves are often trouble prone. They generally comprise several devices that perform separate functions, and each of these devices can exhibit its own reliability problems. There is the primary sealing or flow-controlling device, which is susceptible to leaks, blockages, or erratic operation. A mechanical motion-feedthrough arrangement will also frequently be present, and this can leak. In some cases (as with gate valves) there may also be a mechanism to transform a particular type of motion into some other kind (e.g. rotary to linear) that is suitable for performing the sealing or flow controlling operation. Depending on its complexity, and other factors, this mechanism may be vulnerable to the kinds of failures exhibited by mechanical devices in general. Such a mechanism may have to work in high- or ultrahigh-vacuum, under which conditions friction and wear are important issues. For laboratory applications, one should generally use top-quality valves. This is especially important if they are to be used in vacuum work where helium mass-spectrometer leak tightness is important, and/or if they are one of the unreliable types listed below. Some particularly unreliable valves, and their main failure modes, are: (a) pressure relief valves (internal leakage across the valve seat, failure to open at the intended pressure setting), (b) all-metal valves that must create a seal (internal leakage across the valve seat), (c) metering valves (erratic behavior, lack of flow control due to mechanical damage of precision parts, blockages), (d) needle valves used for non-metering purposes (internal leakage across the valve seat, blockages), (e) valves with sliding-seal motion feedthroughs (leaks at the feedthroughs), (f) gate valves (motion feedthrough leaks, malfunctioning of the gate mechanism, internal leakage across the valve seat), and (g) check valves (internal leakage across the valve seat).
Valve failures are often caused by human error. A classic example is the overtightening of a leaky valve, in what usually amounts to a futile attempt to stop the leak. Such actions frequently result in further damage to the valve. Hence, it is normally good practice to repair or replace such items without delay. One simple measure for preventing overtightening is to provide valves with handles which are sufficiently small that even the strongest user in a laboratory cannot damage them. Unfortunately, workers will sometimes bring out pliers and the like, in order to provide themselves with extra leverage. Therefore, especially in the case of delicate devices (such as needle valves), a better method is to install torque limiters on the valve stems (see page 224 and Ref. [66]). Valves that function by sliding of the sealing surfaces (e.g. ball valves) are generally highly resistant, if not immune, to overtightening. In the case of valves with pneumatic actuators, the analog of overtightening is to use an excessively high air pressure. Problems also frequently occur because of mistakes in determining (by visual inspection) whether a valve is opened or closed. In order to prevent such errors, some commercial valves are provided with visual indicators. (These are sometimes available as optional extras.) For example, certain manually operated high-vacuum valves display a red band beneath the handle when they are opened. Such arrangements are known as “mechanical position indicators.” If it is desirable to monitor their status remotely, valves can also be obtained with “electrical position indicators.” These contain limit switches, or similar devices, with contacts that close or open depending on the position of the valve mechanism. The status of valves that are taken from a fully closed to a fully open state in a quarter of a turn (such as ball valves) is usually easily discernible by the orientation of the handle.
If debris or particulates may be present in a vacuum system, valves should be positioned and oriented so that these contaminants cannot get inside them. Particular care should be taken when ion and titanium sublimation pumps are present, since these can release flakes of titanium. If a titanium sublimation pump has been injudiciously oriented with respect to a valve, it is also possible for sublimed titanium to deposit directly onto a sealing surface. Certain types of non-evaporable getter (“NEG”) pump tend to release dust. The sorption pumps used for rough-pumping UHV equipment may present a problem, since the molecular sieve within them can degrade and turn into a powder following prolonged use. This material tends to enter the vacuum system, and can foul the seats of nearby all-metal valves [63]. Aside from ensuring that such valves are not placed near a sorption pump, it is desirable to replace the molecular sieve in a pump as soon as evidence of powdering appears. Also, if the sorption pump is not handled correctly, molecular sieve can be blown into the vacuum chamber when the latter is vented to the atmosphere [67]. Before valves are installed in a system, they should be examined for the presence of contamination and debris. Unless the manufacturer has pre-cleaned the valve and placed it in protective packaging, it is normally a good idea to clean the flanges and the internal sealing surfaces. The exposure of valves to large variations in temperature (e.g. in bakeable vacuum systems) is a common cause of malfunctions [22]. Valves in high- and ultrahigh-vacuum systems can present problems due to the large amounts of contamination that may be contained in their internal mechanisms and interior volumes. This is especially true of valves containing complex drive systems and bellows
feedthroughs that are exposed to the vacuum, such as gate valves. During actuation, valves can release contaminants into the vacuum system as a result of sliding or rolling of surfaces in contact, exposure of previously unexposed surfaces, and leaks [22]. Valves are actuated by manual, electromagnetic, or pneumatic methods. Electromagnetic schemes usually involve the use of a solenoid, or (more rarely) an electric motor. Pneumatic arrangements comprise a piston that is moved within a cylinder by compressed air. These frequently also make use of electromagnetic actuation, because the compressed air is often activated by a solenoid valve. The most durable actuators are manually operated, using a rotating hand-wheel or a crank [15]. The primary sealing surfaces in valves (i.e. the valve seats) are generally static – not dynamic. Hence, they usually do not require lubrication. In the case of ball valves (in which sliding motion of the sealing surfaces takes place), one of the surfaces is PTFE, which acts as a solid lubricant. Sliding seals in motion feedthrough mechanisms can be a troublesome source of leaks. In good-quality valves designed for high- and ultrahigh-vacuum work, bellows-sealed motion feedthroughs are used.
8.2.10.2 Pressure relief devices Pressure relief valves Valves that are designed to prevent the development of excessive pressures inside closed spaces (such as vacuum vessels) are referred to as “pressure relief valves” or “safety relief valves.” They are usually very simple devices – often comprising, in essence, a plate that is pressed against an O-ring seal by means of a spring. If the gas or liquid pressure produces sufficient force to overcome the spring, the valve opens and the gas or liquid is allowed to flow – thereby reducing the pressure in the space. These simple pressure relief valves must be capable of operating under the influence of fluid pressure alone. Furthermore, they are often required to operate at relatively low pressures, and hence low actuation forces, in comparison with those needed to fully deform the elastomer seal. (Devices of this kind are commonly used in cryogenic and vacuum equipment.) Therefore, in contrast to other kinds of valve (such as manually operated types) that are capable of exerting large sealing forces, these low-pressure pressure relief valves tend to be prone to leaks. The same is also true of check valves (used to make liquids and gases flow in one direction only), which often have a very similar construction. Since leaks are often caused by contamination on the sealing surfaces, it is particularly important to ensure that pressure relief valves are kept clean. Contamination may be deposited in the valve during a pressure relief operation, in which gas from the vessel rapidly moves up the passageway leading to the valve – taking dust and debris from the vessel or the passageway with it. Cryopumps used in ultrahigh vacuum (UHV) systems usually contain a pressure relief valve. Leaks in these are often caused by charcoal dust that has migrated from the interior of the pump onto the valve O-ring during regeneration [68]. (Cold activated-charcoal is used to adsorb gases during cryopump operation.) 
Such leaks can be averted either by wiping the O-ring with a dry, lint-free cloth following regeneration, or by installing a particle filter
in line with the valve. In order to reduce outgassing and slightly lower the ultimate pressure of a UHV system, the pressure relief valve in a cryopump is sometimes replaced with a rupture disc. This has the side effect of reducing leak problems. A potential cause of sudden leaks in pressure relief valves is ageing of the O-ring material, which can cause it to harden or crack. (O-ring degradation is discussed on page 237.) If an O-ring becomes hard, the valve may provide leak-free service until a pressure relief operation takes place, but then not seal properly afterwards [69]. Other effects, such as thermal expansion and vibration, can also cause a valve with a hardened O-ring to lose its seal. Hence, O-rings in pressure relief valves must be replaced regularly. The use of Viton, rather than nitrile rubber, as the O-ring material can reduce or eliminate this requirement (see page 237). Another cause of relief valve leaks is incorrect reseating of the moving part after a relief operation. This can be caused by a temporary misalignment of the sealing surfaces. Alternatively, it may be due to damage to the valve seat, possibly owing to erosion, or perhaps “chattering.” The latter effect is the repeated and rapid opening and closing of the valve, which occurs when the pressure is too close to the relief pressure and pressure transients take place [70]. Distortion of relief valves due to misaligned piping can lead to both leaks and chattering [69]. One way of greatly reducing leak problems is to use a “supplementary-loaded safety valve.” These devices provide an additional force (using an electric solenoid or a pneumatic device) to ensure that the valve is completely closed when the pressure is below the threshold. When the pressure exceeds this value, the supplementary force is removed, and the relief valve operates normally. The extra force is only reapplied when the pressure drops back below the threshold [70]. 
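The weak sealing forces discussed above follow directly from the force balance in a spring-loaded relief valve: it cracks open when the fluid pressure acting on the seat area overcomes the spring preload, so a low set pressure implies a proportionally small seating force. A sketch of this arithmetic, with illustrative geometry and spring values (not from the text):

```python
# Force balance for a simple spring-loaded relief valve: it cracks
# open when pressure * seat area exceeds the spring preload. The
# geometry and spring values below are illustrative, not from the text.
import math

def cracking_pressure(spring_preload_n, seat_diameter_m):
    """Gauge pressure (Pa) at which the valve just starts to open."""
    seat_area = math.pi * (seat_diameter_m / 2.0) ** 2
    return spring_preload_n / seat_area

# A 10 mm diameter seat held shut by a 20 N spring preload:
p_crack = cracking_pressure(20.0, 10e-3)  # ~2.5e5 Pa (~2.5 bar gauge)
```

Run the same calculation backwards for a valve set to crack at a few tenths of a bar and the preload comes out at only a newton or two – far less than the seating force of a manually tightened seal, which is why these devices leak so readily.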
Since the primary purpose of a pressure relief valve is to prevent undue pressure buildups within a closed vessel, the inability to do so at the correct level is the most important failure mode. Such faults sometimes occur because the relief valve or the passageway between it and the vessel has become clogged. This can take place over time, if debris from the vessel gets blown into them during large pressure relief events [71]. Leaky pressure relief valves that are attached to evacuated cryogenic spaces can be very dangerous. Contaminants such as water vapor and air from the surrounding atmosphere will enter the cold areas and solidify – possibly blocking the relief valve passageway [72]. (See also the discussion on page 290.) The failure of pressure relief valves to operate on demand can be caused by sticky deposits in the moving parts that have been deposited from the liquid or gas handled by the valve. Other causes of this behavior are (a) damage to the sliding surfaces in the valve caused by vibration, chattering, or corrosion, and (b) foreign material in the part of the valve containing the spring (the “bonnet”), which prevents it from opening [70]. Because of all the above possibilities, it may be desirable to provide redundant pressure relief devices in some systems. After a pressure relief valve has operated, and the condition producing this action has been corrected, it is always good practice to test the device to ensure that it will open at the desired pressure, and that it is free of leaks [73]. Sometimes it may not be possible to know whether a relief event has occurred. Hence, for this and other reasons, such tests should be
carried out periodically in any case. Keep in mind the possibility that relief valves may be tampered with.
Rupture discs A relatively common method of relieving pressure, which does not present the same problems of leaks and other mechanical difficulties characteristic of ordinary pressure relief valves, is to use a “rupture disc.” These devices (also called “bursting discs”) are extremely simple. The most common version (the “forward-acting dome type”) consists of a thin metal dome membrane that is sealed over an orifice in the container. The concave side of the dome faces the pressure. When the pressure in the container reaches a predetermined value, the resulting tensile stresses in the disc will cause it to tear open, thereby releasing the gas or liquid. Rupture discs are considered to be very reliable pressure relief devices. Because they tend to be inherently free of leaks, and can be exposed to temperatures considerably beyond the limits of elastomers, they are often used in ultrahigh vacuum and cryogenic systems. Rupture discs have a number of drawbacks, which mean that they cannot always be used as a direct replacement for pressure relief valves. For example, rupture discs cannot be reused. Following a relief event, they must be replaced. Another potential problem is that, unlike a pressure relief valve, the orifice covered by the rupture disc does not re-close when the pressure diminishes. This might lead to a contamination problem in certain cases. For example, air can enter a cryogenic vacuum system. This difficulty can be overcome in some situations by placing a rupture disc and a relief valve in series in the pressure relief line. The rupture disc directly faces the clean environment, and the relief valve prevents the reverse flow of gas or liquid after a pressure relief event. Forward-acting rupture discs are susceptible to fatigue failure if the system being protected undergoes pressure fluctuations near the bursting pressure of the disc [70]. For this reason, they must be replaced periodically if such conditions occur. 
Also, owing to the small thickness of the rupture discs used in many situations, even very small amounts of corrosion can reduce their reliability. The manufacturers of rupture discs usually offer a selection of different disc materials to suit various corrosive conditions. A particular problem with some types of rupture disc, such as the elementary type described above, is that upon bursting they release fragments that can travel downstream, and possibly cause blockages there. Discs with a “scored tension” design do not have this problem [69]. A special type of rupture disc, with a “reverse-acting” design, has some important advantages over the usual forward-acting type. In the case of reverse-acting discs, the convex side of the dome faces the pressure. When the pressure threshold is exceeded, the dome collapses and flips sides. As it does so, it breaks open along score marks, or is cut open by knife blades built into the rupture disc holder. Such rupture discs can operate reliably, without fatigue, at a higher fraction (e.g. 90%) of their rupture pressure than conventional forward-acting types. (The most common, un-scored, forward-acting rupture discs are limited to operation at 70%–75% of their rupture pressure.) Furthermore, the disc material is thicker, so that reverse-acting discs are more resistant to corrosion [69]. Also, reverse-acting discs do not fragment when they rupture.
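The operating-pressure margins quoted above translate directly into a maximum recommended operating pressure for a given burst rating. A sketch using the ratios from the text (the conservative 70% end of the 70–75% range for un-scored forward-acting discs, 90% for reverse-acting ones; the dictionary keys are illustrative names):

```python
# Maximum recommended operating pressure for a rupture disc, using the
# operating ratios given in the text: 0.70 (conservative end of the
# 70-75% range) for un-scored forward-acting discs, 0.90 for
# reverse-acting discs.

OPERATING_RATIO = {
    "forward_unscored": 0.70,
    "reverse_acting": 0.90,
}

def max_operating_pressure(burst_pressure, disc_type):
    """Highest steady operating pressure for a disc of the given type."""
    return burst_pressure * OPERATING_RATIO[disc_type]

# For discs rated to burst at 10 bar:
p_fwd = max_operating_pressure(10.0, "forward_unscored")  # 7.0 bar
p_rev = max_operating_pressure(10.0, "reverse_acting")    # 9.0 bar
```

The practical consequence: for the same working pressure, a reverse-acting disc can be given a burst rating much closer to the working level, sharpening the protection it provides.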
8.2.10.3 All-metal valves In situations where sealing is required at extreme temperatures (such as in ultrahigh vacuum and cryogenic systems), valves with all-metal sealing surfaces are sometimes used. Because metals lack the resilience of elastomers such as Viton, valves with metal seals are generally very vulnerable to the presence of particulates and damage on the sealing surfaces. Usually, all-metal valves require better surface finishes than are needed in the case of those that have resilient seals. The more successful configurations often involve one sealing surface that is made of some hard material, such as hardened beryllium copper, and another composed of a relatively soft metal, such as silver (as in the case of certain cryogenic valves [65]). In ultrahigh-vacuum work, valves comprising a copper gasket that bears down on a stainless steel knife-edge are fairly common. As long as they have been well designed and manufactured, such devices can provide helium leak-tight seals across their valve seats with good reliability, even after bakeout. Damage to all-metal valves in general is frequently caused by the presence of grit or metal particles on the sealing surfaces, or because of overtightening. (Also, leaks due to particles often lead to overtightening.) Ultrahigh-vacuum copper-gasket and knife-edge valves are normally tightened with a torque wrench. Generally, all-metal valves should not be used in sealing (as opposed to flow-control) functions if possible. In those cases where it is necessary to use such valves, filters should be installed to prevent the entry of dust particles into the sealing regions. (Unfortunately, in some situations – as in the case of ultrahigh-vacuum valves – filters cannot be used, because they would introduce unacceptable flow impedances.)
8.2.10.4 Metering valves Metering valves (also called “gas flow regulators” or “leak valves”) are components that are used to precisely control the flow of a gas or liquid. A number of different metering valve designs have been developed. The classic configuration is referred to as a “needle valve.” This consists of a long, narrow and gently tapered rod, which is moved (often with a micrometer positioning device) into a hollow conical seat [15]. (See, for example, the design in Ref. [66].) Leak valves are used to admit gas into a vacuum system at very low flow rates. They usually consist of a sapphire or metal plate that is pushed up against a metal knife-edge, although needle-valve arrangements are also employed. Generally, both of the opposing surfaces in metering valves are made of non-resilient materials, such as metals. This need not be an inherent disadvantage, since the purpose of such valves is usually to control flow – not to shut it off completely by forming a seal. In fact, neither of the above valve types should be used as shutoff devices [15]. This function should normally be reserved for a completely separate shutoff valve. Any attempt to shut off a flow by bringing the opposing surfaces into contact will probably damage them, and permanently alter their flow-control behavior. Needle valves in particular are very delicate devices. Some commercial valve designs make use of built-in hard stops to prevent direct contact between vulnerable surfaces. (Also, some commercial
metering-valve units have a built-in shutoff device, so that the presence of an external shutoff valve is unnecessary.) Metering valves are also generally very vulnerable to damage by even the smallest particles. For this reason it is important to place filters at the inlets of these devices. Aside from damage to flow control surfaces, metering valves suffer other reliability problems in the form of erratic operation and blockages. Both problems can be caused by the presence of foreign matter in vulnerable areas. Damage to the delicate parts can also lead to erratic behavior. Needle valves are especially vulnerable to full or partial blockages, because of the small lateral dimensions of their passageways. If the valve is being used to control the flow of a gas, condensable vapors such as moisture can be a problem. When the valve is nearly closed, water can bridge the gap between the flow control surfaces, forming a “liquid seal.” This seal may not disappear until the valve has opened up considerably, and the gap between the surfaces has become relatively wide. When the passageway does open up, it can do so suddenly and uncontrollably. Hence, the gases being regulated should be free of moisture and other condensable vapors [48]. A more rugged alternative to needle valves is the “capillary-type metering valve.” Unlike needle valves, these devices vary the flow impedance by adjusting only the length, rather than both the width and the length, of a channel. A simple version of a capillary valve consists of a cylinder that is slid axially within a closely fitting tube. The small annular gap between the cylinder and the tube acts as a variable length, and constant width, channel. Devices of this kind are robust and reliable [74,75]. Many types of needle valve are prone to slow relaxation over time, causing an unpredictable drift in the flow rate for a given valve setting and pressure drop [76]. 
This can, of course, be overcome in certain cases by periodic readjustment of the valve. As with other precision mechanical devices (see page 221), hysteresis can also be an issue [77]. A general solution to the problems of needle valve instability and hysteresis is to use a “mass flow controller.” These are electromechanical devices that use a sensor to monitor the flow, and a feedback system involving control electronics and an electrically operated valve to set the flow and correct for any variations. Mass-flow controllers and their failure modes are discussed in detail in Ref. [78]. In situations in which particulates are a necessary constituent of a fluid, or cannot be easily removed by filtration, another electromechanical metering device may be useful. In ordinary mass flow controllers, the valve that adjusts the flow is continuously variable between the “fully on” and the “fully off” state. Because of this, it is vulnerable to contamination in the same way as any manually operated metering valve. One solution to such a problem is to rapidly change the state of the valve back and forth between the “fully on” and the “fully off” condition. The flow is then varied by adjusting the ratio of the times spent in the on and off states. This “pulse width modulation” or “PWM” flow control scheme has the advantage that blockages are unlikely to occur, since the valve does not depend on the use of narrow gaps to control the flow [79]. Also, it is not necessary for such a valve to contain precision parts, which are susceptible to the type of damage that can affect accurate control of the flow. Resilient sealing surfaces can even be used. Simple solenoid valves are often employed for this purpose. A major disadvantage of flow control using pulse width modulation is that the moving parts are unlikely to last long, compared with those
8.2 Mechanical devices
of regular flow controllers, because of the rapid cyclic operation of the device. Another problem with this arrangement is that the fluid is delivered in pulses, and only the average flow is controlled. However, it may be possible to ameliorate this by means of a “pulsation damper” component placed downstream of the flow controller.
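The averaging principle behind this PWM scheme can be illustrated with a short calculation. The following is a minimal sketch (the function name and the example flow values are hypothetical; a real controller would act on valve-drive hardware rather than return a number):

```python
def pwm_average_flow(duty_cycle, full_flow):
    """Average flow delivered by an on/off valve under pulse width
    modulation: the valve passes full_flow for a fraction duty_cycle
    of each period, and nothing for the rest of it."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must lie between 0 and 1")
    return duty_cycle * full_flow

# A hypothetical solenoid valve passing 120 sccm when fully open,
# held open for 25% of each period, delivers 30 sccm on average:
print(pwm_average_flow(0.25, 120.0))  # -> 30.0
```

Note that, as described above, only the average flow is controlled in this way; the instantaneous delivery remains pulsed.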
8.2.10.5 Needle valves used for non-metering purposes

Some valves have needle-valve geometries, but are not used for the precise control of gas or liquid flow. For instance, such devices are often employed in cryogenic instruments as compact, but relatively coarse, cold-helium flow-control valves (see page 300). They are also used for controlling the flow of high-pressure gases [80]. In these situations, it is often necessary to bring the opposing surfaces in the valve into contact in order to create a seal. In such cases, the driving mechanism should be devised so that the needle is pushed into the seat – not rotated into place with a screw-like motion. This will result in less wear and longer-lasting sealing surfaces. Cryogenic needle valves are discussed on page 300.
8.2.10.6 Gate, poppet, and load lock vacuum valves

Gate valves are usually employed in order to provide flow control and sealing in large-diameter vacuum lines. A common application is to provide isolation between a pump (such as a cryopump) and the main chamber in a vacuum system. The main drawback of these devices is their mechanical complexity. A special mechanism is used to slide the sealing plate (or "valve plate") laterally across the sealing surface, and then (once the two are aligned) in an orthogonal direction in order to create the seal. The presence of this mechanism (which must operate in the vacuum) makes gate valves the most complicated mechanical components normally found in vacuum systems. Wear of the moving parts is a potential cause of failure [28]. Despite this, even in high-quality stainless-steel valves with bellows motion feedthroughs (as opposed to sliding-seal ones), leaks in the feedthrough are the most common form of failure. One alternative to gate valves in the above-mentioned applications is the "poppet valve." These devices are mechanically simpler than gate valves, because two distinct motions in orthogonal directions are not required. Instead, the valve plate in a poppet valve is shifted with a single linear motion onto the sealing surface. In some kinds of valve ("load lock" types) the movement involves swinging the valve plate through an arc like a door. In both cases, the reliability of their mechanisms is considerably higher than that of gate valves. Although poppet valves and load lock valves take up more space than gate valves, and their conductances are smaller, their greater reliability and lower cost make them preferable in many cases [28]. However, the largest valves available from commercial sources tend to be gate valves. The above types of valve can be obtained in versions with all-metal seals.
Generally, these are used in extreme ultrahigh-vacuum conditions where the use of any elastomer seal would be unacceptable because of excessive outgassing. As these valves get larger, their maintenance requirements and probability of failure also rise. Although it is possible
to obtain all-metal gate valves with diameters of as much as 40 cm, such devices start to become very unreliable at diameters above about 30 cm [28]. The presence of metal particles on the sealing surfaces of all-metal gate valves can result in damage that is very expensive to repair.
8.2.10.7 Solenoid and pneumatic valves

The feature that distinguishes solenoid valves and pneumatic valves from other types is the method of producing the mechanical power needed to operate them. Solenoid valves use linear short-stroke actuators that are activated by electromagnetic forces. Pneumatic types employ compressed air as a source of power. Vacuum valves, and especially large ones, are sometimes operated by a pneumatic actuator. As mentioned earlier, pneumatic valves often contain solenoid valves that control the compressed air. Solenoid valves and pneumatic valves are frequently used in vacuum systems that are under automatic operation. Since the electric windings in solenoid valves are generally cooled only by a combination of conduction through the layers of wire, and natural convection, they have a tendency to overheat under certain conditions. This behavior, which can lead to burnout of the windings, is a particular problem when the duty cycle (ratio of operating time to total time) approaches 100%. In order to avoid this, it is necessary to pay attention to the maximum duty cycle when selecting a valve. Furthermore, one must ensure that the ambient temperature is not excessive, and that convective cooling is not inhibited. In some situations in which a valve regularly overheats, it may be worthwhile to provide it with a cooling fan. (General equipment cooling issues are discussed on page 63.) In the case of solenoid valves driven by d.c. voltages, other measures can also be taken to prevent overheating. For example, one can employ a more sophisticated scheme for driving the coil than the simple "fully-on or fully-off" arrangement that is normally used. Special solenoid valve driving circuits are available that reduce the coil voltage after the valve has been activated. (This is viable because the activation voltage is higher than the holding voltage.)
The voltage reduction is not based on the actual position of the valve, but is done automatically after a short time delay. Changing the drive voltage in this way greatly reduces the average power dissipation in the solenoid. Another approach is to employ a “latching solenoid valve.” These devices can be switched from one state to another by a short voltage pulse. They then remain in that state until another voltage pulse with the opposite polarity is applied. If a solenoid valve is to be used in an application requiring very high reliability, some sort of position indicator will be needed to provide confirmation of its status. One should not rely on the presence or absence of a driving voltage alone to determine whether the valve is open or closed. Position indication is sometimes provided by a light that is operated by a “proof of closure” or “position indication” switch. Some solenoid valves are equipped with manual actuators, which allow them to be operated by hand in the event of an electrical actuator failure. A pneumatic valve can fail due to incorrect air pressure, failure of its solenoid valve to operate, and other faults common to many types of valve. Failure of the air compressor is a potential source of problems, since these machines can be trouble prone. They require
regular oil replenishing and replacement, and it is necessary to periodically drain condensed water from the air reservoir. Moisture from the compressed air system can accumulate in a pneumatic valve, and cause corrosion and leaks (see also the discussion on page 96).
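The benefit of the voltage-reduction ("peak-and-hold") driving scheme described earlier in this subsection is easy to quantify, since the steady-state dissipation in a d.c. coil scales as the square of the applied voltage. The sketch below treats the coil as purely resistive; the coil resistance and voltages are invented for illustration:

```python
def coil_power(voltage, coil_resistance):
    """Steady-state ohmic dissipation (watts) in a d.c. solenoid coil,
    treating it as a simple resistor: P = V**2 / R."""
    return voltage ** 2 / coil_resistance

# Hypothetical 24 V valve with a 60 ohm coil.  Applying the full
# activation voltage only briefly, then dropping to an 8 V holding
# voltage, cuts the continuous dissipation by a factor of nine.
R = 60.0
p_activation = coil_power(24.0, R)  # about 9.6 W if held continuously
p_holding = coil_power(8.0, R)      # about 1.1 W at the holding voltage
print(p_activation / p_holding)     # -> 9.0
```

This quadratic scaling is why even a modest reduction in holding voltage greatly reduces coil heating.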
8.2.10.8 Advantages of ball valves – particularly for water

The moving sealing element in a ball valve comprises a metal sphere (usually made of stainless steel) with a circular hole across its diameter. When the valve is open, the hole in this sphere is aligned with the bore of the valve. The sphere can be rotated about an axis that is perpendicular to the axis of symmetry of the bore. When the hole has been rotated a full 90°, fluids can no longer pass through it, and the valve is closed. The sealing material in which the sphere rotates is usually PTFE, although other materials can also be used. Unlike most other types of valve, the action in a ball valve involves sliding of the two sealing surfaces against one another, rather than the narrowing and closing of a gap between them. This sliding movement has the advantage that particles and debris tend to be wiped off these surfaces, rather than pressed into them. Furthermore, any particles that are able to enter the space between the sealing surfaces are embedded in the soft PTFE – thereby avoiding damage to the sphere. Ball valves also have other features that tend to improve overall reliability. They have virtually no dead space, thereby minimizing the potential to act as reservoirs for particles and debris. Purging and cleaning are very easy. They have a long lifespan, and require almost no maintenance. Furthermore, ball valves are essentially impossible to overtighten. Also, the limited (90°) rotation of the valve means that it is very easy to determine its status just by noting the orientation of the handle. Ball valves are very useful in controlling water flow – particularly when they are employed as shutoff valves. Many common water shutoff valves tend to stick in position if they are not operated for long periods (possibly due to corrosion).
Sometimes, the valve mechanism can become so severely immobilized that it is impossible to operate the valve without breaking it – clearly a dangerous situation in the event of a major leak or flood. (In order to avoid this, it is often recommended that such devices be operated at least once every few months.) On the other hand, ball valves are generally immune to this type of fault [81]. Furthermore, they usually do not present the leak problems that are often characteristic of other types of valve. The primary disadvantage of ball valves in this application is that they tend to shut off flow very quickly. This can result in violent shocks due to water-hammer effects, which can damage piping systems and their components. In order to avoid this, it is important to shut off these valves slowly. Ball valves are also very useful in other applications in which contamination is a potential problem. They are employed to control the flow of other liquids besides water, as well as gases, and are also used in high-vacuum equipment. Ball valves that operate at pressures of 10⁻⁷ Pa are available. Their resistance to the effects of contamination is beneficial in dirty vacuum systems, such as chemical vapor deposition (CVD) equipment. In such cases, they are often used in place of gate valves, since they present a straight passageway that has the same diameter as the attached tubing. One disadvantage of conventional ball valves is that they provide relatively poor control of fluid flow compared with other types of valve. This characteristic can be improved by using a sphere with a non-circular passageway cross-section. Valves containing balls with
triangular bores (called “V-ball valves”) are sometimes used to obtain a linear change in flow with angle.
8.2.10.9 Pressure regulators

Gas-pressure regulators are often sensitive to the presence of particulates, moisture, and other contaminants, such as oil. For example, leakage sometimes occurs as a result of the presence of metal particles originating from the pressurized gas cylinder. Hence, suitable filters should generally be installed on the inlets of pressure regulators. Cleanliness and dryness of the gas is particularly important for the reliability of precision multi-stage regulators. Pressure regulators can fail due to the presence of contamination on their valve seats or rupture of their diaphragms, which can expose downstream components to the full regulator inlet pressure. The installation of a pressure relief device downstream of the pressure regulator is necessary if this is a hazardous condition. Care must be taken to ensure that a regulator is compatible with the gas being used. For example, oxygen (especially if it is at high pressure) should be used only with special oxygen regulators. Likewise, chlorine and other highly reactive gases should be controlled only by pressure regulators that are designed for use with them. Regulators intended for use with oxidizing or reactive gases must be clean, and generally should be free of lubricants. In those cases where it is necessary to lubricate the regulator, it is essential to use a lubricant that is compatible with the gas (see page 230). If these points are ignored in the case of oxygen, for example, a fire may result. Pressure regulators should not be used as shutoff valves. Instead, a separate valve for this purpose should be placed downstream of the regulator. More information on pressure regulators and their problems can be found in Ref. [82].
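The condition that a downstream relief device must satisfy – opening below the rating of the weakest downstream component, so that a failed regulator cannot overpressurize the line – can be written as a simple check. The function below is a hypothetical sketch (the names and example pressures are assumptions, not values from any standard):

```python
def relief_protects(relief_setpoint, downstream_rating, regulator_inlet):
    """True if a relief device set at relief_setpoint would protect
    components rated to downstream_rating, should the regulator fail
    and pass its full inlet pressure.  All pressures in the same units."""
    if regulator_inlet <= downstream_rating:
        # Even a failed-open regulator cannot overpressurize the line.
        return True
    # Otherwise the relief must open below the weakest downstream rating.
    return relief_setpoint < downstream_rating

# Hypothetical example: a 200 bar cylinder feeding a line rated to
# 10 bar, protected by a relief valve set at 8 bar.
print(relief_protects(8.0, 10.0, 200.0))   # -> True
print(relief_protects(12.0, 10.0, 200.0))  # -> False
```
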
8.3 Systems for handling liquids and gases

8.3.1 Configuration of pipe networks

Networks of pipes should be designed to avoid the presence of dead end passages (or "dead-legs") as much as possible. The latter tend to collect particles, debris, and other contamination that can cause blockages, damage valves, and lead to corrosion. Similarly, very small diameter pipes (especially capillaries) should be avoided, if possible, since these tend to collect particles and debris, and are particularly prone to blockages.
8.3.2 Selection of materials

Tubing and components in gas and liquid handling systems should preferably be made of an austenitic stainless steel (300 series – especially 316L). In contrast to other materials, such stainless steels:
(a) can be easily welded (resulting in strong, clean and flux-free, and long-lasting joints),
(b) are generally resistant to corrosion, and
(c) are usually free of porosity or other defects that can result in leaks.

Copper tubes frequently carry cuprous oxide dust, which can damage some types of valve [83]. When pipework is to be installed in systems that are subject to large vibrations (e.g. on a compressor), it is important to keep in mind that copper is particularly prone to high-cycle fatigue failure. Stainless steel is better in this regard [84]. In some of these cases, purpose-made flexible hose may be more appropriate than rigid metal tubing [80]. Plastic pipes may appear to be an attractive alternative to metal ones in some undemanding applications (such as cooling water and helium gas-return lines), because of their lower cost. However, unlike copper or stainless steel, the plastic that is typically used for this purpose (PVC) tends to become brittle with age. Fatigue cracks can form as a result of differential thermal expansion and contraction, pressure surges and water hammer in water lines, or mistreatment. Leaks are not uncommon in old PVC pipes.
8.3.3 Construction issues

Problems concerning plumbing in general (whether it is used for carrying, e.g., water, gases, or liquid cryogens) often arise because of the presence of debris and other contaminants that were left inside during construction. For example, balls of solder or welding scale created during a joining operation can be dislodged from their place of origin (perhaps as a result of vibration or fluid motion), whereupon they travel downstream and possibly block pipes or filters, or damage the sealing surfaces of valves. The chips of material released during the cutting of tubes, and other foreign matter, such as solder fluxes, abrasives, or lubricants, can also produce such effects. Capillary tubes, which are frequently used in cryogenic apparatus, are particularly vulnerable to blockages resulting from poor workmanship. In order to avoid introducing metal particles into tubes when they are being cut (and especially when cleanliness of a system is important), it is desirable that a pipe cutter (not a hacksaw), with a sharp blade, be used for this operation. A capillary tube is easily cut by scribing it with a razor blade, and bending it back and forth very gently until it breaks [85]. If it is necessary to abrade the ends of tubing in order to square, deburr, or clean them, the tube should be purged with gas while this is being done in order to prevent particles from entering [61]. Welding (which is difficult with copper, but works well with stainless steel) is the preferred joining method. Orbital welding in particular is an excellent technique to use with stainless-steel tubes (see page 155). Brazing is not as good, but may be an acceptable alternative. Soldering should be avoided, if possible. In the case of small capillaries, which are often blocked by solder, any solder joints should involve small clearances and long joint overlaps.
The insertion of a length of tungsten wire or steel piano wire into the bore of the capillary (which is removed after the joining operation is completed) can help to prevent
the entry of solder [86]. Capillaries can also become blocked by the corrosion products produced as a result of soldering them using a highly active flux. It may take a considerable time for these corrosion products to form, possibly leading to failure while the capillary is in use. The soldering of stainless steel normally requires the use of corrosive flux, which frequently leads to eventual pinhole leaks and other problems (see page 159). The methods used for joining metals in vacuum apparatus are relevant here, and are discussed in Chapter 6. Before a pipe system is put into operation, it is normally desirable to flush it out with gas or liquid at a particularly high flow rate, in order to remove as much contamination as possible from internal surfaces.
8.3.4 Problems caused by PTFE tape

Demountable connections using pipe threads (i.e. "pipe thread fittings") are a common feature in fluid-handling systems. A common method of sealing these involves wrapping PTFE (Teflon) tape around the threads. However, unless this is done correctly, it is likely that slivers of tape will become detached from the fitting. They then tend to travel downstream and clog components such as metering valves (needle valves), regulators, filters, etc. These slivers are a particular menace in systems that handle gases, and hence PTFE tape should not be used in such systems. Pipe thread fittings are generally not very good with regard to leaks (as discussed on page 246). However, if they must be employed, the use of PTFE tape should be avoided in favor of a liquid pipe sealant – preferably an anaerobic type [84].
8.3.5 Filter issues

Filters that are used to trap particles and debris in gas and liquid systems can be a significant source of reliability problems, because the filter elements must generally be replaced when sufficient material has accumulated. (Filters in general are very often poorly maintained.) One way of ensuring that replacement is done before the filter becomes completely blocked is to use a device to sense the pressure drop across the unit and provide a warning when this becomes too high. Two purpose-made components for doing this are "differential pressure indicators" (which provide a visual warning), and "differential pressure switches" (which can be configured to activate an alarm, or send a signal to a control system). Sometimes these are combined into a single package. In some cases where particle filtration is needed, it may be possible to employ filters that automatically clean themselves when the pressure difference across them exceeds a certain value. These "self-cleaning filters" work either by passing a liquid or gas through the filter in the reverse direction, which dislodges particles resting on the surface of the filter and purges them from the system, or by using a mechanical device to physically wipe the filter element. Self-cleaning filters are particularly common in systems for water and other liquids.
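The action of a differential pressure switch on a filter can also be mimicked in a control program that reads two pressure sensors. This is a minimal sketch; the function name, units, and threshold are illustrative assumptions rather than values from any particular device:

```python
def filter_needs_service(upstream_kpa, downstream_kpa, max_drop_kpa):
    """Return True when the pressure drop across a filter element
    exceeds the chosen threshold, i.e. the element is clogging."""
    return (upstream_kpa - downstream_kpa) > max_drop_kpa

# Hypothetical readings: a clean element drops only a few kPa, and we
# choose to raise a warning once the drop exceeds 30 kPa.
print(filter_needs_service(310.0, 302.0, 30.0))  # -> False (element clean)
print(filter_needs_service(310.0, 270.0, 30.0))  # -> True (replace soon)
```
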
8.3.6 Detection and location of leaks

Information on the prevention and detection of leaks in vacuum systems presented in Chapter 6 is also of relevance in the present context. Vacuum leak detection methods (as discussed in Section 6.10.1) are often useful with systems that carry gases. The "ultrasonic leak detection" method can be used to detect the escape of gas from pipes under pressure, even at large distances from the leak, and then to locate the leak (see page 178). A slight variation of this method can be used to detect and locate holes in pipes that cannot be pressurized or evacuated. In certain situations in which a pipe network is being used to carry liquids (e.g. in water cooling systems) "UV fluorescent leak detection" can be a very useful approach for locating small leaks. This technique is particularly appropriate when the liquid is circulated. The procedure involves injecting a small amount of a fluorescent tracer-dye into the liquid, which is then allowed to flow through the network. Ultraviolet light is then shone on the external surfaces of suspect items of plumbing under low-light conditions. Any leaks will show up as brightly glowing patches on the plumbing and nearby areas. A variety of techniques for detecting and locating leaks are discussed in Ref. [87].
8.4 Water-cooling systems

8.4.1 Introduction

The use of water as a cooling medium can be a significant weak point in any experimental apparatus. It is true that water is highly effective in transporting heat out of equipment – much more so than forced air-cooling or natural convection. It is particularly useful, and often even essential, when high-power electrical devices (such as RF induction heaters and electromagnets) and large items of machinery (such as compressors) are involved. Nevertheless, if it is possible to avoid using devices that require water cooling, significantly higher overall levels of reliability can be expected. Water-cooling systems are liable to fail, or cause failures, as a result of:

(a) leaks (e.g. from burst hoses),
(b) blockages (full or partial),
(c) corrosion (galvanic, electrolytic, and chemical),
(d) erosion of pipe and pipe-fitting materials (erosion being the removal of material as a result of cavitation action or moving abrasive particles in the flowing water),
(e) condensation and consequential dripping water from pipes inside and outside the water-cooled apparatus,
(f) freezing (and possibly bursting) of pipes and other components,
(g) in closed-loop systems, formation of conductive deposits inside rubber hoses acting as electrical insulators, causing current leakages, and
(h) failure of pumps, valves, and other items of machinery and mechanical components.

In central water-cooling facilities serving a large laboratory with many users, events in one section can affect the entire system. For example, a leak in one piece of apparatus may cause an unsustainable water loss, which forces the whole system to shut down. Other possible effects include troublesome variations in water pressure, flow rate, or temperature, depending on the activities of other users of the system. There is perhaps a tendency to treat the installation of a water-cooling system in the same cavalier way that one might with, for example, an ordinary tap water supply. A much better approach is to think of such a system as a complex and trouble-prone combination of plumbing, machinery, and temperature-control devices, and to make purchasing decisions with this in mind. This is particularly true of large multi-user systems, where failures can have major consequences. For instance, the author is familiar with a large central system that has had almost endless condensation problems, over a number of years, as a result of poor design. Especially in the case of large systems, it may be worthwhile to seek the advice of a consultant who specializes in water cooling. The preferred types of water-cooling system are those in which the water is sealed from the atmosphere and flows in a closed loop. In such systems, the water is cooled by another source of water (e.g. from a large reservoir) by using a heat exchanger, or by means of a chiller. (The latter is a machine that operates in a similar way to a refrigerator or an air conditioner.) With these closed-loop systems, the water can be isolated from external sources of contamination, and steadily filtered to improve its quality.
Closed-loop systems in which the water is directly cooled by evaporation (e.g. using a cooling tower), and is therefore exposed to the outside air, should be avoided. Such arrangements generally require the use of chemical additives, such as biocides, that can cause corrosion in the cooled equipment. An additional failure mode of such open-to-air systems is their periodic infestation by harmful organisms. A common example of the latter is the bacterium Legionella pneumophila, which causes Legionnaires’ disease. It is also preferable to avoid using tap water as a cooling medium, except possibly in undemanding applications, or when it is to be used as a backup to a primary closed-loop water-cooling system. Tap water often contains contaminants, such as dissolved minerals and particles, which can foul the cooled apparatus. The concentrations of such contaminants can vary with the seasons. Water temperatures can likewise undergo seasonal variations [88]. Frequently the temperatures are low enough to cause troublesome condensation. Tapwater cooling systems are generally arranged so that the water flows only once through the cooled apparatus. For this reason, unlike the situation in systems that circulate the water, it is difficult to treat tap water in order to improve its purity, or to regulate its temperature. Also, the pressure in tap-water lines can vary, depending on the activities of other users of water from the system. Cooling-water pressure or flow variations can be reduced by using a pressure or flow regulator. If many users must be provided with cooling water, it may be best to do this by supplying each with a self-contained water-cooling system. Large centralized systems tend to multiply the possibilities for failure, which can affect all users (as in the example above).
In situations where the provision of a constant supply of cooling water is critical, the use of backup sources should be considered. Some possibilities are (a) the tap water supply, (b) a redundant chiller, and (c) a large tank containing cold water (for short-term use).
8.4.2 Water leaks

8.4.2.1 Introduction

Leaks are perhaps the most dangerous cooling-system malfunctions, and are also very common. They create the hazard of water damage to objects in their immediate vicinity, as well as to items in adjacent rooms and on lower floor levels. Even small leaks can cause water to spray onto nearby equipment – often resulting in serious damage. High-voltage devices are particularly vulnerable. The leakage of water into vacuum systems can lead to very costly and time-consuming repairs. More importantly, leaks can and do cause electrical fires.7 The potential problems are sufficiently serious that some laboratories have formal requirements and inspections, in order to ensure that water-cooling hardware (especially hoses and their connections) is acceptable and in good condition. Leaks are commonly caused by:

(a) detachment of hoses from their fixed connections ("blow-offs"), or a loss of sealing at the latter,
(b) cracking or bursting of hoses,
(c) corrosion or erosion of pipes and fittings,
(d) aging, corrosion, and fatigue of solder joints,
(e) corrosion and fatigue failure of metal bellows (used to carry water, e.g., in vacuum equipment), and
(f) arcing in high-voltage equipment (particularly high-frequency arcs, which can drill holes in pipes).

Leaks most often occur at or near joints in a water line, although they sometimes take place at other locations. For example, pinhole leaks occasionally develop in the walls of copper tubing. Probably the most frequent cause of leaks is the rupture of flexible hoses, or failure of their connections to rigid pipes. In communal cooling systems this can be a particularly insidious form of failure, since it often occurs late at night when water usage is at a minimum, and line pressures are therefore high. The most straightforward way of avoiding both problems is to use rigid metal pipes to connect apparatus with the water-cooling system.
Flexible hoses are really needed only when movement of the apparatus relative to the fixed plumbing is anticipated, vibrations are present, or when an electrical break is required in the water line. If only very small movement is required, it may be possible to avoid hoses by using coiled-metal-tube flexures. The use of a water pressure-regulator can also be beneficial in preventing leak problems.
7. For instance, the author knows of a case in which a jet of water, originating from a leak on a water-cooled RF generator, landed on a nearby high-voltage power supply. This caused the latter to catch fire.
8.4.2.2 Selection and treatment of hoses

When flexible hose is really needed, the choice of hose material should be given close attention. It is desirable to avoid plastic tubing, except perhaps in the case of small-diameter water lines, because plastic (even in reinforced varieties) tends to cold flow, or creep. The result of this degradation is often blow-offs, ruptures, or leaks [89]. Cold flow can be a particular problem when such tubing is exposed to the elevated temperatures at the water exits of cooled equipment. Hose materials that contain chlorides (such as PVC) should be avoided in systems containing stainless steel, because of the possibility of chloride corrosion of the latter. Hose made of rubber is the preferred type, since its tendency to creep is relatively low compared with plastic. The presence of reinforcing elements in the rubber, comprising strong materials such as braided plastic fibers, is essential [90]. Ordinary un-reinforced rubber tubing, of the kind often found in laboratories, is not sufficiently strong for this purpose. The hose should be specifically designed to withstand the full static water line pressure, with generous margins of safety. When the hose is being used as an insulator, its electrical resistivity will also have to be considered. Such information will normally be supplied with suitable commercial hoses. As an example, the "PARFLEX" series of hoses by Parker [91], and the "ORTAC" series by Goodyear [92], have been found to be satisfactory in certain electrical equipment used in particle accelerators [93]. Water hoses are often made of nitrile rubber, and hence their aging behavior will have to be taken into account during use. Such hoses should normally be inspected for signs of degradation (e.g. cracks or hardening) at least once a year [94]. Replacement may be necessary every three to five years, depending on the environment. Excessive temperatures can greatly reduce the life of a hose.
Exceeding the maximum recommended temperature by 10 °C can decrease the lifespan by a factor of 2 [95]. In and around high-voltage systems, the concentration of atmospheric ozone will be relatively high compared with normal levels. Since this gas attacks some common elastomers, the use of an ozone-resistant material (such as EPDM rubber) should be considered in such cases. Environments with high levels of nuclear radiation also tend to degrade rubber hoses. Repeated flexure of a hose during service can decrease its lifespan. Other potential hazards to hoses include solvents, oil, corrosive substances (e.g. solder flux), fumes, insects, and rodents [95]. Hoses can become damaged if they are not correctly handled and installed. A hose should not be bent more sharply than its minimum specified bend radius. Also, one should not allow a hose to become twisted during installation, since this misaligns the reinforcing material inside it, and increases its susceptibility to failure under pressure [95]. Hoses attached to vibrating machinery can undergo catastrophic wear if they are allowed to rub against other objects or each other. A discussion of potential failure modes of hoses is presented in Ref. [96].
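The factor-of-2-per-10 °C rule of thumb above implies an exponential derating of hose life with over-temperature. The following Python sketch expresses this; the function name and the example rating (5 years at 60 °C) are illustrative assumptions, not figures from the text.

```python
def derated_hose_life(rated_life_years, rated_temp_c, actual_temp_c):
    """Estimate hose service life, assuming the lifespan halves for every
    10 deg C by which the operating temperature exceeds the rating
    (illustrative rule of thumb; consult the hose manufacturer's data)."""
    excess = max(0.0, actual_temp_c - rated_temp_c)
    return rated_life_years / (2.0 ** (excess / 10.0))

# A (hypothetical) hose rated for 5 years at 60 deg C, run at 80 deg C:
print(derated_hose_life(5.0, 60.0, 80.0))  # -> 1.25
```

Running a hose only 20 °C above its rating thus cuts the expected service life by a factor of four, which is why the temperature at the water exits of cooled equipment deserves particular attention.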
8.4.2.3 Termination of hoses

The proper attachment of hoses to their rigid pipe terminations is a matter of considerable importance. One must not rely on the elasticity of a hose to secure it to the pipe – some
form of hose clamp is essential. The pipe itself must be provided with a ribbed surface that will prevent the hose from slipping off. A bead formed at the end of the pipe, or fluting on a hose-barb, will serve this purpose. The beading of a pipe can be carried out either using a commercial beading tool, or by laying down a bead of metal using a welder [84]. One potential source of trouble is the use of a hose with an inner diameter that is too large for the hose barb. Despite attempts to compensate for the mismatch (e.g. by aggressively tightening the hose clamp) such a configuration is frequently a source of leaks. A possible cause of accidental hose-detachments (“blow-offs”) is the practice of greasing hose barbs and the like. This is sometimes done in order to make it easier to slip a hose onto the hose-barb, but also makes it much more likely that it will slip off again later under the influence of water pressure. Greases or oils may also damage the hose material. A better approach is to heat the end of the hose in hot water, which softens the material and makes it easier to expand over the hose barb. Clamps used for securing water hoses should be of the highest quality. Worm-gear-type hose clamps (or “Jubilee clips”) are often employed for this purpose. If this type of clamp is used, stainless-steel ones with through slots are to be preferred. Clamps with slots that are merely formed should be avoided, as these are likely to slip. High-quality clamps will not slip (or jump a groove) when they are tightened to high torque levels. Lesser ones may do this even below their design torque, and then never work properly again. The overtightening of hose clamps, to the point where the clamp is damaged, is a frequent cause of failure [84]. Since even rubber hoses undergo a certain amount of cold-flow as they age (and especially if they get hot), the occasional retightening of worm-gear hose clamps is often necessary. 
The first retightening should be done after several hours of use. In order to avoid having to do this, it may be preferable to use a hose clamp that has some spring characteristics, so that the seal can, in effect, be preloaded. Spring-loaded clamps, called "constant torque" or "constant tension" hose clamps, are available, which are designed to compensate for cold flow. Even when using reliable hose clamps, it is often good practice to provide redundancy by using two clamps to secure a hose. In the case of worm-gear clamps, the screw heads should be located 180° apart on the hose, so that potential leak paths due to discontinuities on the inner surfaces of the bands are not aligned. Fittings are available for hoses that allow them to be easily connected to, and disconnected from, other hoses or rigid pipes. One example is the JIC (Joint Industry Conference) type. These fittings are commonly attached to hoses by special methods, such as crimping, which are generally relatively reliable. The use of these demountable fittings is necessary if the cooling-water pressures are greater than 8.6 × 10⁵ Pa [89].
8.4.2.4 Automatic detection of water leaks

It is possible to obtain electronic water-leak detection systems, which can trigger an alarm and/or shut off the equipment and its water supply if a leak is detected. Some such systems use continuous lengths of a special water-sensing cable instead of discrete water sensors.
8.4.3 Water purity requirements

8.4.3.1 Types of impurities and their effects

Particles, debris, and dissolved minerals are common causes of blockages in water-cooling systems. They can also disable or damage mechanical components (such as valves) in the flow path. The first two items include, for example, sand, rust, copper oxide, swarf, and sealing tape. Some particularly problematic dissolved minerals are calcium bicarbonate and calcium phosphate [97]. If organisms (such as algae, bacteria, and fungi) are allowed to establish themselves, these can also foul cooling passages and heat-exchange surfaces. They can also lead to corrosion ("biocorrosion") by causing chemical or electrochemical reactions. Furthermore (as discussed earlier), some organisms can create a health hazard that may necessitate shutting down the system in order to facilitate their removal. The presence of organisms is mostly a concern with water-cooling systems that are open to the air, and is infrequently a problem with closed types. If a system is thought to be contaminated in this way, its true condition can be determined by having the water tested by a water-quality laboratory. Minerals and chemicals in water can also lead to galvanic, electrolytic, and chemical corrosion of system components. In the case of cooled electrical systems, the presence of dissolved compounds in the water can also cause undesired leakage currents. If minerals such as calcium and magnesium salts are present in the water, these can deposit onto surfaces (e.g. in the form of "limescale"), and prevent the transfer of heat. It is possible to obtain industrial de-scaling chemicals that can be passed through pipes in order to remove mineral deposits. However, these chemicals are often highly acidic, and potentially harmful to pipe materials. The best strategy is to prevent such deposits from forming in the first place.
The periodic inspection of water for signs of cloudiness or discoloration can be useful in establishing the presence of particles, minerals, and other contaminants [88]. Dissolved oxygen is another contaminant in cooling water, and can lead to corrosion-related problems in certain cases. For example, in systems containing copper pipes, oxygen may react with the copper, to form copper oxide. This substance comes off the pipes in the form of small particles, which can cause blockages. Iron pipes can also be attacked by dissolved oxygen, to form ferric oxide, or hematite. Metal oxides can also deposit on insulating surfaces (such as rubber hoses), and cause electric current leakage [89,98].
8.4.3.2 Removal and control of impurities

For the purpose of removing particles and debris, it is generally essential to have particle filters in the flow path. Particularly in the case of sensitive or critical cooled equipment, such filters should be located close to their water intakes. Self-cleaning filters are available (see page 262) [97]. Apparatus and devices that are cooled with water at low flow rates are particularly prone to blockages. If a large water-cooling system is to be turned off for maintenance, these low flow-rate devices should be isolated from the rest of the cooling system until the latter has
been brought back into action and is running smoothly. This allows the system to be purged of the particles and debris that are often created during maintenance [93]. In some simple closed-loop water-cooling systems, organisms are dealt with by the addition of biocides to the water. However, for various reasons, and particularly in the case of certain items of high-grade laboratory equipment (such as lasers), it is preferable to avoid using such chemicals. A better way of controlling the proliferation of organisms is to expose the water to ultraviolet radiation. Commercial devices are available to carry out this treatment, which is referred to as "ultraviolet light disinfection" [97]. It is possible to reduce the harmful effects of dissolved minerals in cooling water by treating it with various chemical additives. These include sludge-dispersion compounds to prevent scale formation, and corrosion inhibitors. Such chemicals are often used in systems that involve cooling towers. However, for many types of water-cooled laboratory apparatus, the use of additives to control the water chemistry is undesirable, because these can have unforeseen side effects, and may themselves react with wetted surfaces in certain cases. Preferred methods of removing unwanted substances from water involve the use of ion exchange beds and/or reverse osmosis filtration. The removal of contaminants from the water can be done continuously as the water is circulated (this is referred to as "polishing"). Alternatively, in particular for some small cooling systems that serve a single piece of apparatus, the water is obtained in pure form from some external supply, and placed in the cooling system's holding tank. In this case, periodic replacement of the water is necessary. The quality of the water can be determined by using special devices that monitor its resistivity. Low resistivities are undesirable, because they indicate the presence of excessive concentrations of dissolved ions.
Values of about 1 MΩ·cm are often aimed for. Very high-resistivity water in water-cooling systems is corrosive (the water is said to be "hungry"). De-ionized water usually has this property. For this reason, manufacturers of water-cooled equipment often recommend that resistivity values be kept below a specified level – perhaps several MΩ·cm. This is sometimes done by adding either tap water or ethylene glycol to water that has an excessively high resistivity. Equipment manufacturers may also set upper limits for the hardness of the water, which may be defined as the concentration of dissolved minerals (e.g. calcium), in parts per million. This can be determined by having the water tested by a water-quality laboratory. Maintaining the correct balance between acidity and alkalinity is also important – if the pH is below 6 or above 8, the water will be corrosive to many materials that often come into contact with cooling water [88]. This property can be measured by using pH paper or a pH meter. Oxygen is removed from the water (in systems that are not open to the air) by using special ion-exchange resins, or possibly with vacuum de-aerator devices. Its entry into the cooling water can also be reduced by blanketing the empty spaces in water tanks with nitrogen. The oxygen content in cooling water can be continuously monitored by using suitable instruments. Concentrations of less than 1 part in 10⁸ are often aimed for. Stagnant water becomes corrosive with time. For this reason, water-cooled equipment should be drained if it is not being used for several months or more [88].
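The water-quality criteria discussed above – resistivity neither too low (excess ions) nor too high ("hungry" water), pH between 6 and 8, and limited hardness – can be combined into a simple monitoring check. The Python sketch below uses placeholder limits; the actual values are equipment-specific and must come from the manufacturer, and the function and message texts are illustrative assumptions.

```python
def check_cooling_water(resistivity_mohm_cm, ph, hardness_ppm,
                        r_min=1.0, r_max=3.0, ph_range=(6.0, 8.0),
                        hardness_max=50.0):
    """Return a list of warnings for out-of-range water parameters.
    All limits here are placeholders; use the equipment
    manufacturer's recommended values."""
    warnings = []
    if resistivity_mohm_cm < r_min:
        warnings.append("resistivity low: excessive dissolved ions")
    elif resistivity_mohm_cm > r_max:
        warnings.append("resistivity high: water may be corrosive ('hungry')")
    if not (ph_range[0] <= ph <= ph_range[1]):
        warnings.append("pH outside 6-8: corrosive to many materials")
    if hardness_ppm > hardness_max:
        warnings.append("hardness high: risk of scale formation")
    return warnings

print(check_cooling_water(0.5, 7.0, 20.0))
# -> ['resistivity low: excessive dissolved ions']
```

A check of this kind could be run on readings from an in-line resistivity monitor and periodic pH/hardness tests, flagging drift before it becomes a corrosion or scaling problem.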
8.4.4 System materials selection and corrosion

The use of high-purity water can do much to reduce the various corrosion problems in water-cooling systems. Nevertheless, the proper selection of the materials that come into contact with the water is still important. Because of the possibility of galvanic corrosion, care should be exercised in the deployment of dissimilar metals (see the discussion of galvanic corrosion on page 99). Stainless steel (particularly austenitic types, such as 304L or 316) is the preferred material for pipes, fittings, heat exchangers, pumps, valves, and other components. In applications involving the cooling of electrical components (in which electrolytic corrosion is an issue), austenitic stainless-steel hose fittings can last much longer (perhaps by a factor of 10) than those made of other metals [89]. Welding is the preferred method of joining stainless steel. Some other materials, such as copper, brass, and various elastomers and plastics (the latter including polycarbonate, polyethylene, and nylon) are also often acceptable in water-cooling systems [88]. Some materials that should generally be avoided are: iron, steel, and aluminum. The latter metal may be a particularly troublesome cause of galvanic corrosion in systems that contain copper. For joining metals in situations where welding cannot be used, brazing is preferable to soft soldering (see the discussion on joining techniques in the chapter on vacuum-system leaks). Crevice corrosion (see page 100) can be very troublesome at locations where metal parts in contact with water are brazed to other parts made of different materials [89]. Complete wetting of the items to be joined by the brazing alloy is necessary to prevent this. Pipe thread fittings should be avoided as much as possible in water-cooling systems [89]. Many liquid pipe sealants tend to contaminate otherwise high-purity water, and PTFE tape can cause blockages (see page 262).
8.4.5 Condensation

One of the most harmful phenomena that can occur during the use of cooling water is condensation. This is particularly true in the case of electrical apparatus (especially high-voltage devices), where short circuits, arcing, and complete catastrophic failure are possible outcomes. (See also the discussion of humidity issues in Section 3.4.2.) The water temperature must always be kept above the ambient dew point. This value is not fixed, but depends on various factors, including the weather, the time of day, and the season. Hand-held meters can be obtained that measure the relative humidity and temperature of the air, and use these to calculate the dew point. If condensation is a potential problem, then the use of a temperature-controlled closed-loop water-cooling system should be considered. Simple pre-heating of the water before it enters the apparatus, or dehumidifying the environment, are two other possible solutions. If tap water is being used, mixing hot water with the cold cooling water is a convenient method of pre-heating. Electronic "condensation detectors," "condensation monitors," or "dew sensors" are available that can be attached to the cooling water inlet of equipment. These will trigger when condensation first starts to form on the pipe, and can set off an alarm and/or activate
an interlock device that shuts off power and water to the equipment. Some devices of this type will trigger just before condensation occurs (if the relative humidity at the pipe is higher than a certain level – e.g. 90%), and thereby provide a margin of safety. The use of a thermal interlock arrangement, which prevents the equipment from operating if the water temperature is too low (see below) is another way of dealing with condensation problems. Leaving water flowing through apparatus when the latter is not operating is a potential cause of condensation problems. This is because most temperature-regulating cooling systems depend on the presence of a heat source in order to maintain the correct water temperature [88].
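The dew-point calculation performed by the hand-held meters and dew sensors described above can be approximated with the standard Magnus formula. The sketch below is illustrative: the coefficients are the commonly quoted Magnus values (valid roughly from 0 to 60 °C, to within a few tenths of a degree), and the 2 °C safety margin mirrors the "trigger just before condensation" behavior of the monitors; neither figure comes from this text.

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Dew point via the Magnus approximation (commonly quoted
    coefficients, valid roughly for 0-60 deg C)."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def condensation_risk(water_temp_c, air_temp_c, rh_percent, margin_c=2.0):
    """True if the cooling-water temperature is within `margin_c`
    of the ambient dew point (or below it)."""
    return water_temp_c <= dew_point_c(air_temp_c, rh_percent) + margin_c

# 25 deg C air at 60% relative humidity has a dew point near 16.7 deg C:
print(round(dew_point_c(25.0, 60.0), 1))  # -> 16.7
```

On such a day, cooling water at 18 °C would already be flagged as a condensation risk, even though it is nominally above the dew point – which is the purpose of the safety margin.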
8.4.6 Water flow and temperature interlocks and indicators

Lack of water flow during operation ranks high among the serious disasters that can happen to water-cooled apparatus. A common cause of such accidents is human error – forgetting to open up the appropriate supply valves. For this reason, all well-designed water-cooled equipment should be provided with a flow sensor connected to an interlock arrangement, which will prevent the application of power unless the required flow is present. It is preferable to place such a sensor at the outlet, rather than the inlet, of the equipment's cooling water line [28]. Interlock arrangements are not infallible, and it is unwise to rely on them to ensure the presence of adequate water flow. It is always a good idea to have some sort of flow indicator in the water line, which makes it possible to determine the line's status by visual inspection. Visual flow indicators comprising a spinning ball or paddle wheel are particularly useful. In some water-cooled apparatus, it may also be desirable to employ a thermal interlock arrangement that will shut off the apparatus if the water temperature at the inlet gets too high. If condensation is a potential problem, a similar setup can be used to prevent the apparatus from operating if the water temperature is too low (below the ambient dew point).
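The interlock conditions above can be summarized as a single permissive check: power is allowed only if flow is present, the water is below the over-temperature limit, and it is safely above the dew point. This Python sketch is illustrative only – the names and thresholds are invented – and, as emphasized elsewhere in this chapter, software checks of this kind must supplement, never replace, hardware interlocks.

```python
def power_permitted(flow_ok, water_temp_c, dew_point_c,
                    t_max_c=30.0, t_margin_c=2.0):
    """Permissive for applying power to water-cooled apparatus:
    requires flow at the outlet, water below the over-temperature
    limit, and water safely above the ambient dew point.
    (Illustrative sketch; hardware interlocks must still be present.)"""
    if not flow_ok:
        return False                  # no flow: never apply power
    if water_temp_c >= t_max_c:
        return False                  # over-temperature
    if water_temp_c <= dew_point_c + t_margin_c:
        return False                  # too cold: condensation risk
    return True

print(power_permitted(True, 22.0, 15.0))   # -> True
print(power_permitted(False, 22.0, 15.0))  # -> False
```

Note that the default (no flow, or any sensor reading outside its band) is to deny power – interlock logic should always fail safe.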
8.4.7 Inspection of water-cooled equipment

Water-cooled equipment should be examined twice daily for the presence of leaks, drips, or condensation. This should be done with the cooling pump on – before the equipment is used (when it is cold), and after use (while it is still warm).
Further reading

Discussions of the design and use of flexures can be found in Refs. [4] and [99]. Various actuation devices for these are covered in Ref. [13]. Information on the properties of various classes of bearing can be found in Ref. [13]. Details regarding the use of plain and antifriction bearings are presented in Ref. [15].
Liquid and solid lubricants for use in vacuum and cryogenic environments are discussed extensively in Ref. [100]. For vacuum applications, see also Ref. [28]. A very comprehensive discussion of static seals, motion feedthroughs, and valves for vacuum systems can be found in Ref. [38]. (This work mostly just describes the various devices, without discussing their merits and disadvantages. However, it does give many references.) Other information on these subjects is provided in Refs. [39], [44], and [48] (static seals); and [22] and [28] (static seals, motion feedthroughs, and valves). Some particularly useful sources of information about water-cooling system problems are Refs. [88], [89], and [94].
Summary of some important points

8.2.1 Overview of conditions that reduce reliability

Mechanical devices in general are vulnerable to particulate matter, moisture, vibration, temperature changes, and lubrication problems.
8.2.2 Some design approaches for improving mechanism reliability

(a) Devices that make use of flexural motion are well suited for small-amplitude and/or high-precision movements. They are also robust, wear-free, and suitable for operation under vacuum and in other extreme environments.
(b) Generally, better mechanical performance can be achieved by generating motion directly and where it is needed (using a motor or other electric actuator), rather than indirectly (with the aid of gears, long drive shafts, drive belts, etc.).
(c) Wherever possible, functions should be performed using electronics, rather than by mechanisms (e.g. optical scanning may be done by an array of detectors, rather than a single detector and a rotating mirror).
8.2.3 Precision positioning devices in optical systems

(a) For the purpose of improving stability, the number of position and angle adjustments in an optical system should be minimized.
(b) In order to prevent unwanted position shifts due to natural movements, as well as to discourage unauthorized changes, adjusting devices should be positively locked after being set.
(c) Precision translation stages (especially ones involving ball bearings) are extremely sensitive to permanent damage resulting from mechanical shocks.
8.2.4 Prevention of damage due to exceeding mechanical limits

(a) In mechanical systems that have limits to motion, one should never trust software alone to prevent damage – simple protective devices, such as limit switches, must always be present.
(b) In situations where excessive forces or torques can arise in mechanisms, simple mechanical protective devices, such as shear pins or torque limiters, may be used to prevent damage.
8.2.5 Bearings

(a) Plain bearings, in which the load is supported by sliding surfaces, are a very robust type of bearing. However, owing to large levels of static friction, they are not well suited for precision positioning applications.
(b) Antifriction bearings (e.g. ball bearings) are, in contrast, very vulnerable to damage by shock and vibrations, and to particulate matter. On the other hand, owing to low levels of static friction, they are very useful in precision positioning devices.
(c) Antifriction bearings should generally be obtained with protective barriers, such as "side shields," to prevent the entry of particulates.
(d) Except for ball bearings with ceramic rolling elements, antifriction bearings are not well suited for applications involving small-amplitude oscillatory motion.
(e) Ball bearings with ceramic rolling elements are useful in ultrahigh-vacuum applications, owing to their ability to work reliably in the near absence of lubrication.
8.2.7 Lubrication and wear under extreme conditions

(a) Ordinary lubricants are generally not adequate for lubricating mechanisms under conditions of high and ultrahigh vacuum, high temperatures, very low temperatures, and other extreme environments.
(b) Operating mechanisms in a vacuum without lubrication is often not an option, even with very small loads, since clean metal parts that slide against each other in vacuum tend to adhere and weld.
(c) Very generally (even in air at ambient conditions), identical metals should not be used for pairs of surfaces in sliding contact.
(d) Perfluoropolyether oils and greases have very low vapor pressures, and hence are suitable for use in high-vacuum environments. They are also stable at high temperatures and in the presence of highly reactive substances (e.g. pure oxygen).
(e) Dry lubricants (including MoS2, WS2, and soft metals such as silver and gold) are very useful under extreme conditions. The latter include ultrahigh-vacuum, cryogenic, high-temperature, and high-radiation environments.
(f) Dry lubricants are not as effective as liquid ones, are not as reliable, and do not last as long (in terms of the number of revolutions or sliding cycles of a bearing).
(g) Dry lubricants can be applied to surfaces by vacuum deposition, spray coating, or by a high-velocity pressure-spray technique. The first method generally produces the most long-lasting lubricating films, but is expensive.
(h) The threads of screws and bolts used in ultrahigh-vacuum systems can be electroplated with films of gold or silver, in order to prevent welding during bakeout.
(i) Self-lubricating solids are construction materials that contain a dry lubricant, such as PTFE or MoS2. These materials can be used to make bearings, gears, etc., which supply their own lubricant during operation.
8.2.8 Static demountable seals

8.2.8.1 General considerations

(a) Failures of static demountable seals (e.g. O-ring joints) are most commonly caused by (i) damaged or deteriorated seals and sealing surfaces, (ii) the presence of contaminants on these surfaces, (iii) unevenly tightened and/or misaligned flanges, and (iv) seals that have not been seated properly.
(b) Scratched sealing surfaces are a particularly frequent cause of leaks.
(c) Even the most minuscule scratches can cause surprisingly large leaks, especially if they are at right angles to the orientation of the seal.
(d) In the case of circular flanges, sealing surfaces that have been left with their lathe-turned surface finishes are generally more reliable than those that have been finished using abrasives.
(e) Highly polished surfaces have less reliable sealing properties than lathe-turned ones, with their fine concentric grooves.
(f) Although the presence of any type of foreign matter on sealing surfaces is to be avoided, hairs and lint are particularly harmful and common.
(g) Especially in the case of metal seals, unevenly tightened flanges often result in leaks.
(h) Leaks frequently happen if flanges have distorted as a result of improper welding during vacuum component fabrication.
(i) When a leak suddenly occurs in a vacuum system, it is often a good idea to check the last seal that was disturbed.
8.2.8.2 O-rings

(a) Damaged O-rings are often a cause of leaks. Such damage sometimes occurs when these items are pulled over sharp corners or burrs while being installed.
(b) O-rings made of nitrile rubber are the most common types. They are vulnerable to the formation of surface cracks due to ozone in the atmosphere (and promoted by mechanical stress), as well as hardening over time.
(c) The installed life of a nitrile O-ring is typically one to five years.
(d) Nitrile O-rings should not be used in and around high-voltage equipment, since the latter can produce large amounts of ozone.
(e) Viton rubber is much more resistant to cracking, hardening, and other degradation than nitrile, and is preferable for most laboratory applications.
(f) Stainless steel O-ring fittings (e.g. "tees," "elbows," etc.) are preferable to aluminum ones, because their sealing surfaces are much more resistant to being scratched.
(g) O-rings should not be stretched in order to allow them to fit into an O-ring groove that is too large for them. This can reduce the sealing properties of the O-ring, and (because of the resulting high stress) can accelerate ozone damage.
(h) Lubrication is generally neither necessary nor desirable for O-rings used in static seals.
(i) O-rings that are exposed to potentially damaging treatment or harsh environments should be periodically inspected for surface damage, such as cracks or tears, in a slightly stretched state.
8.2.8.3 Flat metal gasket seals of the "Conflat®" or "CF" design

(a) CF (or "Conflat®") copper gasket seals (used primarily in UHV equipment) are an exceptionally reliable type of seal.
(b) Care should be taken not to damage the knife edges.
(c) The copper gaskets used in CF flanges should not be reused.
8.2.8.4 Metal-gasket face-sealed fittings for small-diameter tubing

Unlike CF seals, the beaded-edge type of seal, which is commonly used in face-sealed fittings for gases, is intolerant of sealing-surface damage or the presence of particulates.
8.2.8.5 Helicoflex® metal O-ring seals

Helicoflex seals are capable of functioning in UHV environments, and providing helium leak-tight seals (like CF seals). However, they are not as reliable as CF seals, or even ordinary rubber O-ring ones.
8.2.8.6 Indium seals for cryogenic applications

(a) Indium wire seals, which are commonly used for cryogenic vacuum-sealing applications, are generally highly reliable.
(b) Two common causes of indium seal failure are insufficient tightening of the flange screws, and changes in the sealing pressure due to differential thermal contraction of the flanges and their fasteners.
(c) The softness of indium, and therefore its ability to seal at a given flange pressure, depends on its purity. For most applications, the purity should be at least 99.99%.
8.2.8.7 Conical taper joints for cryogenic applications

(a) Conical taper joints are easy-to-use cryogenic vacuum seals, and are nearly (but perhaps not quite) as reliable as a well-designed indium seal.
(b) Although the silicone grease that is normally used in these devices produces an effective seal, its presence can lead to other problems.
(c) Apiezon greases are highly unreliable in comparison with Dow-Corning silicone grease, when used as cryogenic vacuum sealants.
8.2.8.8 Weld lip connections

Weld lip connections are welded stainless-steel joints that provide compact and exceptionally reliable seals. They can be made and unmade (like other seals in this section), but only for a relatively small number of times.
8.2.8.9 Pipe threads

Pipe thread fittings are very prone to leakage, and should be avoided – especially in applications involving high vacuum or pure gases.
8.2.9 Dynamic seals and motion feedthroughs

(a) Linear- or rotary-motion feedthroughs involving sliding seals are not very reliable, with regard to leakage, unless a differential pumping (or "guard vacuum") scheme is used. Linear-motion types are the most trouble prone.
(b) For low-speed linear or rotary motion under conditions of high vacuum, feedthroughs making use of metal bellows usually provide acceptable reliability.
(c) Feedthroughs using magnetic fluid seals are reliable and long-lived devices for low- or high-speed rotary motion.
(d) With regard to freedom from leaks, magnetic drives (in which motion is transmitted into the sealed environment using magnetic forces) are the most reliable feedthroughs for low- or high-speed rotary or linear motion.
(e) In certain situations, in which the transmission of motion into the sealed environment using normal methods would be complicated, the use of electric actuators (such as motors) inside the environment may be the best approach.
8.2.10 Valves

8.2.10.1 Introduction

(a) Generally, some particularly unreliable types of valve are (i) pressure relief valves, (ii) all-metal valves that must create a seal, (iii) metering valves, (iv) valves with sliding-seal motion feedthroughs, (v) gate valves, and (vi) check valves.
(b) Human error is a common cause of valve problems. This includes, for instance, overtightening leaky valves, and errors in determining valve status.
(c) High-quality valves employed in high- and ultrahigh-vacuum applications use bellows-sealed motion feedthroughs, rather than sliding-seal types.
(d) Valves that are particularly sensitive to the presence of particulates include (i) pressure relief and check valves, (ii) all-metal valves, (iii) metering valves, and (iv) precision multi-stage pressure regulators. For valves that handle gases and liquids (except possibly pressure relief types), filters should be placed immediately upstream.
8.2.10.2 Pressure relief devices

(a) Ageing of O-rings is a potential cause of sudden leaks in pressure relief valves. Viton O-rings should be used, if possible.
(b) Pressure relief valves can be damaged by "chattering" (repeated and rapid opening and closing), which can occur when the operating pressure is too close to the relief pressure.
(c) Pressure relief valves should be periodically tested to ensure correct operating pressure and freedom from leaks – especially after a pressure relief event.
(d) Rupture discs are very simple and dependable pressure relief devices. These can be used as alternatives to pressure relief valves, or in combination with them, in order to improve reliability.
(e) Rupture discs must be replaced periodically if they are exposed to pressure fluctuations near their rupture pressure. Reverse-acting rupture discs can be operated closer to their rupture pressure than un-scored, forward-acting types.
8.2.10.3 All-metal valves

All-metal valves should generally be avoided (when sealing is required), if possible, in favor of valves that use more resilient (e.g. elastomeric) seals.
8.2.10.4 Metering valves

(a) Metering valves (or leak valves) should generally not be used as shutoff devices. This function should be provided by a separate valve.
(b) Metering valves should always be provided with particle filters at their inlets.
(c) Gases passed through metering valves should be dry, since moisture and other condensable vapors can cause erratic blockages.
8.2.10.6 Gate, poppet, and load-lock vacuum valves

(a) Gate valves are relatively prone to mechanical faults in their actuation mechanisms. Nevertheless, leaks in motion-feedthrough bellows are the most common cause of failures.
(b) In many cases, poppet valves and load-lock valves are satisfactory and high-reliability alternatives to gate valves.
Mechanical devices and systems
8.2.10.7 Solenoid and pneumatic valves
(a) A common failure mode of solenoid valves is overheating of the electric windings. Care must be taken not to exceed the maximum recommended duty cycle, and to provide adequate cooling of these devices.
(b) Pneumatic valves can fail due to failure of their solenoid valves (used to control the compressed air), or problems with the air supply system itself. The latter includes air-compressor difficulties, and the presence of moisture in the air supply.
8.2.10.8 Advantages of ball valves – particularly for water
(a) Ball valves are very dependable and long-lived devices, and unusually so (in comparison with other valves) in the presence of particles and debris.
(b) Ball valves are particularly suitable as water shutoff valves, since (unlike many other water valves) they are resistant to freezing in position after long periods without operation.
8.2.10.9 Pressure regulators
It is essential to ensure that pressure regulators are compatible with the gas being controlled (e.g. high-pressure oxygen). Furthermore, they must be completely clean when oxidizing or reactive gases are being used.
8.3 Systems for handling liquids and gases
(a) Pipe networks should be designed without dead-end passages, if possible, since these tend to accumulate particles and debris.
(b) Small-diameter tubes – especially capillaries – are particularly prone to blockages by particles and debris.
(c) Blockages often arise because of the presence of debris and other contaminants that were left in the pipes during construction, such as metal chips (swarf), solder fluxes, and abrasives.
(d) Welding is the preferred method of joining pipes (especially in the case of stainless steel). Soft soldering should be avoided, if possible.
(e) The use of PTFE tape for sealing pipe threads should be avoided (especially in the case of gas systems) – since slivers of tape are likely to be released, and block downstream metering valves, filters, pressure regulators, etc.
(f) Filters can be monitored for blockages by measuring the pressure drop across them – e.g. by using “differential pressure indicators.”
(g) It is possible to obtain self-cleaning filters – particularly for water and other liquids.
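Point (f) can be turned into simple automatic monitoring. In the following Python sketch, a filter is flagged when its measured pressure drop grows well beyond its clean-filter baseline; the threshold factor of 3 is an illustrative assumption – a real system should use the filter manufacturer's recommended changeout differential.

```python
def filter_blocked(p_upstream, p_downstream, clean_drop, factor=3.0):
    """Return True when the measured pressure drop across a filter
    exceeds `factor` times the drop recorded when the filter was clean.
    All pressures must be in the same units, e.g. kPa."""
    measured_drop = p_upstream - p_downstream
    return measured_drop > factor * clean_drop
```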
Summary of some important points
8.4 Water-cooling systems
8.4.1 Introduction
(a) Water-cooling arrangements can be a major weakness in any experimental apparatus. If practical, the use of devices that employ forced air or convection cooling is likely to lead to significantly greater reliability.
(b) Water-cooling systems are apt to fail, or cause failures, as a result of, e.g., leaks, condensation, blockages, corrosion, failure of valves, pumps and other mechanical items, and human error.
(c) The preferred types of water-cooling system are those in which the water flows in a closed loop, and is sealed off from the outside air. Arrangements using tap water and (especially) those using evaporative cooling (e.g. with cooling towers) should be avoided.
8.4.2 Water leaks
(a) Whenever the ability to accommodate relative movement or vibration is not required, it is better to attach equipment to the cooling system by using rigid pipes, rather than hoses.
(b) High-quality, reinforced rubber hose should generally be used for flexible water lines. This should be designed to withstand the full static line pressure, with ample safety margins.
(c) Nitrile rubber is susceptible to degradation in environments with high concentrations of ozone (e.g. high-voltage equipment). EPDM rubber may be better for such applications.
(d) Hoses should be inspected for signs of degradation (e.g. hardening and the formation of cracks) at least once a year.
(e) Hose clamps should be of the highest quality. Stainless-steel worm-gear-type clamps with through-slots (not formed ones) are preferred.
(f) Failures of hose clamps often occur because they have been overtightened.
(g) Even when using reliable hose clamps, it is often good practice to provide redundancy by using two clamps to secure a hose.
8.4.3 Water purity requirements
(a) Water-cooling systems are frequently plagued by blockages, caused by particles, debris, and dissolved minerals in the water.
(b) The presence of particle filters (preferably placed immediately upstream of the water-cooled equipment) is essential.
(c) The use of ion exchange beds and/or reverse osmosis filtration to remove non-particulate contaminants from the water is preferable to adding chemicals such as sludge-dispersion compounds and corrosion inhibitors.
(d) The electrical resistivity of water is an important measure of quality. Values of about 1 MΩ·cm are usually suitable.
(e) Low resistivities indicate the presence of excessive concentrations of dissolved ions.
(f) Very-high-resistivity (e.g. de-ionized) water is corrosive.
(g) The pH of cooling water is also important – this should lie in the range 6–8.
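The resistivity and pH limits in points (d)–(g) can be combined into a single acceptance check, as in the Python sketch below. The 0.5–2 MΩ·cm acceptance band around the quoted 1 MΩ·cm value is an illustrative assumption.

```python
def cooling_water_ok(resistivity_mohm_cm, ph):
    """Check cooling-water quality: resistivity near 1 MΩ·cm (too low
    means excess dissolved ions; very high, e.g. fully de-ionized water,
    is corrosive) and pH within the range 6-8."""
    resistivity_ok = 0.5 <= resistivity_mohm_cm <= 2.0   # assumed band
    ph_ok = 6.0 <= ph <= 8.0
    return resistivity_ok and ph_ok
```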
8.4.4 Water-cooling system materials selection and corrosion
Pipes, fittings, valves, etc., in cooling water systems should preferably be made of an austenitic stainless steel, such as 304L or 316.
8.4.5 Condensation
In order to avoid condensation, the temperature of the cooling water must always be kept above the ambient dew point. This is especially important in electrical apparatus.
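The dew point can be estimated from the ambient temperature and relative humidity using the standard Magnus approximation, which allows a quick check of whether a given cooling-water temperature is safe. A Python sketch (the 2 °C safety margin is an assumption for illustration):

```python
import math

def dew_point_c(ambient_c, rel_humidity_pct):
    """Dew point in degrees C via the Magnus approximation
    (coefficients valid roughly from -45 C to 60 C)."""
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + a * ambient_c / (b + ambient_c)
    return b * gamma / (a - gamma)

def coolant_temp_safe(coolant_c, ambient_c, rel_humidity_pct, margin_c=2.0):
    """True if the cooling water is warm enough to avoid condensation,
    with an assumed safety margin above the dew point."""
    return coolant_c >= dew_point_c(ambient_c, rel_humidity_pct) + margin_c
```

For example, at 22 °C and 50% relative humidity the dew point is about 11 °C, so 18 °C cooling water is safe, while 10 °C water would sweat.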
8.4.6 Water flow and temperature interlocks and indicators
In order to prevent the inadvertent operation of apparatus in the absence of adequate cooling water, the apparatus must always be provided with a flow sensor connected to an interlock arrangement.
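The interlock decision itself is simple; the subtleties lie in choosing trip thresholds and in arranging the hardware so that a tripped (or faulted) state removes power. A minimal Python sketch of the decision logic, with threshold values that are illustrative assumptions to be replaced by the equipment's cooling specification:

```python
def cooling_interlock_ok(flow_lpm, outlet_temp_c,
                         min_flow_lpm=4.0, max_temp_c=35.0):
    """Return True if the apparatus may be powered: trips (returns False)
    on low coolant flow or excessive outlet temperature. A real interlock
    should be fail-safe, so that a sensor fault also removes power."""
    return flow_lpm >= min_flow_lpm and outlet_temp_c <= max_temp_c
```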
9
Cryogenic systems
9.1 Introduction
Lack of reliability in cryogenic apparatus is an especially troublesome problem. This is partly because of the inherent susceptibility of such apparatus to breakdowns. The difficulties are magnified by the inaccessibility of the low-temperature components while they are in their working environment, which makes troubleshooting difficult, and immediate repair of faults virtually impossible. Because of the long times that are often needed to cool the equipment down and warm it up again, the detection and correction of defective items can be a very tedious process. Some faults that manifest themselves at low temperatures may vanish when the apparatus is warmed to room temperature. Frequently, it is possible to identify and locate these only through the use of laborious cross-checks [1]. The high cost of liquid cryogens, and especially liquid helium, can lead to large expenditures when low-temperature faults appear. For these reasons, particular care is needed in the design, construction, and use of cryogenic equipment to prevent failures from occurring.
Reliability problems in cryogenic apparatus are primarily the result of the following effects.
(a) Leaks can occur into evacuated spaces used to provide thermal insulation. Even if such leaks are of negligible size at room temperature, they may become very large at low temperatures, owing to the special properties of gases and liquids under such conditions.
(b) Mechanical stresses are produced in the various parts during thermal cycling. These stresses are caused by differences in thermal expansion coefficients and temperature gradients. They can cause tubes to bend, wires and solder joints to break, leaks to form, and moving parts to seize. Fatigue failure (see Section 3.11) is potentially a serious issue.
(c) Items at very low temperatures are vulnerable to unacceptable temperature rises even in the presence of minuscule levels of heating.
(d) Support structures, pumping lines, electrical leads, and other items in a cryostat are often thin and hence relatively frail. This is due to the need to minimize heat leaks along these into the low-temperature environment.
(e) Soft solders, which are often used to make vacuum joints, tend to be brittle at low temperatures.
(f) Moisture in the air will condense, and possibly freeze, on cold surfaces. At the temperature of liquid helium, other gases in air will do the same.
(g) It is possible for large and potentially dangerous pressure buildups to take place in cryogenic equipment, due to the vaporization and expansion of cryogenic liquids and gases.
(h) Substances often display unusual behavior under low-temperature conditions (e.g. superfluidity of liquid helium, superconductivity in metals, very large paramagnetic susceptibility of solid and liquid oxygen). This means that ordinary techniques and devices, which would usually be very reliable in an ambient environment, may not be so in a cryogenic one.
Methods of creating reliable vacuum joints and dealing with leaks are presented in Chapter 6. Cryogenic demountable vacuum seals (e.g. indium seals) are discussed on pages 242–245. Cryogenic wiring is very vulnerable to damage and degradation. Such problems are considered on page 468. Information concerning cryogenic mechanisms is presented on pages 219 and 228.
An important strategy for the prevention of faults in cryogenic apparatus is to design them so that, whenever possible, unreliable items are at room temperature. For example, electrical circuits can be brought into a vacuum environment via room-temperature feedthroughs, and the wires routed to the low-temperature regions through pumping lines, or other tubing. (Cryogenic electrical feedthroughs, and especially homemade types, are generally prone to leaks.) Such placement substantially eliminates thermal stresses on these items, and also makes it possible to avoid notoriously troublesome low-temperature leak problems. (If any leaks do occur, they will be at room temperature, and hence should be very much easier to deal with.) It also makes it possible to employ vacuum feedthroughs that are in general use at room temperature, and readily available from commercial sources. Mechanical vacuum feedthroughs should also be at room temperature.
It is a good idea to arrange things so that potentially unreliable devices and systems in the apparatus can be tested at intermediate temperatures (e.g. 77 K and 4 K), while the apparatus is being cooled to base temperature. This can save much time if a fault should develop – especially in the case of apparatus, such as dilution refrigerators, which operate at the lowest temperatures. For instance, electrical circuits and cryogenic feedthroughs should be tested in this way.
9.2 Difficulties caused by the delicate nature of cryogenic apparatus
This is a point that is hard to overemphasize, but often overlooked. Cryogenic equipment, and in particular its vacuum joints, wiring, and mechanical support structures, tends to be very delicate and susceptible to mechanical damage from rough treatment. This damage takes the form of vacuum leaks, open or short circuits in the wiring (often intermittent), touches that result in excessive heat leaks, misaligned optics, and other problems. Such vulnerability is often most acute in the case of apparatus that must operate below 1 K (e.g. dilution refrigerators, adiabatic demagnetization systems, etc.).
For instance, it is not too uncommon for cryogenic inserts to become damaged (sometimes very severely so) if they happen to get stuck while being removed from their dewars. This can be especially serious if a hoist that provides a large mechanical advantage is used for removal, since large forces can be applied, and because the operator may not notice that something is going wrong. For example, an electrical cable that is attached to a delicate tube at the top of an insert may get caught on something, resulting in bending of the tube and consequent vacuum leaks.
Hand-operated hoists are preferable to electric ones for loading and unloading inserts into and from their dewars. This is because the former devices offer the possibility of tactile feedback to the operator, which can be essential in averting damage should something become stuck. For the same reason, the lifting capacity of the hoist should not be much larger than is necessary to handle the weight of the insert. The provision of a load-limiting safety device, such as a slip clutch (a type of torque limiter – see page 224), on the hoist should be considered.
Apparatus that is designed to go into helium storage dewars (i.e. “dip-stick cryostats” or “dipper probes”) is often at risk, because the formation of solid air or ice on the inside necks of the dewars is likely. This means that there is a good chance that the insert will freeze in place during a loading or unloading procedure. Every effort should be made to prevent the entry of air into a dewar – see the list of precautions on page 291. Solid air or ice that has built up on the inside neck can usually be cleared by lowering a close-fitting warm metal tube into it. If an insert becomes jammed due to the presence of these substances, it may be necessary to allow the entire dewar to warm up before attempting to remove it.
A common cryogenic device that is particularly prone to damage as a result of mishandling is the liquid-helium transfer tube (or “siphon”). This is partly because such tubes, which are generally very delicate, must routinely undergo manipulation during everyday use. Following use, they are often “stored” in inappropriate locations, where eventual damage is almost inevitable. Damaged helium transfer tubes are also difficult (if not impossible) to repair, and are expensive to replace. Touches between the inner and outer walls, caused by bends or kinks in the tube, are one common problem. Leaks at vacuum joints and (especially) in flexible bellows are another.
The following measures can help to prevent such problems.
(a) Always provide dedicated safe storage areas for transfer tubes (e.g. on a wall, using hangers for suspension).
(b) Provide protective cryogenic gloves in helium transfer areas, partly for reasons of safety, but also so that the transfer tube can be handled carefully when it is cold.
(c) Employ high-quality, all-welded (not soldered) commercial transfer tubes. The use of a split transfer tube (with a demountable joint in the horizontal section) can reduce the potential for damage due to awkward handling. Such items may also be easier to store safely than the standard type. Flexible transfer tubes, which contain metal bellows, can also be helpful in the first regard.
(d) Bending of flexible transfer tubes should be minimized, and they must never be overbent (so that they take a permanent set). The bellows in these devices are prone to fatigue failure.
Leak problems in joints and bellows are dealt with in Chapter 6. A discussion on the construction of cryostats, with information pertaining to mechanical robustness, can be found in Ref. [2].
9.3 Difficulties caused by moisture
In cryogenic apparatus, the most potentially harmful outcome of the presence of moisture (usually resulting from condensation) is the blockage, by ice, of gas or liquid passageways connected to exhaust vents and pressure relief devices. (Leaky relief devices, such as pressure relief valves, may themselves be the source of the moisture or air.) Other possible problems are as follows.
(a) The presence of ice in dewar openings may make it difficult to remove items, such as helium-transfer tubes or cryogenic inserts, from these.
(b) Cryogenic mechanical devices, such as needle valves and sample rotation mechanisms, can become immobilized if moisture is left inside them and allowed to freeze.
(c) If water is present in confined spaces, and is allowed to freeze, it may cause damage (since water expands as it solidifies). Moisture that has soaked into porous materials can cause the latter to degrade following repeated freezing and thawing cycles. Adhesive bonds between materials can thereby be destroyed by water that has found its way into them. Superconducting magnets are susceptible to being damaged this way [3].
(d) Water can also cause corrosion, short circuits, and changes in material properties due to moisture absorption.
The problems in (a) and (b) are very frequently also caused by solid air. Further information on moisture problems in a general context can be found in Section 3.4.2.
Some useful measures for preventing moisture-related problems are:
(a) ensuring that cold gas vents are warmer than 0 °C (see below),
(b) using devices such as vacuum locks, sliding seals, and bellows to prevent the unwanted entry of air into low-temperature regions (see also page 291),
(c) repairing or replacing leaky pressure-relief devices (see also the discussion on page 290),
(d) purging tubes and other enclosed spaces with helium in order to prevent the entry of air, to expel air, and to drive off moisture that has condensed on cold surfaces,
(e) waiting until items have warmed to room temperature before taking off vacuum cans and other protective enclosures, and
(f) removing any water from delicate items using a stream of dry nitrogen (this is much more effective than laboratory air), an oven at temperatures of 60–70 °C, or possibly by placing the items in a chamber, which is then evacuated using a rotary pump (see Ref. [4]).
Regarding point (a), one way of ensuring that vent openings are sufficiently warm is to attach plastic extension pipes over the vent tubes, which will allow the temperature of
cold nitrogen or helium to rise to over 0 °C before the gases enter the room-temperature environment [5].
9.4 Liquid-helium transfer problems
Troubles related to the transfer of liquid helium are among the most common basic difficulties associated with the use of cryogenic equipment. Although the loss of time resulting from such problems is a concern, the main issue is usually wasted liquid helium, and the very high costs associated with this. Some typical problems are as follows.
(a) Insufficient liquid helium may pass through the transfer siphon. A major cause of liquid loss in the siphon is a poor vacuum in the space between the inner and outer tubes. This is normally revealed by the appearance of condensation along the length of the siphon. Another potential cause of liquid loss is accidental contact between the inner and outer tubes (i.e. a “touch”), causing a heat leak that leads to excessive helium evaporation. This is frequently revealed by the appearance of icy patches on the siphon. Touches are often the result of damage.
Liquid helium may be prevented from passing through the siphon because of blockage. This is sometimes because of the presence of frozen air or ice inside the siphon (which may have entered it as gases following the most recent transfer). This problem can be prevented by purging the siphon with warm helium gas prior to the start of a liquid-helium transfer. Blockages are generally dealt with by warming up the siphon and purging it with warm helium gas.
Another possible cause of siphon blockages is the entry of particles of ice and frozen air, which are often suspended in the liquid helium near the bottom of a storage dewar. Bumping the intake of a siphon against the bottom of a dewar is often followed by immediate blockage. Such a plug may allow gas to pass through the transfer siphon, even though the movement of liquid is inhibited. Blockages can be reduced by ensuring that the transfer siphon never comes near the bottom of the storage dewar during a transfer. Siphons may also become blocked due to structural damage.
This may take the form of internal damage that is not visible on the outside of the siphon. For example, the inner tube may become flattened due to an overpressure of gas in the vacuum space [6]. The cone in the experimental cryostat that receives the transfer siphon, or the tube below it, may be blocked with solid air or ice. The apparatus should be warmed up in order to clear it.
(b) If the seal between the transfer siphon and the transfer siphon port (on the cryostat) is not airtight, cryopumping will take place. This causes air to enter the cryostat and freeze in the gap between the transfer siphon and the tube within the cryostat that guides it. This can cause the transfer siphon to become stuck firmly within the cryostat, and may necessitate warming the entire cryostat to 77 K or more in order to free it.
(c) Some other possible reasons for inadequate liquid helium transfer are [7]:
(A) the pressure in the storage dewar is too low (this may occur if the helium recovery line on the experimental cryostat has an unusually high pressure, or because of backpressure due to check valves in the recovery line),
(B) the storage dewar is empty (a good indication of this is that the pressure of the dewar drops rapidly after it has been re-pressurized),
(C) the transfer siphon has insufficient length to reach the liquid helium in the storage dewar (an extension tube for the siphon may be used to alleviate this, but helium losses will increase, and the extension should not be more than 30 cm long), or
(D) liquid nitrogen used to pre-cool an experimental cryostat prior to the transfer has not been completely removed (solid nitrogen has a very large heat capacity).
During a helium transfer, a particularly useful diagnostic technique is to watch the amount of ice forming on the outside of the helium recovery line. If the length of line covered with ice is too much or too little compared with predictions based on previous experience, an investigation into the possible causes may be warranted. Unusual behavior may indicate that too much or too little liquid helium is being transferred. A comprehensive discussion of liquid-helium transfer problems can be found in Ref. [7].
9.5 Large pressure buildups within sealed spaces

A particularly dangerous failure mode in cryogenic apparatus is the rapid increase of pressure in a closed space due to the vaporization of cryogens that have leaked into it. Even very small leaks can, over time, allow considerable quantities of liquid helium or nitrogen to enter a space that is intended to be completely sealed. If the apparatus is then warmed quickly to room temperature, these substances will rapidly expand. The dimensions of the leak might be too small to allow the gases to escape, and unsustainable pressures may arise within the space. For instance, the pressure created when liquid nitrogen is warmed to 300 K is about 2900 atm [8]. Dewars can blow up as a result of this phenomenon [6].

For this reason, all sealed spaces in a cryostat must be connected to a pressure relief device, such as a relief valve or a rupture disc, in order to allow gases to vent. This is especially important in the case of dewar vacuum-insulation jackets. Since relief valves in cryogenic systems sometimes malfunction, it is preferable to install two of them in each case, for redundancy [2].

Pressure relief valves are themselves prone to leakage. Of particular relevance in the present case is the tendency of the rubber O-rings in such valves to become hard when they get cold, which can prevent them from sealing properly. (Although leaks can arise in other ways – see page 252.) Leaky relief devices can allow air or water vapor to enter the lines leading to them, where it can solidify and form blockages. In this way, such devices may be rendered ineffective.

One type of relief valve that should generally be avoided is the “Bunsen valve.” This is a home-made device, which consists of a rubber tube that is connected to the pressurized vessel at one end, closed off at the other, and has a slit cut
along its side. Rupture discs are generally the most reliable type of pressure relief device for use in cryogenic equipment. Further information on pressure relief valves and rupture discs is provided on pages 252 and 254, respectively.

A relatively common problem with storage dewars is the blockage of their necks with solid air or ice. In principle, this can be an extremely dangerous occurrence. However, most modern commercial storage dewars are provided with relief devices in order to prevent the buildup of hazardous pressures. The removal of air or ice blockages from the necks of storage dewars can be achieved by using a stainless steel tube with a jagged end to bore through the obstruction [3].
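The 2900 atm figure quoted in Section 9.5 can be sanity-checked with a quick order-of-magnitude estimate. The sketch below computes only an ideal-gas lower bound, using an assumed handbook value for the density of liquid nitrogen; the ideal-gas law badly underestimates the true pressure at liquid-like densities, which is why the real-fluid value of Ref. [8] is several times larger.

```python
# Rough sanity check (ideal-gas lower bound) for the pressure reached when
# trapped liquid nitrogen warms to room temperature at constant volume.
# rho_liquid is an assumed approximate value for liquid N2 near 77 K.

R = 8.314           # gas constant, J/(mol K)
T = 300.0           # final temperature, K
M = 0.028           # molar mass of N2, kg/mol
rho_liquid = 807.0  # density of liquid N2, kg/m^3 (approximate)

molar_volume = M / rho_liquid   # m^3/mol occupied by the trapped liquid
p_ideal = R * T / molar_volume  # Pa, ideal-gas estimate at that density
print(f"ideal-gas estimate: {p_ideal / 101325:.0f} atm")  # roughly 700 atm
```

Even this deliberately conservative estimate is far beyond what a dewar vacuum jacket can withstand, which is why relief devices on all sealed spaces are mandatory.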
9.6 Blockages of cryogenic liquid and gas lines

A relatively common problem with cryogenic systems that pass liquid helium through small capillary tubes (e.g. those that use a “1 K pot” – see page 300) is blockages by solid air or ice. Several sources of these substances exist. For example, any air that is trapped in a capillary line during cool-down can freeze. Also, if air is allowed to enter the intake of the capillary line while the latter is still cold (possibly as a result of cryo-pumping following withdrawal of the apparatus from the dewar), air or moisture may condense inside it. If the apparatus is then re-cooled without taking any measures to remove such contaminants, they will freeze and form a plug. The usual method of preventing such problems is to completely purge lines of this type with dry helium gas (possibly for a period of several hours) prior to, and during, cool-down.

Another source of ice or solid air is the small particles of these substances that tend to collect in the liquid helium near the bottom of the dewar. (Suspensions of very fine particles of solid air in the liquid have a milky appearance.) Ice or solid air particles are especially abundant in dewars that are seldom warmed to room temperature, because the particles accumulate over time. These include, for example, storage dewars, and those used in cryogenic instruments into which samples can be inserted and withdrawn without warming the apparatus up (e.g. some commercial SQUID magnetometers). If they are freshly introduced into a cryogenic system, or disturbed after coming to rest on a surface, solid air particles can take hours or longer to settle [8].

The most useful general advice that can be given about preventing this is that one should ensure as far as possible that liquid helium is kept clean. The necessary precautions include the following.
(a) Prevent exposure of the liquid helium to the atmosphere, unless it is absolutely necessary – by covering up dewar openings, minimizing dipstick helium level measurements, repairing leaky valves, etc. Liquid helium should not be directly exposed to the air for more than a few seconds at a time. When dipstick cryostats are being withdrawn from storage dewars, it is essential to prevent the air that condenses on them from dripping into the dewars. (b) When large items are being lowered into a storage dewar, the entry of contaminants can be prevented or reduced with the aid of metal or plastic bellows, or sliding-seal
arrangements. Another technique is to slightly pressurize the dewar with a helium gas cylinder, in order to produce an outflowing stream of helium in places where air might otherwise enter [3].
(c) In those situations when a bellows or sliding seal cannot be used, condensed air can be prevented from dripping from a dipstick cryostat into its dewar by wrapping a cloth around the cryostat at the opening of the dewar. The cryostat can then be pulled out of the dewar through this improvised “cloth seal” [2].
(d) Regularly clean out helium dewars, either by warming them up (perhaps every few weeks) in order to allow contaminants to evaporate [2], or by taking other measures (see below).
(e) If possible, avoid the use of helium storage dewars that are known to be used for experiments involving dipstick cryostats. (Some apparatus of this type, such as certain commercial ³He refrigerators and devices making use of adiabatic demagnetization cooling, are themselves vulnerable to the presence of particulates, because they employ 1 K pots.)

The use of filters to prevent the entry of particles into capillaries can be beneficial [1]. If this is done, one should bear in mind that solid air particles may have submicron dimensions [8]. The filter mesh size should be as small as needed to cope with fine particles, but no smaller – because it may otherwise present an excessive flow impedance.

The selection of filter type is important. Sintered metal filters (comprising small metal particles that are bonded under heat and pressure) have been used [1]. However, these may fail as a result of thermal shock [9]. Sintered filters also tend to have large flow impedances. Woven wire-type filters (e.g. using a “dutch weave,” or a “twill dutch weave,” for the finest filtration) are particularly robust when exposed to cryogenic liquids [9], and have relatively small flow impedances.
The installation of a filter should be carried out with consideration of any requirements for helium purging of the capillary lines.

The removal of particles that have already found their way into a dewar can be difficult. Warming the dewar to room temperature may not always be an option. The use of in situ filtration methods has been given some attention in the literature. One article describes a scheme for passing the liquid helium through a special type of filter that is located inside the dewar. This device, called an “electret filter,”¹ uses electric fields produced by the filter material to remove small particles from a fluid without presenting a large flow impedance. This arrangement was employed in order to remove particles that were scattering light in laser experiments [10].

Cryogenic systems in which helium is circulated by external room temperature pumps (e.g. dilution refrigerator systems) can block because of air that is admitted through leaks in these devices. Another possible contaminant is hydrogen that is created by the cracking of the oil within any diffusion pumps [11]. Such systems often employ “cold traps” (essentially filters that operate by adsorbing vapors onto cold surfaces), which are intended to intercept the majority of this contamination. However, if they are loaded with large amounts of gas,
¹ An electret is an insulating material (e.g. polypropylene) that has been given a quasi-permanent electric charge.
traps soon become saturated and ineffective. Frequent cleaning of such traps is beneficial in forestalling plugs.

Rotary pumps used for circulating helium should normally be of a “helium sealed” design that is especially made for this purpose. Leak problems on rotary pumps that have not been helium sealed are primarily the result of inadequate drive-shaft or oil-level sight-window seals [12]. Ballast valves can be a major source of air if they are not tight. In fact, it is normally a good idea to remove the ballast valves from helium circulation pumps [7]. A possible way of avoiding shaft seal leak problems is to use pumps in which the coupling between the motor and the interior of the pump is accomplished by using a magnetic drive [3].

More information about causes of blockages can be found on page 261. Other discussions are provided in Refs. [6] and [13].
9.7 Other problems caused by the presence of air in cryostats

If films of air are present in cryogenic mechanisms when they are cooled to liquid-helium temperatures, the resulting films of solid air can prevent them from functioning [2]. (Solid air is very sticky.) Particles of solid air in a liquid-helium bath can have the same effect. Solid oxygen is very strongly paramagnetic, and if a superconducting magnet is present in a cryostat, particles of this substance can be drawn up into the field, where they can interfere with the operation of mechanisms, or cause problems with magnetic measurements. (Liquid oxygen is also strongly paramagnetic.)
9.8 Cryogen-free low-temperature systems

Many of the difficulties that occur in cryogenic work do so because of the presence of liquid cryogens (nitrogen and helium). In addition to the problems already discussed, others exist. For example, shortages of cryogens can happen on a day-to-day basis for various reasons. These include the non-delivery of commercial supplies, surges in demand, liquefier breakdowns, etc. (The breakdown of central helium liquefiers in particular is a classic low-temperature physicist’s nightmare.²) In the case of liquid helium, a further difficulty may arise because of the high cost of this substance. It is not unknown for research groups to be forced to cease their investigations for a considerable fraction of the year, because their helium budget has been depleted. Further uncertainties in a research program may arise during negotiations for helium allocations in a large multi-user laboratory.

In recent years, an alternative to the use of liquid cryogens has emerged in the form of mechanical cryogenic refrigerators, or “cryocoolers.” These generate the required
² Such facilities really have a chance of functioning adequately only if a top-quality machine is used, and it is run and maintained by highly competent personnel (see the discussion in Ref. [6]).
temperatures within the experimental equipment by means of self-contained apparatus that is run from the electricity supply. (Cooling water is usually also required, and this can be a cause of problems.) Perhaps the most commonly used machine of this type is the Gifford–McMahon (GM) refrigerator. It has the drawback of generating sizeable vibrations, which may cause difficulties during sensitive measurements [2]. A relatively new type of device, called a pulse-tube cryocooler, has a number of advantages over the GM kind. These include roughly 10² times lower vibration levels and potentially significantly greater reliability.³

Cryogen-free systems based on cryocoolers can potentially be much more reliable (all things considered) than traditional cryostats operating with bulk liquefied gases. The most important disadvantage of cryocoolers is their very high initial cost. Also, in certain applications, the vibrations generated by any type of cryocooler may be unacceptable. Instruments that contain a cryocooler sometimes have special provisions for allowing the cryocooler to be switched off, while remaining at base temperature for extended periods. This makes it possible to take measurements in the absence of vibrations, at least for a while. However, in some cases, this will be insufficient, and ordinary cryogen-cooled systems must then be used. A discussion of cryocoolers can be found in Ref. [2].
9.9 Heat leaks

Perhaps the most troublesome, and one of the most frequent, causes of unforeseen thermal loading in low-temperature apparatus is a poor vacuum. Leaks and outgassing are two common sources of unwanted gas in vacuum spaces. Another possibility is traces of gas that has been used as a heat transfer medium during cool-down of the apparatus. Helium presents the greatest problems – owing to its ubiquity in cryogenic apparatus, very high thermal conductivity, high vapor pressure at all but the lowest temperatures, and the difficulty of removing it by pumping. Teflon (or “PTFE”) is a plastic that is frequently used in cryogenic apparatus, because of its good low-temperature mechanical properties. However, it is porous, and can outgas enough helium to produce troublesome heat leaks [3].

Residual gases in cryogenic vacuum spaces are sometimes removed by placing activated charcoal in these places, which acts as a cryopump when cold. A potential problem with using this method at very low temperatures (near 4 K) is that the charcoal will pump helium – so that leak detection operations may be hindered (see page 176) [7]. Vacuum leak and outgassing issues are discussed in detail in Chapter 6.

Touches, which are unintended contacts between objects at different temperatures (“thermal short circuits”), are another cause of heat leaks. These may arise if parts, such as support structures in a cryogenic instrument, become bent. Another possibility is misalignment of
³ However, in the author’s (limited) experience with pulse-tube cryocoolers, these devices do produce noticeable vibrations, and are also acoustically noisy. Sound levels of 65 dB(A) were measured 1 m away from a commercial instrument containing such a device, and 70 dB(A) at 1 m from its separate compressor unit. GM refrigerators can produce similar levels of noise.
parts during the assembly of a cryostat. Common examples are the contacts that sometimes occur between heat shields in a cryostat and other heat shields, or with the vacuum can that surrounds them. Touches can often be avoided during the design of cryogenic apparatus simply by providing generous clearances between any items that may have such problems.

A useful diagnostic device for establishing the presence and location of touches is a “touch detector” [11]. As an example, suppose that it is necessary to determine whether a cylindrical heat shield is making contact with a surrounding vacuum can. To make a touch detector in this case, one places a piece of Kapton tape on the region of the shield that is most likely to make contact with a vacuum can. A piece of copper tape is then stuck over the Kapton (which insulates the copper tape from the shield). Wires from the copper tape and the vacuum can are run to the top of the cryostat, and the resistance between the two is used to indicate the presence of a contact.

In long and narrow tubes containing helium gas, which link the cryogenic and room-temperature environments, pressure oscillations driven by temperature gradients may arise. These disturbances (called “thermoacoustic” or “Taconis” oscillations) are very effective in generating heat in the low-temperature environment. Power output levels of about a few watts are typical [14]. Taconis oscillations are most commonly observed, and put to use, in the simple dipsticks that are employed to determine the amount of liquid helium in a dewar (i.e. “acoustic level detectors”). If these dipsticks are accidentally left to vibrate in a cryostat, they can cause considerable amounts of helium to evaporate. Such oscillations are also encountered in the neck tubes of dewars, or other gas-filled tubes in cryogenic equipment. As well as creating heat, Taconis oscillations can also cause vibrational disturbances that may affect sensitive measurements [1].
The frequencies of the oscillations are usually several tens of hertz [8]. Tubes with diameters of less than about 1 cm are particularly at risk [3]. A number of methods have been developed for damping the oscillations, which include (see Refs. [1], [8]):
(a) putting a wire or nylon fishing line in the warm end of the tube,
(b) creating several holes in the cold end of the tube,
(c) attaching an extra volume (a Helmholtz resonator) to the warm end of the tube, and
(d) if the warm end of the tube is attached to a helium gas exhaust line, placing a damping element, such as a valve, in this line.
The presence of Taconis oscillations, and the effectiveness of measures to eliminate them, can be established by placing a microphone in the helium gas [1]. Thermoacoustic oscillation problems are discussed in detail in Ref. [8].

At the temperatures created by the adiabatic demagnetization of copper (less than a few mK), the introduction of very small amounts of heat can present a major problem. At such temperatures, low-power phenomena such as the pickup of radiofrequency energy by the low-temperature stages, vibrations, and the relaxation of mechanical stress at press contacts, can all lead to troublesome heat leaks. These issues are examined in Refs. [1] and [3]. The reduction of vibration and the prevention of radiofrequency energy pickup are also discussed in Sections 3.5.3 and 11.2.2, respectively.
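The microphone diagnostic for Taconis oscillations lends itself to simple automation: a digitized signal can be checked for a dominant spectral peak in the tens-of-hertz range where such oscillations typically occur. The sketch below is illustrative only; the sampling rate, record length, and 40 Hz synthetic test tone are arbitrary choices, not values from the references.

```python
# Sketch: locating the dominant spectral component of a digitized
# microphone signal, as a way of spotting a Taconis oscillation.
import numpy as np

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the strongest spectral component."""
    windowed = samples * np.hanning(len(samples))  # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    spectrum[0] = 0.0  # ignore the DC component
    return freqs[np.argmax(spectrum)]

# Synthetic example: a 40 Hz oscillation buried in noise.
rate = 1000.0
t = np.arange(0, 2.0, 1.0 / rate)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 40.0 * t) + 0.3 * rng.standard_normal(t.size)
f = dominant_frequency(signal, rate)
print(f"dominant component near {f:.1f} Hz")
```

A persistent, sharp peak at a few tens of hertz that disappears when a damping measure is applied is consistent with a thermoacoustic oscillation.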
9.10 Thermal contact problems

9.10.1 Introduction

The conduction of heat away from material samples is often achieved by using electron transport through a metallic object, such as a length of copper or silver wire. Since the metallic link generally cannot be continuous, it is normally necessary to have one or more metal–metal contacts in the heat conduction path. Like their electrical counterparts, thermal contacts often present reliability problems. For example, in the case of press contacts made of copper, oxidation at the interface will result in a gradual decrease in the thermal conductance over time. Problems also result from a reduction of the force between the contacts as a result of differential thermal contraction during cooldown of the apparatus, or loosening of the fasteners that are used to apply the force. Lack of good thermal contact between thermometers and samples is by far the most common cause of temperature measurement errors (see page 301).
9.10.2 Welded and brazed contacts

The best thermal contacts between metals are achieved by TIG (tungsten inert gas) welding or brazing them together. (However, sometimes the alloys that form at the interfaces during brazing can have a high thermal resistance [1].) Soldering is also a possibility, although most solders become superconductors at sufficiently low temperatures. If the temperature of operation is much below the superconducting transition point, the solder will behave like an electrical insulator (such as a ceramic or plastic) with regard to its ability to transport heat. (Normal 60/40 Sn/Pb solder has a transition temperature of about 7 K.) Furthermore, solder has other potential problems related to reliability (e.g. low-temperature brittleness, and possible fatigue failure resulting from thermal cycling).

If thermal contact at temperatures much below 100 mK must be achieved, one should bear in mind that some cadmium- and zinc-bearing brazing alloys become superconductors (or contain superconducting regions) under such conditions [15,16]. This may have an effect on the thermal conductivity of these alloys at ultralow temperatures.
9.10.3 Mechanical contacts

9.10.3.1 Thermal conductance vs. contact force

In practice, mechanical press contacts are frequently used to join metal objects intended to conduct heat. When using press contacts, it is important to keep the following point in mind. As long as no filler material (such as grease) is applied between the contacts, the thermal conductance of the arrangement is dependent only on the force used to bring them together, and is independent of the apparent surface area of contact. The relationship between force and thermal conductance is generally an asymptotic one. Conductance will
increase very roughly in proportion to the applied force up to a certain level, beyond which further increases in force will have a diminishing effect.

[Fig. 9.1: Correct and incorrect dispositions of grease between two press contact surfaces for the purpose of enhancing their thermal conductance. The grease should be used only to fill in minute gaps between two surfaces that are otherwise in firm contact.]
9.10.3.2 Use of grease as a gap filler to enhance thermal conductance

On a microscopic scale, surfaces in contact have a rough texture, so that true metal-on-metal contact occurs only at isolated points (perhaps as few as three [17]),⁴ and not over the entire surface. For this reason, especially if the available force is small, it can be very beneficial to place a filler material, such as a low-temperature grease (e.g. Apiezon-N®), on the surfaces. When these are pressed together, and the excess grease is squeezed out of the gap (see Fig. 9.1), the thermal conductance of the small number of asperities in contact is greatly supplemented by that of the grease. Even though the grease is non-metallic (and as long as the temperature is not too low), its use can improve the conductance of press contacts by more than an order of magnitude (see Fig. 9.2).

A potential problem with the use of grease as a filler is that it becomes hard at low temperatures. If the applied force is too low, so that the grease layer is excessively thick, the embrittled grease can separate from the contact surfaces. This can make the thermal conductance worse than if the grease had not been used at all. To prevent this from happening, the contacts should be firmly clamped together at room temperature (so that the grease layer is as thin as possible), and the clamping force should be maintained as the contact assembly is cooled [17].
9.10.3.3 Indium foil as a gap filler

Indium foil is also useful as a filler material, in those situations in which sufficiently large forces can be applied to cause it to deform. For this purpose, it may be more reliable than grease, because (unlike the latter) it remains ductile at low temperatures. Also, unlike grease,
⁴ If the two surfaces are not being pressed together so firmly that they bend or deform, they can be stable if there exist only three points of contact between them. This fact is used in the kinematic (or geometric) design of apparatus with moveable parts.
indium bonds well to metal surfaces – acting as a kind of solder. (Unlike joints formed by true soldering, the bonds formed by compressing indium between two such surfaces will generally be to surface oxides, not the metals themselves.) For best adhesion of the indium to the surfaces, steps should be taken to ensure that the latter are completely clean. The pressure applied to the indium must be sufficient to cause it to flow. Indium undergoes a superconducting transition at 3.4 K, but this does not affect the thermal conductance of contacts made with it, at least down to 2 K [17]. Suitable indium foil (50–100 µm thick) can be obtained from commercial suppliers of scientific materials.

If soft gap fillers such as grease or indium foil are employed, the thermal conductance of the contact is far less dependent on the applied force than would be the case if no filler were used (see Ref. [17]). In such cases (and in contrast to the situation in which no filler is used), maximizing the apparent area of contact is important [2]. In the case of indium, relatively low contact forces (as little as 20 N) may be adequate for achieving good thermal conductance.

[Fig. 9.2: Thermal conductances of various press contact arrangements as a function of temperature, for a force of 670 N between the surfaces (data are from Ref. [17]). The terms “Cu-Apiezon” and “Cu-In” denote copper contact surfaces sandwiching Apiezon-N® grease and indium foil, respectively. The data points “Cu” and “Cu–Au” denote contacts involving bare copper on bare copper, and gold-plated copper on gold-plated copper, respectively. The points for “Cu–Al washer” denote a contact configuration involving a gold-plated aluminum washer sandwiched between two copper surfaces.]
9.10.3.4 Optimizing heat transport through direct metal-to-metal contacts

At the lowest temperatures, non-electrical-conductors such as grease, and superconductors such as indium, are ineffective as transporters of heat. Under such conditions, one must rely only on direct metal-to-metal contact. If copper contacts are involved, the use of gold
plating can be helpful in eliminating surface oxidation and thereby improving the thermal conductance (see Fig. 9.2).⁵ The discussion concerning the creation of electrical contacts in Section 12.3.4.2 of this book is of relevance. Since the thermal conductance is dependent on the applied force, the use of high-strength threaded fasteners (i.e. bolts, screws, etc.) to join the contact surfaces can be beneficial (see Ref. [1]). Strong fastener materials that are suitable for low-temperature use include, for example, hardened beryllium–copper and MP35N [18].

If thermal contact must be made to multi-strand wires, the use of a crimping technique, as discussed on page 422, can be beneficial. Crimp terminals are normally electroplated with tin, which becomes a superconductor at 3.7 K. If the assembly must be operated at temperatures very much lower than this, the tin should be replaced with a non-superconducting metal, such as gold.

Ensuring that the contact force is maintained in the presence of differential thermal contraction of the various parts of the contact assembly (i.e. the contacts and fasteners) can be a problem. One way of doing this is by using washers with a low thermal-expansion coefficient in the fastener assembly [1]. Suitable materials for this purpose are titanium, molybdenum, and tungsten. (Washers made of titanium are commercially available.) The main problem with this approach is that, since the compensation takes place at low temperatures, one cannot check it by direct inspection prior to cooling down. Furthermore, the thermal contraction properties of the materials making up the assembly must be accurately known, and their cooling rates must be roughly equal [19]. A useful summary of linear thermal contraction data for pure metals and alloys is presented in Ref. [3], and a comprehensive compilation can be found in Ref. [20].
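The compensation idea can be illustrated with a back-of-the-envelope sizing calculation. The sketch below uses approximate handbook values for the integrated linear contraction ΔL/L from room temperature to 4 K, and the choice of a copper stack with a stainless-steel fastener and a titanium washer is an assumption for illustration, not a recommendation from the references.

```python
# Sketch: sizing a low-contraction washer so that a bolted copper contact
# stack contracts by the same total amount as its fastener on cooling
# from ~293 K to 4 K. Contraction values dL/L below are approximate.

dl_copper = 0.0033  # copper contact stack
dl_bolt = 0.0030    # assumed stainless-steel fastener
dl_washer = 0.0015  # titanium washer

# Fraction x of the clamped length taken up by the washer such that
# x * dl_washer + (1 - x) * dl_copper == dl_bolt:
x = (dl_copper - dl_bolt) / (dl_copper - dl_washer)
print(f"washer fraction of clamped length: {x:.2f}")  # about 0.17
```

This also makes the quoted caveat concrete: the result is sensitive to the contraction data used, so those values must be accurately known for the materials actually in the assembly.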
A better method from this point of view is to use a spring washer, such as a Belleville type, which can change its shape to compensate for any relaxation of the contact assembly (see, e.g., Ref. [21]). Since it is possible to observe the compression of the washer at room temperature, one can be reasonably sure that the force between the contacts will be maintained at low temperatures as well. Furthermore, the use of Belleville washers can help to prevent the fasteners from unscrewing, whether from vibrations or other causes. Belleville washers made from strong materials that are suitable for use at low temperatures (e.g. beryllium–copper and Inconel 718) are commercially available. A hard flat washer (made of, e.g., beryllium–copper, Inconel 718 or MP35N) should be used to distribute the load from the Belleville washers when soft contact materials, such as copper, are used.

Thermal contacts are often made and unmade in setting up a cryogenic instrument for new experiments. In order to ensure that press contacts are engaged with a consistently high force from one experiment to the next, and so as to avoid damaging the fasteners, it is generally beneficial to use a torque wrench or torque screwdriver to tighten the latter.

If the contacts involve metal pressing against metal (without soft fillers), the assembly can be tested at room temperature by measuring its electrical resistance. Because of the low resistances involved, this must be done using a four-terminal or “Kelvin” setup (i.e. separate leads for the current and voltage). Commercial micro-ohmmeters are suitable for such measurements. Room-temperature contact resistances should generally be in the micro-ohm range or less. At a temperature of 4 K, contact resistances of 0.1 µΩ can be achieved
⁵ Unlike copper and silver, gold does not oxidize or tarnish.
routinely [1]. (The resistance is a relevant parameter because the thermal conductivities of pure metals (e.g. copper and silver) at low temperatures are directly proportional to their electrical conductivities, according to the Wiedemann–Franz–Lorenz law.)
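The Wiedemann–Franz–Lorenz relation makes the resistance test quantitative: for a pure-metal contact, the thermal conductance implied by a measured contact resistance R at temperature T is approximately K = L₀T/R, where L₀ ≈ 2.44 × 10⁻⁸ W Ω/K² is the Lorenz number. A minimal sketch of this conversion, applied to the 0.1 µΩ, 4 K contact mentioned above:

```python
# Sketch: converting a measured contact resistance into an estimated
# thermal conductance via the Wiedemann-Franz-Lorenz law, K = L0 * T / R.
# This applies to pure metals; fillers and superconductors are excluded.

L0 = 2.44e-8  # Lorenz number, W Ohm / K^2

def contact_thermal_conductance(resistance_ohm, temperature_k):
    """Thermal conductance (W/K) implied by a contact resistance."""
    return L0 * temperature_k / resistance_ohm

# Example from the text: a 0.1 micro-ohm contact at 4 K.
K = contact_thermal_conductance(1e-7, 4.0)
print(f"{K:.2f} W/K")  # about 1 W/K
```

Note that the conductance falls in proportion to T, which is one reason why contact quality becomes increasingly critical at millikelvin temperatures.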
9.10.3.5 Direct metal-on-metal vs. grease or indium thermal contacts

If available contact forces are low, or surface areas are large, one should place grease between the contacts [2]. (Alternatively, indium may be a more dependable alternative, if the pressure is sufficient to cause it to flow.) If the available force is high and the surface area is small, or ultralow temperatures are involved, use gold-plated metal-on-metal contacts without a soft filler.

Thermal contact issues are discussed extensively in Refs. [2] and [17]. Further information can be found in Ref. [1].
9.11 1 K pots

When apparatus must be cooled below 4 K, a 1 K pot is often a necessary component in the refrigeration setup. This device works by pulling liquid helium up through a fine tube from the main helium bath into a chamber by means of a vacuum. The action of pumping on the helium in the chamber (the “pot”) causes the latter to cool to temperatures of typically 1–2 K. The flow of helium into the pot is controlled by using a needle valve. In some cases where no flow control is needed, the valve is omitted and a fixed flow impedance is used.

These devices frequently malfunction as a result of:
(a) blockages of the filling tube or the needle valve,
(b) damage to the needle valve (e.g. from overtightening, or tightening the needle onto its seat when the latter is covered with solid air or ice), and
(c) seizure of the valve mechanism due to the presence of solid air or ice.

Blockages and valve seizure can often be avoided by thoroughly purging the 1 K pot system with helium gas prior to and during the cooling of the cryostat from room temperature. The 1 K pot line must not be pumped if there is any chance of this causing air or moisture to enter the filling tube. Other measures for preventing blockages, as discussed on pages 291–293, are also useful.

Air sometimes enters the 1 K pot system through a sliding vacuum seal that passes rotary motion from the needle valve control knob (in the ambient environment) to the shaft inside the 1 K pot space. This can be prevented by using a bellows-based motion feedthrough instead (see the discussions in Sections 8.2.9.1 and 8.2.9.2). Overtightening of the needle valve can be prevented by installing a mechanical torque limiter on the control shaft. Further information on all-metal valves, needle valves, and general valve issues can be found in Section 8.2.10. See also the discussion on cold valves in Ref. [1].
Whatever valve design is used, the device should be installed within the cryostat in a place where it can be easily accessed for repair. It should be possible to remove and replace damaged or worn needles or seats without having to demount the body of the valve [22].
9.12 Thermometry

9.12.1 Two common causes of thermometer damage

Cryogenic thermometers are generally fragile devices. In the case of commercial resistance thermometers, a common cause of failure is breakage of their very fine lead wires as a result of fatigue during handling. This often makes the thermometers completely worthless. Every effort must be made to avoid unnecessarily bending the lead wires, especially at the points where they enter the body of the thermometer [23]. Unfortunately, the wires are sometimes not supplied with bend reliefs at these places. Since the part of the thermometer at the entry points is generally a rigid material, such as epoxy, the minimum bend radius of the wire can easily be exceeded. It is recommended that the user take steps to correct such a deficiency, particularly in cases where the thermometer is not permanently installed. A good way of doing this is to apply a small amount of RTV silicone rubber to the base of each wire at its point of entry into the thermometer [24]. It is preferable to use an electronic-grade silicone rubber, rather than the standard type that is commonly employed in sealing applications: the former grade emits only alcohol during the curing process, whereas the latter generates corrosive acetic acid.

The soldering of cryogenic thermometer leads can be very hazardous for the leads themselves, particularly if they have a small diameter or are made of gold. The tin in common solders is a very good solvent for gold, and (to a much lesser degree) for copper and its alloys. Hence, this operation can easily destroy the wires, or at least seriously weaken them. Great care must be exercised during the soldering of gold wires [24]. More information on this subject can be found on page 419.
9.12.2 Measurement errors due to poor thermal connections

Inadequate thermal connections between thermometers and the samples being studied are the most common cause of temperature measurement errors [2]. They are much more often a source of trouble than uncertainties in thermometer calibration or instrumentation. A discussion of thermal contacts can be found in Section 9.10. The subject of making thermal connections between thermometers and samples is treated in detail in Ref. [2].
9.12.3 Measurement errors due to RF heating and interference

At temperatures below about 1 K, resistance thermometers start to become susceptible to internal heating by radiofrequency (RF) energy in the environment [25]. This energy is picked up by the electrical leads that enter the cryostat, whereupon it travels to the thermometer and causes spurious temperature rises – and hence, measurement errors – as a result of Joule heating. (In experiments involving nuclear (magnetic) cooling, which is used to reach the lowest possible temperatures, the RF energy can also cause general
heating of low-temperature regions of the cryostat [3].) In the case of diode thermometers, another potential problem is that RF noise may be rectified and produce a d.c. voltage error [2]. Radiofrequency interference (RFI) can be sporadic and unpredictable. RFI issues are discussed in detail in Section 11.2.2. Radio (particularly FM) and TV stations can be especially troublesome sources of RF energy in low-temperature work [26, 27]. Switching power-supplies (or switchers) in the vicinity of the cryostat can also be very problematic in this regard [1, 3]. In contrast with linear power supplies, switching types are prolific sources of electrical noise [28]. Hence, linear supplies should be used as much as possible in situations where RF heating might be a problem. Unfortunately, because of their compactness and other advantages, switchers are very common – especially in digital equipment such as computers and their peripherals. They are also frequently found in laboratory instruments, and are often used to run superconducting magnets. Extensive RF shielding of these devices, and heavy filtering of their leads, may be necessary [1]. Hybrid linear/switching superconducting magnet power supplies are available, which are claimed to produce very little RF noise, while retaining most of the beneficial characteristics of ordinary non-hybrid switchers. A good general solution to RF heating problems is to filter electrical leads where they enter the cryostat. With suitable precautions, an all-metal cryostat can act as an RF-shielded enclosure [27]. Suitable filters (see, e.g., Ref. [1]) can be placed on the vacuum side of the cryostat’s electrical feedthroughs, in the room-temperature environment. One useful strategy is to use commercial multi-pin connectors with built-in RF filters for this purpose [26]. All wires running into the cryostat should be filtered, since capacitive coupling of RF energy between nearby wires inside the cryostat can cause trouble [3].
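To get a feel for the component values involved, the attenuation of a single-pole RC low-pass section (series resistor in the lead, capacitor to the cryostat shield) can be estimated as follows. This is only an illustrative sketch: the resistor and capacitor values are hypothetical, and practical feedthrough filters are often multi-stage or lossy (e.g. ferrite-based) designs.

```python
import math

def rc_cutoff_hz(r_ohm, c_farad):
    """-3 dB cutoff frequency of a single-pole RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

def rc_attenuation_db(f_hz, r_ohm, c_farad):
    """Attenuation (in dB, positive number) of the filter at frequency f."""
    fc = rc_cutoff_hz(r_ohm, c_farad)
    return 10.0 * math.log10(1.0 + (f_hz / fc) ** 2)

# Hypothetical values: 1 kOhm series resistance, 1 nF feedthrough capacitor.
r, c = 1.0e3, 1.0e-9
print(f"cutoff: {rc_cutoff_hz(r, c) / 1e3:.0f} kHz")                      # ~159 kHz
print(f"attenuation at 100 MHz: {rc_attenuation_db(1e8, r, c):.0f} dB")   # ~56 dB
```

A cutoff well above the measurement frequencies, but far below the FM band, lets slow thermometer excitation signals pass while suppressing broadcast-band pickup by tens of decibels per filter stage.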
Cryogenic thermometers with built-in RF filters are commercially available. Commercial temperature controllers that employ square-wave resistance-thermometer excitation currents introduce very large amounts of RF energy, and so should be avoided in work much below 1 K [3]. Heater control arrangements that involve pulse-width current-modulation methods should likewise be shunned, for the same reason. Ground loop currents can also cause thermometer heating. Further information on electromagnetic interference (EMI) is provided in Chapter 11. Temperature measurement using resistance thermometers is generally less trouble-prone if their resistance is less than about 10⁴ Ω at the lowest temperature of interest [26].
9.12.4 Causes of thermometer calibration shifts

Shifts in the calibration of thermometers can take place as a result of thermal cycling between cryogenic and room temperatures. This is especially serious in the case of carbon resistor thermometers, but happens to most thermometers to some degree. (Carbon resistor thermometers are crude and comparatively unstable devices, and nowadays are used mainly for liquid-cryogen level sensing, and other undemanding tasks.) Bismuth-ruthenate and ruthenium-oxide thermometers are relatively susceptible to damage by rapid thermal cycling at low temperatures [2]. Thermometer calibration shifts can also take place due to excessive heating during soldering. This is especially true for ones based on carbon resistors [1]. Carbon resistor
thermometers should not be touched with a soldering iron after they have been calibrated [26]. Carbon-glass and germanium resistance thermometers are vulnerable to damage by overheating during soldering [2]. Mechanical shocks, aging, and exposure to nuclear radiation (gamma rays and neutrons) can also cause calibration shifts in various thermometers [29]. Carbon-glass thermometers are particularly vulnerable to damage from mechanical shock. If one of these devices is dropped just a few centimeters, its temperature response can be altered [2]. The thermometers with the lowest sensitivity to gamma and neutron radiation are the germanium, Cernox™, and rhodium–iron types. Platinum resistance thermometers are in the middle of the range. Diode and carbon-glass thermometers show the largest calibration shifts under exposure to such radiation. Aside from careful treatment of the sensors, problems due to these various effects can be avoided by periodically checking thermometers against a standard [30]. Another possible approach for reducing the likelihood of error, at least in some situations, is to use redundant thermometers. However, at least three thermometers are required (if there are only two, and they disagree, it will not be possible to choose between them).
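The three-thermometer arbitration described above amounts to taking the median reading and flagging any sensor that strays too far from it. A minimal sketch (the function name, readings, and tolerance are hypothetical):

```python
def vote_temperature(readings, tolerance):
    """Given at least three thermometer readings, return the median value
    and the indices of any sensors differing from it by more than
    `tolerance`.  With only two sensors a disagreement cannot be
    arbitrated; three allow the odd one out to be identified."""
    if len(readings) < 3:
        raise ValueError("need at least three sensors to arbitrate a disagreement")
    ordered = sorted(readings)
    median = ordered[len(ordered) // 2]
    suspects = [i for i, r in enumerate(readings) if abs(r - median) > tolerance]
    return median, suspects

# Hypothetical readings in kelvin; the second sensor has drifted.
t, bad = vote_temperature([4.21, 4.55, 4.23], tolerance=0.05)
print(t, bad)  # 4.23 [1]
```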
9.12.5 Other thermometer issues

As with other electrical sensors [31], a potentially important cause of cryogenic thermometer errors is the inaccurate transcription of calibration information from their data sheets into the data-collection computer (see the discussion on page 20). Another potential source of difficulties is that many commercial thermometers are not provided with identification markings that allow them to be unambiguously matched to their calibration data. If a number of similar thermometers get mixed up, they may have to be recalibrated. In cases where this is a possibility, it may be worthwhile to ask the manufacturer whether they can mark new thermometers with model and calibration numbers. If this is not possible, the addition of markings by the user should be considered. One possible approach is to mark the thermometer with dots or bands of some durable ink or paint, using different colors to represent numbers. A convenient coding scheme is the color code used in the marking of resistors and other electronic components.

An extensive discussion of cryogenic temperature measurement and control, including information on a variety of thermometers and their problems, can be found in Ref. [2]. This also contains advice on thermometer calibration and on setting the operating parameters of PID temperature controllers. Information on cryogenic thermometry can also be found in Refs. [1], [13], [25], and [30].
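The electronic-component color code referred to above assigns one color per decimal digit (black = 0 through white = 9), so a thermometer's ID number can be painted as a short sequence of bands. A small sketch of such an encoding (the ID value is hypothetical):

```python
# Standard electronic-component color code for the digits 0-9.
COLOR_CODE = ["black", "brown", "red", "orange", "yellow",
              "green", "blue", "violet", "gray", "white"]

def id_to_bands(serial):
    """Encode a numeric thermometer ID as a sequence of color bands."""
    return [COLOR_CODE[int(d)] for d in str(serial)]

def bands_to_id(bands):
    """Decode a sequence of color bands back into the numeric ID."""
    return int("".join(str(COLOR_CODE.index(b)) for b in bands))

print(id_to_bands(107))                           # ['brown', 'black', 'violet']
print(bands_to_id(['brown', 'black', 'violet']))  # 107
```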
9.13 Problems arising from the use of superconducting magnets

Superconducting magnets sometimes experience a type of fault, known as a “quench,” in which at least part of the wires that comprise them change from the superconducting to the normal state. This causes the magnet to suddenly release its stored energy into
the surrounding liquid helium. Quenches can be triggered by a variety of conditions, including [3]:

(a) exceeding the critical current of the magnet,
(b) taking the magnet from the persistent into the non-persistent mode (in the latter case, the current is provided by an external supply) while the leads are carrying the wrong current,
(c) poor electrical contacts in the magnet or its leads,
(d) mechanical shocks, and
(e) improper operation of the λ-point refrigerator (used to cool the magnet in order to allow it to operate at higher fields).

Ferromagnetic objects near a magnet can also cause quenches. They may do this by moving in the field gradient against the magnet – thereby producing a mechanical shock by impacting the magnet dewar. Alternatively, even if such objects are fixed in position, they may distort the field within the magnet and produce a quench in this way [5]. The amount of helium gas that is generated during a quench can be very large, and provisions have to be made to discharge this safely. Generally, quenches cause no damage. However, the liquid helium in the dewar is lost, and there may be a considerable delay involved in re-cooling the system.

If ferromagnetic objects are brought near a large superconducting magnet with high fringing fields (e.g. of a type used for magnetic resonance imaging), accidents can occur that may be much more serious than quenches [5]. Items made of steel, such as tools, gas cylinders, instrument racks, etc., can be picked up by the stray field, and hurled against the magnet with very considerable force. These objects can cause severe harm to both people and equipment. Even small items, such as hand tools, can dent vacuum vessels (possibly causing touches), and can even rupture them. It is possible to obtain non-magnetic tools made of beryllium–copper that can be used in the vicinity of large magnets.
Also, ferromagnetic-object detectors are available for use with large magnet systems, which can be mounted in a laboratory doorway in order to sense the passage of such items. These devices are not ordinary metal detectors, and nonferromagnetic metals, such as aluminum, will not trigger them. However, the most important means for avoiding these accidents are training and vigilance. In cryostats containing optical systems, one should consider providing any windows with guards, so that they cannot be destroyed by small loose ferromagnetic objects. If moisture that has found its way into a superconducting magnet is subsequently allowed to freeze, it can displace the windings and destroy the material (e.g. epoxy, wax, etc.) that has been used to encapsulate them. (See page 288.) Hence, it is essential to ensure that superconducting magnets are completely dry when they are cooled. The fringing fields of superconducting magnets are capable of destroying the data on magnetic storage media, such as magnetic tapes [5]. In order to prevent such events (and others, discussed above), it is useful to mark, or possibly barricade, the area around the magnet where the field exceeds a level of 10 G. Image intensifiers, photomultiplier tubes, accurate measuring scales, and electron microscopes can be affected by fields as small as 1 G. (See page 344 concerning photomultipliers.) Magnetic fields can also affect the
operation of electronics. Transformers can become saturated at fields of about 50 G. Other electronic devices should generally be kept out of areas where the field exceeds 100 G. Keep in mind that the field can extend into neighboring rooms, including those on different floor levels. Magnetic shielding is discussed in Section 11.2.3.3.
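Far from the magnet, the fringe field falls off roughly like that of a dipole, i.e. as 1/r³, so a single measured field value gives a rough estimate of where a given contour (such as the 10 G or 100 G levels mentioned above) lies. The sketch below assumes a pure dipole far field and a hypothetical measured value; for safety planning, the manufacturer's field map should always be used instead.

```python
def contour_radius_m(b_measured_gauss, r_measured_m, b_contour_gauss):
    """Estimate the distance at which the fringe field falls to b_contour,
    assuming a dipole-like 1/r**3 falloff far from the magnet.  A rough
    guide only -- not a substitute for the manufacturer's field map."""
    return r_measured_m * (b_measured_gauss / b_contour_gauss) ** (1.0 / 3.0)

# Hypothetical measurement: 1000 G at 1 m from the magnet center.
print(f"10 G contour:  ~{contour_radius_m(1000.0, 1.0, 10.0):.1f} m")   # ~4.6 m
print(f"100 G contour: ~{contour_radius_m(1000.0, 1.0, 100.0):.1f} m")  # ~2.2 m
```

The slow cube-root dependence explains why stray fields can extend into neighboring rooms: reducing the field by a factor of 1000 only triples the distance.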
Further reading

Some particularly useful sources of information on cryogenic reliability issues are Refs. [1], [2], [3], [7], and [13]. Many general concerns in low-temperature research, especially regarding work at temperatures above 1 K, are covered in Ref. [2], and this book is recommended.
Summary of some important points

9.1 Introduction

(a) Because of the inherent susceptibility of cryogenic equipment to reliability problems, and the inaccessibility of trouble-prone items when these are cold, particular care must be taken when designing, constructing, and using cryogenic apparatus.
(b) Trouble-prone items in cryogenic equipment, such as electrical and mechanical vacuum feedthroughs, should be located in the room-temperature environment, if possible.
(c) Things should be arranged so that unreliable items that must be in the cryogenic environment can be tested at intermediate temperatures (e.g. 77 K and 4 K), while the apparatus is being cooled to base temperature.
9.2 Difficulties caused by the delicate nature of cryogenic apparatus

(a) The inherent fragility of many types of cryogenic apparatus (especially those used below 1 K) is often overlooked.
(b) It is not uncommon for cryostats to be damaged when they are removed from their dewars.
(c) In order to minimize this type of problem, hand-operated hoists are preferred to motorized ones (to allow tactile feedback for the operator), and the hoist should have the smallest capacity needed to lift the cryostat.
9.3 Difficulties caused by moisture

The presence of moisture in cryogenic equipment, which is usually a result of condensation, can cause many problems when it freezes. These include blockages of gas and liquid
passageways, seizure of mechanical devices, damage to structures that confine moisture, and degradation of materials following repeated freeze/thaw cycles.
9.4 Liquid-helium transfer problems

(a) Heat leaks between the outer and inner tubes comprising a helium transfer siphon are often a cause of helium transfer difficulties. These can be caused by a loss of vacuum in the space between the tubes, or by a touch between them.
(b) Helium transfer tubes are often blocked because of air or moisture that has found its way into them (following an earlier transfer) and frozen. This can be prevented by purging a transfer tube with warm helium gas before re-lowering it into the storage dewar.
(c) Such tubes are also often blocked because they are allowed to come into contact with particles of solid air and ice at the bottom of the storage dewar.
9.5 Large pressure buildups within sealed spaces

All sealed spaces in low-temperature equipment that are surrounded by cryogenic liquids or gases must be provided with pressure relief devices. This is to allow liquids or gases that may inadvertently enter them (perhaps via small leaks) to vent safely in the event of sudden warming.
9.6 Blockages of cryogenic liquid and gas lines

(a) Fine cryogenic liquid and gas lines (capillaries), helium transfer tubes, and needle valves are often blocked by solid air or ice – perhaps because air or moisture entered them before they were cooled down.
(b) A useful general method of reducing such occurrences is to purge these items with helium gas before the apparatus is cooled.
(c) Blockages also occur when liquid helium is drawn into the above items from a dewar, which may be contaminated with particles of solid air or ice. Measures should be taken to keep the liquid in such dewars as clean as possible.
(d) In cryogenic apparatus in which gases are circulated, such as dilution refrigerators, contaminants from pumps (especially rotary pumps) can be an important source of blockages.
9.7 Other problems caused by the presence of air in cryostats

(a) Solid air, either in the form of thin films or particles suspended in liquid helium, can interfere with the operation of cryogenic mechanisms.
(b) Oxygen, whether in solid or liquid form, is strongly paramagnetic. It can move around in the presence of a magnetic-field gradient, and interfere with magnetic measurements.
9.8 Cryogen-free low-temperature systems

All things considered, cryogen-free low-temperature systems can be much more reliable than conventional cryostats employing liquid cryogens. However, cryocooler vibrations may be a problem for some sensitive experiments.
9.9 Heat leaks

Heat leaks in cryogenic systems may arise as a result of poor vacuums, touches between parts at different temperatures, and thermoacoustic oscillations in slender tubes containing helium gas. At the lowest temperatures, radio-frequency energy in the environment, vibrations, and mechanical stress relaxation can also create unexpected heat leaks.
9.10 Thermal contact problems

(a) The most reliable method of creating high-thermal-conductance joins between two metallic parts is to weld them together.
(b) If mechanical press contacts must be used to carry heat, and high forces cannot be applied, very large enhancements in thermal conductance can be achieved by using a soft substance (such as Apiezon-N grease or indium) to fill in the tiny gaps between two closely fitting contact surfaces.
(c) The thermal conductance of such filled contacts is proportional to their apparent surface area.
(d) If high contact forces can be applied and the contact surface area is small, or if ultralow temperatures are involved, bare metal press contacts (i.e. free of soft fillers) should be employed.
(e) Bare-metal press-contact surfaces should be gold-plated in order to ensure that they remain free of oxidation.
(f) The thermal conductance of a bare-metal press contact depends only on the force between the contacts, and not on their apparent surface area.
(g) Bare-metal press contacts can be tested at room temperature by measuring their electrical resistance using a four-terminal (Kelvin) setup.
9.11 1 K pots

1 K pots are amongst the most trouble-prone devices in cryogenic equipment. They can fail because of blockages, damage to the needle valve, and seizure of the valve mechanism due to the presence of solid air or ice.
9.12 Thermometry

(a) Cryogenic thermometers are relatively fragile devices. Particular care must be exercised when bending their leads, which are susceptible to fatigue failure, especially at the places where they enter the thermometer.
(b) Inadequate thermal connections between thermometers and samples are the most common cause of temperature measurement errors.
(c) At temperatures below about 1 K, resistance thermometers are susceptible to heating by radio-frequency (RF) energy in the environment.
(d) A good general method of preventing the heating of thermometers by RF energy is by placing filters on cryostat leads at their point of entry into the cryostat.
9.13 Problems arising from the use of superconducting magnets

(a) Superconducting magnets can quench (lose their superconductivity and dump their stored energy into the helium bath) due to a variety of causes. These include, e.g., exceeding the critical current, poor electrical contacts in the magnet or its leads, mechanical shocks, and the presence of ferromagnetic objects.
(b) Ferromagnetic objects, such as steel tools and gas cylinders, can become very dangerous projectiles if they are brought near a large high-field superconducting magnet.
(c) Since freezing water can damage a superconducting magnet, it is essential to ensure that such magnets are completely dry before cooling them.
(d) The stray fields from superconducting magnets can erase nearby magnetic storage media, such as magnetic tapes. They can also disrupt the operation of a variety of devices, including photomultiplier tubes, electron microscopes, and transformers.
References

1. F. Pobell, Matter and Methods at Low Temperatures, 2nd edn, Springer, 1996.
2. J. W. Ekin, Experimental Techniques for Low-Temperature Measurements: Cryostat Design, Material Properties, and Superconductor Critical-Current Testing, Oxford University Press, 2006.
3. G. K. White and P. J. Meeson, Experimental Techniques in Low-Temperature Physics, 4th edn, Oxford University Press, 2002.
4. G. J. Murphy and P. O. Pearce, in Applied Cryogenic Engineering, R. W. Vance and W. M. Duke (eds.), John Wiley & Sons, 1962.
5. Safety Matters for Cryogenic and High Magnetic Field Systems – Issue 1.2, Oxford Instruments Ltd, 1995. Address: Old Station Way, Eynsham, Witney, Oxon, UK. www.oxinst.com
6. A. J. Croft, Cryogenics 3, 65 (1963).
7. N. H. Balshaw, Practical Cryogenics: an Introduction to Laboratory Cryogenics, Oxford Instruments Superconductivity Ltd., 2001.
8. D. E. Daney, J. E. Dillard, and M. D. Forsha, in Handbook of Cryogenic Engineering, J. G. Weisend II (ed.), Taylor & Francis, 1998.
9. P. D. Fuller and J. N. McLagan, in Applied Cryogenic Engineering, R. W. Vance and W. M. Duke (eds.), John Wiley & Sons, 1962.
10. I. I. Abrikosova and A. I. Shal’nikov, Instrum. Experiment. Techniques 2, 593 (1970).
11. G. Nunes, Jr., and K. A. Earle, in Experimental Techniques in Condensed Matter Physics at Low Temperatures, R. C. Richardson and E. N. Smith (eds.), Addison-Wesley, 1988.
12. G. Nunes, Jr., in Experimental Techniques in Condensed Matter Physics at Low Temperatures, R. C. Richardson and E. N. Smith (eds.), Addison-Wesley, 1988.
13. R. C. Richardson and E. N. Smith (eds.), Experimental Techniques in Condensed Matter Physics at Low Temperatures, Addison-Wesley, 1988.
14. B. A. Hands, in Cryogenic Engineering, B. A. Hands (ed.), Academic Press, 1986.
15. J. R. Thompson and J. O. Thomson, Rev. Sci. Instrum. 49, 1485 (1978).
16. J. Landau and R. Rosenbaum, Rev. Sci. Instrum. 43, 1540 (1972).
17. L. J. Salerno and P. Kittel, in Handbook of Cryogenic Engineering, J. G. Weisend II (ed.), Taylor & Francis, 1998.
18. I. R. Walker, Cryogenics 45, 87 (2005). (This reference contains information about the low-temperature mechanical properties of beryllium–copper and MP35N.)
19. W. J. Tallis, in Cryogenic Engineering, B. A. Hands (ed.), Academic Press, 1986.
20. Y. S. Touloukian, R. K. Kirby, R. E. Taylor, and P. D. Desai, Thermal Expansion – Metallic Elements and Alloys, Plenum, 1975.
21. I. Didschuns, A. L. Woodcraft, D. Bintley, and P. C. Hargrave, Cryogenics 44, 293 (2004).
22. A. J. Croft, Cryogenic Laboratory Equipment, Plenum, 1970.
23. J. K. Krause and P. R. Swinehart, Photonics Spectra, pp. 61–68, August 1985.
24. Installation and Application Notes for Cryogenic Sensors – Rev. 3/77, Lake Shore Cryotronics, Inc., 575 McCorkle Blvd., Westerville, OH, USA. www.lakeshore.com
25. D. S. Holmes and S. S. Courts, in Handbook of Cryogenic Engineering, J. G. Weisend II (ed.), Taylor & Francis, 1998.
26. A. C. Anderson, Rev. Sci. Instrum. 51, 1603 (1980).
27. R. S. Germain, in Experimental Techniques in Condensed Matter Physics at Low Temperatures, R. C. Richardson and E. N. Smith (eds.), Addison-Wesley, 1988.
28. P. Horowitz and W. Hill, The Art of Electronics, 2nd edn, Cambridge University Press, 1989.
29. Temperature Measurement and Control – 1995 part 1 (catalogue), Lake Shore Cryotronics, Inc., 575 McCorkle Blvd., Westerville, OH, USA. www.lakeshore.com
30. B. W. Ricketson, in Cryogenic Engineering, B. A. Hands (ed.), Academic Press, 1986.
31. B. Betts, IEEE Spectrum 43, No. 4, 50 (2006).
10 Visible and near-visible optics
10.1 Introduction

In certain types of optical measurements, such as those making use of interference, the most elusive sources of reliability difficulties are structural vibrations, acoustic disturbances, and temperature variations in the optical path. The first two phenomena are often a problem in many types of sensitive measurement in research, and these issues are discussed in detail in Section 3.5.3. Temperature effects are examined in the following section. Precision positioning devices in optical systems can cause difficulties due to mechanical instabilities and other problems. These are discussed in Section 8.2.3.

The most general problem in optical work – affecting all apparatus to a greater or lesser degree – is contamination of optical surfaces. At one end of the scale, the problem may be merely scattering of light by dust (a serious issue in the case of, for example, diffraction gratings), which can often be solved by protecting and/or cleaning the surfaces using elementary methods. At the other extreme, when high-powered lasers are involved, contamination of optical surfaces may result in damage, or even destruction, of optical components. The process of cleaning optical surfaces may itself be difficult, depending on the character of the devices involved and the nature of the contaminants. If done frequently, cleaning can easily be more detrimental to the performance of the optics than the presence of contamination. These issues are discussed in more detail in Section 10.6.
10.2 Temperature variations in the optical path

Relatively small changes in the temperature of the air in an optical path can result in effective path-length changes that are comparable with the wavelength of light. For example, in the case of light in the visible range, a 1 K increase in air temperature over a distance of 1 m, at standard temperature and pressure, will result in an optical path change of 1 µm [1]. This is clearly a potential problem for measurements involving interference, such as interferometry and holography. It can also be important for optical alignment operations, where a laser beam must be confined to a straight path over large distances. Optical imaging over large distances (e.g. astronomy) is also affected by this phenomenon. Although these effects are sometimes attributed to “air turbulence,” it is not the movement of air as such that causes the problems, but the movement of warm or cold air into the optical path.
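The 1 µm figure quoted above follows directly from the temperature dependence of the refractive index of air, which is roughly dn/dT ≈ −1 × 10⁻⁶ K⁻¹ near standard temperature and pressure at visible wavelengths (an approximate value). A minimal calculation:

```python
def optical_path_change_m(length_m, delta_t_k, dn_dt=-1.0e-6):
    """Change in optical path length (geometric length times refractive
    index) when the air along a path warms by delta_t_k.  dn/dT for air
    near STP is roughly -1e-6 per kelvin at visible wavelengths
    (approximate value)."""
    return length_m * dn_dt * delta_t_k

# A 1 K rise over a 1 m path: about one micrometre of optical path.
dl = optical_path_change_m(1.0, 1.0)
print(f"{dl * 1e6:.1f} um")  # -1.0 um
```

Since visible wavelengths are around 0.5 µm, such a change corresponds to roughly two fringes in an interferometer, which is why even modest thermal drafts are visible in interferometric data.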
The primary method for reducing such effects is to limit the presence of sources of heat or cold in the vicinity of the optical path. (Heat is usually the major problem.) Hence, electric heaters, large items of electronics, computers, etc., should be switched off, or located well away from the instrumentation. Air-conditioning ducts must be pointed away from the area, and the opening of building doors and windows should be restricted. Lasers are often a major source of heat near optical apparatus. If thermal effects are a major concern, and the heat-producing device is air cooled, the item can be placed in an enclosure, and the warm air can be taken from the area through a duct using a blower fan. In some cases, consideration might be given to the use of water cooling to carry away the heat. (However, water cooling has its own considerable reliability problems, as discussed in Section 8.4.) If possible, the optical path should be placed in an enclosure to protect it from air currents. Transparent boxes made of polymethylmethacrylate (e.g. Plexiglas or Perspex) are often very suitable for this purpose, and also serve to keep dust off the optics. This approach can present problems due to path length effects caused by the presence of a boundary layer near the wall of the enclosure. To prevent this, the edge of a light beam should be kept at least half a beam diameter away from the wall [2]. If the optical system must be completely free of air-path effects, consideration might be given to placing it in a partial vacuum. If this is not practical, the replacement of air in the enclosure with helium is almost as effective. This is because of the very high thermal conductivity of helium and its low index of refraction [1]. It may not always be possible to enclose optical paths. When light beams are being used in the open (e.g. to carry out alignment), another method of reducing air-temperature variation effects may be beneficial. 
This involves the use of fans to homogenize the air around the beam [3]. If this is done, it is important to blow the air along the path of the beam, and not at right angles to it (which can make matters worse). If temperature variations are having an effect on interferometric measurements, improvements can be obtained by using appropriate optical designs. Generally, the lengths of vulnerable optical paths should be minimized. One particular approach for reducing the problems is to use a “common-path interferometer”, in which the test and reference beams travel over nearly the same optical path [4]. This arrangement also reduces perturbations due to vibrations, as discussed on page 82. Common-path interferometers are used frequently in the testing of optical components. Near-exact coincidence of the test and reference beams is not necessary in order to obtain useful improvements. Even if the beams travel on distinctly separate, but nearby, paths, helpful reductions in air-temperature variation effects can result. For example, if a Michelson interferometer is involved, replacing the standard configuration, in which the test and reference beams are perpendicular to one another, with a parallel-beam modification can be beneficial (see Fig. 10.1). The latter arrangement is used, partly for this reason, to control some types of ruling engine used in the creation of diffraction gratings [5]. In some cases, reducing the duration of the measurements and/or signal averaging may provide a solution to problems posed by air-temperature variations. Air turbulence effects in the vicinity of large telescopes, caused by items such as electronic equipment and motors, are a major problem in the field of astronomy. This issue is discussed in considerable detail in Ref. [6].
Fig. 10.1 A conventional Michelson interferometer (a) and a parallel-beam modification (b). The latter reduces air-path effects, and is used to control ruling engines used to make diffraction gratings (see Ref. [5]).

A step-by-step procedure for reducing a.c. phase noise in interferometric devices (resulting from air-path temperature variations, floor vibrations, acoustic disturbances, etc.) is provided in Ref. [7].
10.3 Temperature changes in optical elements and support structures

Temperature variations in solids can produce a number of undesirable effects in optical apparatus. They may induce surface deformations of optical components, change the relative position of these, and can produce effects due to changes in the index of refraction of a lens material [1,8]. If very sudden changes of temperature take place, the resulting mechanical stresses in optical materials can be large enough to cause breakage [1]. (This can be an especially serious problem for some transmissive infrared and ultraviolet optical materials – see Section 10.7.1.) The most important consequence of thermal effects is usually the change in focus of an optical assembly with temperature [1]. Such behavior affects infrared lenses especially severely. This is followed by component distortions caused by differences in thermal expansion coefficients. The stresses on optical components due to differences between their thermal expansion coefficients and those of the structural materials comprising their mounts can be a particularly serious cause of distortions.
Measurements making use of interferometry can be affected by shifts in the position of items as a result of thermal expansion. Temperature changes can also cause mechanical devices, such as translation and rotation stages, to bind [9], and ball bearings to lose their preload. The bending of structures in the presence of temperature gradients is a very common and troublesome effect, and should be the first thing to be examined if a temperature-induced mechanical problem is suspected [9]. This is because the total deflection caused by bending builds up along the length of the bending object. For this reason, temperature gradients are generally much more problematic than uniform changes in temperature over time. Another serious problem caused by temperature shifts is changes in the bandpass wavelengths of interference filters [10]. In laboratory situations, the easiest method of dealing with these effects is often to reduce temperature variations by controlling the environment. Banishing heat sources, as discussed in the previous section, is one helpful measure. The regulation of room temperatures is discussed in Section 3.4.1.2. In some situations, it may be worthwhile to place the apparatus in a closed box. Temperature gradients can then be greatly reduced by placing a fan inside to stir the air and homogenize the temperature. Further improvements can be obtained by using nested boxes – with each space having its own fan. Using this method, it is possible to reduce temperature differences to less than 3 mK near room temperature [9]. Cooling fans can, however, cause vibration problems. If changes in temperature with time must be reduced, the use of a temperature controller may be beneficial. Temperature-control issues in optics are discussed in Ref. [9]. When the apparatus is being designed, the selection of materials to minimize thermal effects can be beneficial.
For example, the matching of thermal expansion coefficients in rigidly coupled optical structures is desirable if significant temperature changes are foreseen. If rapid changes in temperature are expected, it may be helpful to use materials that have a high thermal conductivity. The employment of materials with very low thermal expansion coefficients is also beneficial in this regard (see the next section). The use of such materials is possibly the best passive solution to problems involving temperature gradients [1]. In situations where temperature control is impractical or inadequate, it may be useful to provide the optical system with some form of mechanical compensation arrangement. These automatically counteract temperature-induced changes by moving an optical element in response to changes in ambient temperature (e.g. in order to keep an image in focus). Such schemes (called “automatic mechanical athermalization”) may be either passive or active. Passive arrangements may utilize, for example, a liquid with a high thermal expansion coefficient in a bellows device, which maintains focus without the need for a powered drive mechanism. Active arrangements, in contrast, involve some powered closed-loop electronics system (perhaps using an electric motor operated by a computer) to achieve this aim. Non-mechanical “optical athermalization” techniques have also been developed. These methods are discussed in detail in Ref. [8].
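The dominance of gradient-induced bending noted earlier in this section can be made concrete with a simple uniform-gradient model (a sketch only; the bar dimensions, material values, and function name are illustrative assumptions). A bar with a temperature difference ΔT across its thickness h takes on a curvature αΔT/h, so the tip of a member of length L deflects by αΔT·L²/(2h), growing quadratically with length:

```python
# Sketch: tip deflection of a cantilevered support bar bending under a
# through-thickness temperature difference (uniform-gradient model):
#   delta = alpha * dT * L**2 / (2 * h)
# Note the quadratic growth of the deflection with length.

def thermal_bend_deflection(alpha, delta_t, length, thickness):
    """Tip deflection (m) for expansion coefficient alpha (1/K)."""
    return alpha * delta_t * length**2 / (2.0 * thickness)

ALPHA_AL = 24e-6      # thermal expansion of aluminum (1/K)
ALPHA_INVAR = 1.6e-6  # thermal expansion of Invar (1/K)

for name, alpha in [("aluminum", ALPHA_AL), ("Invar", ALPHA_INVAR)]:
    d = thermal_bend_deflection(alpha, 1.0, 0.5, 0.05)  # 1 K across 5 cm, 0.5 m bar
    print(f"{name:8s}: {d * 1e6:.1f} um tip deflection")
```

Even a 1 K difference across a modest aluminum bar gives tens of micrometers of deflection, whereas a uniform 1 K change of the same bar produces only a simple length change of about 12 µm, with no bending at all.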
10.4 Materials stability

A high degree of stability in the physical properties of materials is often very important in optics. Such properties include, for example, physical dimensions (for optical elements and mechanical structures), modulus of elasticity (for mechanical structures and flexures), and index of refraction (for transmissive optical elements). Usually the condition that causes the greatest problems in this regard is changes in temperature, as discussed in the previous section. However, properties can also vary with time (e.g. dimensional changes, or creep, of optical and structural materials – see page 315), humidity levels (e.g. dimensional changes of some plastics), and exposure to radiation (e.g. the discoloration of optical materials in the presence of intense UV light, which is discussed on page 334). When materials are selected for optical applications, this should not be done on the basis of a single parameter, such as the thermal expansion coefficient. Other properties, such as thermal conductivity, rigidity, creep, density, machinability and cost, must also be considered in choosing a material for use in any practical optical apparatus. Regarding the stability of physical dimensions with changes in temperature, it is clearly desirable to have a low thermal-expansion coefficient. However, in the presence of external temperature gradients or non-uniform changes in temperature, having a high thermal conductivity is also important in order to minimize temperature differences across the material. (Such a situation could arise if a local source of heat, such as a diode laser attached to some part of the material, was activated.) Hence, a useful figure-of-merit in such cases is the ratio of the thermal expansion coefficient to the thermal conductivity. The distortion of objects caused by non-uniform temperature changes is proportional to this parameter (called the “relative thermal distortion” or “thermal distortion parameter” – see Ref. [1]).
Among metals, aluminum has a relatively low relative thermal distortion, and hence is useful for these applications [11]. If high dimensional stability in the presence of uniform or non-uniform temperature changes is important, two very useful structural metals are the iron–nickel and iron–nickel–cobalt alloys “Invar” and “Super-Invar” [12]. These have thermal expansion coefficients of 1.6 × 10−6 K−1 and 0.72 × 10−6 K−1, respectively, between 25 °C and 93 °C. This should be contrasted with those of austenitic stainless steel and aluminum, which are respectively 17 × 10−6 K−1 and 24 × 10−6 K−1 in this range. Invar alloys are employed, for example, in the outer skins of optical tables and breadboards used in the construction of interferometers. Even better temperature stability can be provided by glass ceramics. For example, one type that is remarkably good in this regard is “Zerodur®” [13]. This has a thermal expansion coefficient that ranges from 0 ± 0.1 × 10−6 K−1 to 0 ± 0.02 × 10−6 K−1 between 0 °C and 50 °C, depending on the grade. Fused silica glass is a commonly available material with a thermal expansion coefficient of 0.6 × 10−6 K−1 over a temperature range from 0 °C to 300 °C. Machining or polishing materials with low thermal expansion coefficients can produce stresses in them, and thereby increase these coefficients [14]. Annealing may then be required. Plastics generally have very high thermal-expansion coefficients (e.g. 150 × 10−6 K−1 for polymethyl methacrylate). Compared with metals, they also have very low thermal
conductivities. These materials are also prone to creep under load (see below), and can swell significantly following absorption of atmospheric moisture [10]. As a structural material, aluminum is preferable to plastic, because it is stable, yet (like plastic) lightweight and easy to machine. A potential long-term problem with materials used in optical systems is dimensional instability, or “creep.” This is the change in a physical dimension of a material, even in the absence of temperature changes and possibly without an applied load. Metals can be poor in this regard, in comparison with glasses or ceramics (although they are not nearly as bad as plastics). Metals often undergo dimensional changes of more than one part per million per year [1]. Time scales for significant amounts of creep to occur are typically months or years [11]. Creep can limit the long-term performance of optical components (e.g. mirrors) made of these materials. Such effects can also be a problem for mechanical support structures. Among metals, 70–30 brass has particularly good long-term stability. Aluminum alloy 6061-T6 is also relatively good in this regard, and diffraction-limited all-metal mirrors have been made from this material [1]. Invar is particularly prone to creep, and this (along with other problems, such as difficulties in machining it) sometimes limits its use as a structural material. Heat treatments can be used to reduce creep behavior in metals. For reference, glass and ceramics typically undergo changes of about one part in 10⁷ per year at 27 °C [15]. The dimensional stability of some materials used in optics is discussed in detail in Ref. [16]. When mechanical components, such as translation and rotation stages, are being made for optical systems, the mechanical details of the design usually have a much more important influence on dimensional stability than creep of the materials [11].
For instance, the migration of lubricant in the threads of a micrometer can lead to significant position changes. Movements can also occur owing to the indentation of ball bearings into their raceways (“Brinelling”) under some conditions. Precision mechanical positioning devices are discussed in Section 8.2.3. If optical components are fixed in position using glue, they are likely to move around as the latter sets and shrinks. A good way of avoiding this problem is to employ clamp mounts whenever possible. This results in a fixture that is very stable [10]. (A discussion of optical mounts is provided in Ref. [17].) If glue must be used, it should be deployed only in very thin layers. Another possibility is to use glue that has been filled with glass spheres of uniform diameter. The items to be bonded should then be preloaded, so that the space between them remains constant while the glue cures [10].
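For quick comparisons, the uniform-expansion behavior ΔL = αLΔT can be tabulated directly from the coefficients quoted in this section (a minimal sketch; the function and dictionary names are illustrative):

```python
# Uniform thermal expansion dL = alpha * L * dT, using the room-temperature
# expansion coefficients quoted in this section (1/K).

COEFFS = {
    "aluminum":        24e-6,
    "stainless steel": 17e-6,
    "Invar":           1.6e-6,
    "Super-Invar":     0.72e-6,
    "fused silica":    0.6e-6,
}

def expansion(material, length_m, delta_t):
    """Length change (m) of a bar of the given material."""
    return COEFFS[material] * length_m * delta_t

for m in COEFFS:
    # length change of a 100 mm part for a 5 K temperature excursion
    print(f"{m:15s}: {expansion(m, 0.1, 5.0) * 1e9:7.0f} nm per 5 K on 100 mm")
```

For a 100 mm part and a 5 K excursion, aluminum moves by about 12 µm while Invar moves by under a micrometer, which is many wavelengths of visible light in the first case and roughly one in the second.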
10.5 Etalon fringes

The “Fabry–Perot” or “multiple-beam” interferometer is a spectroscopic device with a very high resolving power. The basic arrangement (see Fig. 10.2) consists of two planar, parallel, and semi-reflecting glass plates, which are normally separated by a distance of a few millimeters to several centimeters [18]. When the plates are fixed in position (possibly with the ability to tilt them with respect to each other), the device is referred to as an etalon.
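As a standard point of reference, the frequency spacing of the transmission peaks of an ideal etalon (its free spectral range) is c/(2nd) for plate spacing d and gap index n. The sketch below evaluates this textbook formula for spacings in the range just mentioned (the function name is an illustrative choice):

```python
# Free spectral range of an ideal plane-parallel etalon with plate
# spacing d and gap refractive index n:  FSR = c / (2 * n * d).

C = 299_792_458.0  # speed of light (m/s)

def free_spectral_range_hz(spacing_m, n=1.0):
    """FSR in Hz for an air-gap (n = 1) etalon by default."""
    return C / (2.0 * n * spacing_m)

for d_mm in (2, 5, 20):
    fsr_ghz = free_spectral_range_hz(d_mm * 1e-3) / 1e9
    print(f"d = {d_mm:2d} mm -> FSR = {fsr_ghz:.1f} GHz")
```

A few-millimeter air gap gives an FSR of tens of gigahertz; the narrowness of the transmission peaks relative to this spacing is what gives the instrument its high resolving power, and also what makes it so sensitive to spacing fluctuations.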
Fig. 10.2 Schematic of a Fabry–Perot etalon (for a detailed description, see Ref. [18]). This device is highly susceptible to vibrations, environmental temperature changes, and mechanical creep of the components comprising the support structure. The parallel plates are wedged to prevent the formation of incidental etalon fringes within the plates themselves.
This type of interferometer is extremely sensitive to spacing fluctuations and angular misalignment of the plates [14]. As a result, it is severely affected by phenomena such as vibrations, environmental temperature changes, and mechanical creep of the components comprising the support structure. It is possible, and common, for optical systems to give rise to etalon-like interference effects unintentionally. This can happen in laser-based apparatus owing to the presence of stray light caused by unwanted reflections. The resulting “incidental etalon fringes” (or just “etalon fringes”) can cause major problems during sensitive measurements. (Normally, only systems making use of narrowband light have etalon fringe difficulties.) These terms are informal ones that refer to undesirable interference fringes arising for any reason, and not just those produced in a strictly etalon-like optical configuration. However, etalon fringes are especially troublesome when optical elements with flat surfaces (such as windows and prisms) are closely spaced and in alignment. If fringes created across an air-gap are involved, small vibrations, air currents, temperature changes, and mechanical drift can lead to erratic variations in light intensity [10]. Incidental etalon fringes can also cause frequency pulling of lasers. Stray light beams that have been ignored can be an insidious cause of unwanted etalon fringes, partly because they multiply exponentially in number as they hit successive optical components. Windows and beam-splitters can produce significant internal etalon fringe effects just due to the changes in the dimensions and indices of refraction of their materials with temperature. A very striking example of this effect has been given, involving a polarizing beam-splitter cube (made of BK-7 glass) with an antireflection coating, in the path of a He–Ne laser beam at 632.8 nm [10]. When the beam is aligned for maximum interference (i.e.
the worst case), the change in the transmittance of the beam-splitter with temperature turns out to be 12% per kelvin! Pellicle beam-splitters can also suffer from etalon problems. The sources of incidental etalon fringe problems can often be located by tapping the apparatus gently in various places with the eraser end of a pencil or a screwdriver handle. Heating components with a hair drier or a small hot-air gun is also very effective.
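The strength of such internal-fringe temperature effects can be estimated from the fringe condition 2nt = mλ, whose temperature derivative is 2t(dn/dT + nα). The sketch below uses approximate catalog values for BK-7 (assumed here, not given in the text) to estimate how fast the fringes sweep with temperature in a thick window:

```python
# Rough estimate of how fast the internal etalon fringes of a glass
# window sweep with temperature.  Fringe condition: 2*n*t = m*lambda,
# so d(2nt)/dT = 2*t*(dn/dT + n*alpha).  The BK-7 material values are
# approximate catalog figures, assumed for illustration.

WAVELENGTH = 632.8e-9   # He-Ne wavelength (m)
N_BK7 = 1.515           # refractive index at 633 nm (approx.)
DN_DT = 1.1e-6          # absolute dn/dT (1/K, approx.)
ALPHA = 7.1e-6          # thermal expansion coefficient (1/K, approx.)

def fringes_per_kelvin(thickness_m):
    """Number of fringe orders swept per kelvin of temperature change."""
    d_opl = 2.0 * thickness_m * (DN_DT + N_BK7 * ALPHA)
    return d_opl / WAVELENGTH

print(f"25 mm window: {fringes_per_kelvin(25e-3):.2f} fringes/K")
```

On these assumed values, a 25 mm thickness sweeps through roughly one full fringe per kelvin, which is consistent with the large transmittance swings described above for the beam-splitter cube.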
Some methods of preventing the formation of incidental etalon fringes are (see Ref. [10]):
(a) keeping the optical system simple, with the minimum number of surfaces,
(b) trapping stray light rays using beam dumps¹ (this should be done as close as possible to where the rays are first formed, before they have a chance to multiply),
(c) slightly misaligning components that could give rise to etalon fringe effects (e.g. by tipping a window),
(d) adding antireflection coatings to some vulnerable transmissive components, such as windows,
(e) sending light beams into components, such as windows and prisms, at Brewster’s angle, to prevent reflections,
(f) cementing transmissive components with similar refractive indices together, where possible, to reduce surface reflections,
(g) employing mirrors rather than prisms to redirect beams,
(h) using converging or diverging (i.e. non-collimated) beams, where possible, and
(i) removing non-crucial windows in opto-electronic devices such as photodiodes.
In optical systems containing lasers, perfect alignment of the components is generally not desirable. Accidental and deliberate minor misalignments, rather than being objectionable, can be very important in preventing the formation of etalon fringes. For example, the naïve use of a precisely constructed monolithic support structure for the components (perhaps created on a computer-controlled milling machine), without any provisions for making position adjustments, can lead to major problems [10]. In analog fiber-optic systems, etalon fringes can be reduced by the insertion of Faraday isolators², and the use of angled fiber ends. Faraday isolators are also useful in preventing frequency pulling of lasers, if other measures have proven unsuccessful. Smearing the fringes out is a useful strategy. Techniques for doing this include the following.
(a) Use canted windows and light sources with a low spatial coherence (e.g. incandescent and gas-discharge lamps).
(b) In the case of internal fringes in windows, take advantage of low temporal coherence of the light source, by using windows that are thick compared with the coherence length of the light. (This may result in impractical thicknesses for very narrowband light sources.)
(c) Rapidly modulate the frequency of the light, in order to make the fringes move in a way that is fast compared with the measuring times (e.g. in the case of a diode laser, this can be done by “current-tuning”).
(d) Place wedged transmissive elements in the optical path, rather than ones with parallel surfaces, in order to prevent the interfering beams from being exactly collinear.
¹ A beam dump is an optical component that serves to completely absorb an unwanted beam of light.
² A Faraday isolator is a component that allows light to pass through it in only one direction.
Windows and plate beam-splitters are available from commercial sources with wedged surfaces. In some experiments involving near normal incidence of the beam on an optical component, etalon fringe problems can be very serious. Some devices, such as cube beam-splitters, are particularly problematic. In such cases, it may be better to use non-cubical Brewster splitters or Wollaston prisms instead [19]. The multiple layers of windows that must be installed in liquid-helium cryostats to allow access for laser light (while providing thermal insulation) can produce serious etalon fringe problems [9]. The optical geometries in cryostats of this type are necessarily very restrictive, with long beam paths and small clear diameters. Hence, one must work at close to normal incidence to the windows. A strategy for reducing such difficulties is to use circular polarizers. For monochromatic light, the primary strategies for avoiding incidental etalon fringes are [10]:
(a) keep the system simple,
(b) avoid using closely spaced planar parallel surfaces,
(c) cement optical components together wherever possible,
(d) redirect beams using mirrors rather than prisms.
10.6 Contamination of optical components

10.6.1 Introduction

The presence of that ubiquitous and highly noticeable surface contaminant – dust – is frequently less harmful to the performance of optical systems than is generally assumed. The optical surfaces on which particles tend to collect are usually not in focus, and the resulting shadows are therefore generally not troublesome [10]. In fact, it is scattering of light (causing spurious photons to appear where they are not wanted, such as in the detector of a telescope), rather than absorption, which is frequently the most significant problem resulting from dust particles in optical systems [15]. Dust and other impurities will affect most optical devices eventually, depending on the nature of these substances and exposure levels. Some devices and systems that are particularly vulnerable are listed in Table 10.1. The reason for the relatively high sensitivity of optical elements near a focal plane is that a dust particle near the focal plane, when projected onto the front aperture, can occupy a large fraction of the latter’s area [15]. Lenses are frequently less sensitive to contamination, and easier to clean, than mirrors [10]. A peculiar problem with lasers (especially high-power diode lasers) is that dust particles are attracted by a form of photophoresis to the most intense parts of the beam [10,24]. This
Table 10.1 Some optical systems or devices that are especially affected by dust and other contaminants (listed as: optical system or device – undesirable effect)

High-power laser systems – damage to optics from large light fluxes [20].

Diffraction gratings – light scatter, which causes problems with spectroscopic measurements [10].

Systems in general that are sensitive to scattered light or operate at wavelengths at which scattering by dust is very pronounced (e.g. spectrometers, infrared optics, instruments used for dark-field measurements, astronomical telescopes) – surfaces that are contaminated with particulates can cause the stray light performance of an optical system to be significantly reduced [10,21,22]. (Common dust particles are, because of their size, particularly strong scattering centers for infrared radiation.)

Mirrors and windows inside a laser cavity – light losses, which prevent the laser from oscillating [14].

Systems using lasers – dust and films cause unwanted interference and diffraction effects [14].

Instruments making use of ultraviolet light – large interaction of light with very thin molecular films, fluorescence from contaminants, photopolymerization and darkening of molecular contaminants [14,22].

Optical elements near a focal plane, such as field stops, field lenses, reticles, and spatial filter pinholes – dust particles on these elements are relatively obtrusive, compared with those located at the aperture [15].

Single-mode fiber optic connectors – attenuation of light passing through the connector; possible damage to the connector even at relatively small light power levels (32 mW) [23].

Eyepieces in devices such as microscopes, loupes, optical bench telescopes, etc. – fogging under conditions of high heat and humidity; rapid accumulation of dust, skin secretions, etc.
means that optical components in the way of the beam, and especially the ones inside the laser cavity, can become contaminated rapidly, in comparison with nearby objects that are not exposed to the laser light. Some common types of contamination are [10,17,25,26]:
(a) fingerprints (can cause damage – see below),
(b) vacuum grease (greases can be very difficult to remove),
(c) skin secretions dissolved in cleaning solutions,
(d) plasticizers from gloves and plastic parts dissolved in cleaning solutions,
(e) droplets deposited as a result of talking near optics,
(f) atmospheric dust,
(g) smoke or fumes (e.g. from cigarettes or soldering operations),
(h) deposits left on a surface following the condensation of water vapor,
(i) condensed products of outgassing by adhesives,
(j) in vacuum systems: mechanical roughing pump and diffusion pump oil,
(k) airborne oil from vacuum pumps, compressed air, etc.,
(l) (in the case of lasers) impurities in lasing gases, sputtered metal from laser electrodes, and
(m) lubricants in optical assemblies that have evaporated and re-condensed on optical surfaces.
Skin secretions in fingerprints can be particularly harmful, because their acids attack optical coatings and glass. These substances may also create permanent stains on optical components [11]. Impurities in solvents used to clean optical components are an insidious cause of contamination. This often results from the dipping of fingers into solvent containers, and other forms of carelessness. The immersion of gloves into cleaning solvent is very likely to result in the coating of optical components with plasticizer residue, a common example of which is dioctyl phthalate [10]. Solvents can also become contaminated after being placed in inappropriate containers, such as polyethylene squeeze bottles [27], PVC vessels, or ones with PVC cap liners. Also, new solvents, as created by the manufacturer, may contain impurities. The same is true of soap and detergents. These issues are discussed in Section 10.6.5.3. Particular care should also be taken during the use of silicones (in greases, oils, and other substances) near optics. They should be avoided, if possible. Silicones have a tendency to migrate, and can be very difficult to remove from surfaces. Silicone oil adheres particularly strongly to glass. If they get onto optics that must be subsequently coated (e.g. aluminized mirrors), silicones may make it extremely difficult to get adequate coating adhesion (see the discussion on page 97). Perfluoropolyether oils and greases pose similar problems. Contamination by these substances can be insidious. One actual example involved the passing of nitrogen (used to purge an optical substrate exposed to the beam from a high-power laser) through valves that had been lubricated with silicone grease.
The gas was contaminated by vapor from the valves, and subsequently contaminated the optic. As a result, the optic was damaged by the beam [28]. Lubricants that are used near optical elements have the potential to evaporate and redeposit on the optics [10]. This can be a problem when positioning devices are being used, and particularly in closed environments such as optical assemblies. For this reason, oils or greases must not be used in spectrometers [29]. Only dry lubricants (see pages 231–233) are acceptable in these instruments. If oil- or grease-based lubricants cannot be avoided in optical applications, a type that has a low vapor pressure should be selected. Oils and greases based on multiply alkylated cyclopentanes (MACs) have extremely low vapor pressures, and may be employed for this purpose (see the discussion on pages 229–231). When sensitive optics are being used in humid environments, particular attention must be given to avoiding conditions that can lead to condensation. Water that is condensed out of the air is generally not pure (as might be expected), but is usually laden with contaminants (such as sulfur compounds and auto emissions) that will form deposits when the water evaporates. Another common and troublesome contaminant in many humid areas is salt (see the discussion on page 67). Condensation is a serious problem in the case of high power laser systems [26]. One possible scenario for unexpected condensation is the loss of
air conditioning due to a power failure. Condensation issues are discussed in more detail in Section 3.4.2. Tobacco smoke is a particularly troublesome contaminant of optical surfaces, especially those in laser systems. Smoke deposits are not easy to remove. Painted items should generally not be used in the mechanical hardware employed in optical systems. The paint will flake off sooner or later and get onto the optics or into positioning devices [11]. Instead, other surface treatments can be used. Aluminum may be provided with anodizing, which is hard and durable, and can be colored black to reduce reflections. Steel and brass can be given electroplated coatings of, e.g., chrome, nickel, or rhodium. Bead-blasting of surfaces prior to coating will reduce specular reflections. Brass is often treated with a metal blackening liquid (“brass black”), without electroplating.
10.6.2 A closer look at some contamination-sensitive systems and devices

10.6.2.1 High-power light systems

Optical systems operating at very high photon fluxes can easily be damaged by contaminants for a variety of reasons. Such contaminants can act as centers of high light absorption (and heating), and enhanced electric fields [20]. Even traces of oil from the fingertips or small amounts of dust can greatly increase the susceptibility of optical surfaces to damage. It is also possible for damage to occur as a result of Fresnel diffraction of laser radiation from particles on the front surface of a lens onto its rear surface (effectively a kind of focusing of light by dust) [30]. The form of the damage can range from small defects in surface coatings, to fracture or burning-through of the entire optical component (see, e.g., Ref. [25]). The damage of components exposed to high light fluxes from lasers – owing to contamination, imperfections in optical materials, surface damage, and other causes – is called “laser-induced breakdown”. It is a very serious issue, and is the subject of an extensive literature (see, e.g., Ref. [20]). Local heating of an optical component resulting from the presence of contamination can also cause temperature gradients that will cause the component to distort, and (in transmissive components) may also lead to changes in their refractive indices. This leads to distortions in the power distribution within the laser beam – an effect called “thermal lensing” [25]. For all the above reasons, meticulous care must be given to preventing the contamination of optical surfaces in high-power laser systems. Further information is also provided in Ref. [26]. Damage due to the presence of surface contamination can also occur in apparatus containing tungsten halogen lamps or arc lamps. This takes the form of devitrification of the quartz envelopes of these devices if they are operated (and subsequently become hot) after being touched by bare fingers.
The devitrified quartz is relatively weak, and tends to leak. Any fingerprints should be removed from the lamps with a solvent while they are still cold [31].
10.6.2.2 Intra-cavity laser optics

Minuscule amounts of light absorption within the cavity of a laser can prevent the device from oscillating, or at least severely reduce its output power. Small traces of contamination will do this, and in fact just a single dust particle in the wrong location can disable a laser [24]. Even if the optics are clean enough to allow lasing, isolated dust specks on them may cause undesirable mode structures to appear. Also, dust particles drifting across the beam inside the cavity can cause intensity noise [32]. For these reasons, intracavity optical elements, such as windows, mirrors, etc., must be kept extremely clean [14]. Furthermore, the inside of the laser head in general should be maintained in a clean condition. In the case of some small gas lasers (e.g. low-power helium–neon types), and in most diode lasers, the optics within the laser cavity are hermetically sealed against the entry of contaminants. In small helium–neon lasers, for example, the cavity mirrors are inside the gas discharge tube. However, in most other lasers, the cavity optics are external to the part of the laser that provides the gain. They are therefore accessible and susceptible to contamination. In commercial lasers, sealing devices, such as bellows dust seals, are used to prevent the entry of contaminants into the laser cavity. Nevertheless, it is important to take precautions to keep the laser enclosure, and the general environment in which the laser must operate, as clean as possible.
10.6.2.3 Diffraction gratings

Because of the very harmful effects of scattered light on spectroscopic measurements, diffraction gratings must be kept clean. This is best done by preventing them from becoming contaminated in the first place. Diffraction gratings (in particular, replica gratings) are difficult to clean without causing damage, and heavily contaminated ones may have to be replaced [10]. Fingerprints are generally a significant source of concern. If skin secretions from fingers (or saliva) get deposited on a grating, they should be removed as soon as possible. The acids or bases in these substances will start to corrode the metallic surface of the grating (if it is made of a reactive metal such as aluminum or silver), and this corrosion cannot be removed by subsequent cleaning [5]. One troublesome characteristic of oils is that they tend to wick along the grooves of gratings, which causes a loss of efficiency [10]. If the oil is a hydrocarbon type, it may be relatively easy to dissolve with a solvent such as acetone. Silicone and perfluoropolyether (PFPE) oils, on the other hand, can be very much harder to remove. Pump oil in vacuum systems can also be very harmful to gratings [5]. This is because gratings in such systems are normally being used for the spectroscopy of ultraviolet light. This light tends to photolyze films of oil that have been deposited on the grating, which causes darkening and a loss of grating efficiency. It also makes the oil residue much more difficult to remove (see below). Some oil-free vacuum pumps are discussed in Section 7.2.
10.6.2.4 Ultraviolet optics

Systems making use of UV radiation present a number of contamination-related problems. The short wavelength of UV light means that the effects of contaminating layers are much more pronounced than they would be in the case of visible light. In ultraviolet optical systems for which low levels of light scattering are important, the optics must be kept extremely clean if they are to attain high levels of performance [15]. Also, many substances fluoresce when exposed to ultraviolet radiation, and this can be a source of unwanted light in some situations [14]. (Indeed, the phenomenon of UV fluorescence is often used to indicate the presence of contamination on objects.) Another difficulty arises because of an effect known as “photopolymerization” (or “photolysis”), whereby organic contaminants (films of oil, etc.) undergo chemical changes that cause them to permanently fix themselves to surfaces, and to darken in color. This change in transmissibility, from transparent to opaque, is very striking (see Ref. [33]), and can cause a serious degradation in the performance of UV instruments [22]. It has been found that adhesive materials that have been used to secure components in UV laser systems can find their way onto nearby optical surfaces [33]. This may occur because of outgassing of the adhesive, and redeposition of the vaporized substances on other parts of the apparatus, or by some other means. These substances are then photolyzed by the UV radiation, and become opaque. Mechanical mounts, rather than adhesives, should be employed to hold optical elements in such apparatus.
10.6.2.5 Single-mode fiber optic connectors

The very small diameter of the core of a single-mode optical fiber (8–9 µm) makes the devices used to connect these fibers very susceptible to dust and other contamination. For example, a large dust particle on the core of a connector can completely block the transmission of light. Another problem with such connectors is that the power densities present in the core, even with relatively small light power levels, can be extremely large, and enough to cause organic substances to combust [23]. Such combustion can be very rapid, and violent enough to cause damage to the optical surface of the connector. For these reasons, single-mode connectors must always be cleaned of all contaminants immediately prior to mating. The cleaning process must be done with care (see pages 328 and 330). It is also advisable to attach a protection boot (or dust cover) to such connectors when they are not in use, in order to shield them from dust and protect them from mechanical damage during handling. Unfortunately, substances present in the plastics used in such boots (possibly mold-release agents) will tend to migrate onto the fiber, so that the use of a boot does not eliminate the necessity for cleaning the fiber prior to connection.
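To see why even modest powers are dangerous here, it is worth estimating the average intensity over the core. The sketch below is illustrative only: it assumes a uniform (top-hat) beam profile, whereas a real single-mode field is roughly Gaussian and so has a somewhat higher peak intensity. The function name and the specific numbers are hypothetical, apart from the 9 µm core diameter and the 32 mW damage figure, which are taken from the surrounding text.

```python
import math

def core_intensity_w_per_cm2(power_w: float, core_diameter_um: float) -> float:
    """Average intensity over the fiber core, in W/cm^2 (top-hat approximation)."""
    radius_cm = (core_diameter_um * 1e-4) / 2.0  # 1 um = 1e-4 cm
    area_cm2 = math.pi * radius_cm ** 2
    return power_w / area_cm2

# 32 mW (the damage threshold quoted later in this chapter) in a 9 um core:
intensity = core_intensity_w_per_cm2(0.032, 9.0)
print(f"{intensity:.0f} W/cm^2")  # roughly 5e4 W/cm^2
```

Tens of kilowatts per square centimeter is ample to ignite an organic film or a residue of cleaning fluid, which is why these connectors must never be cleaned, or mated while dirty, with light in the fiber.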
10.6.3 Measures for protecting optics

The biggest single threat to the cleanliness of optical components is generally the human finger. Even if bare fingers are used to hold optical components only at their edges,
troublesome contamination is still a problem. Skin secretions and other contaminants will be left on the edges and also on working optical surfaces. Hence, the use of gloves3 or finger cots is necessary when sensitive or delicate items are being handled. Such items should always be held at their edges. Powder-free latex or polyethylene gloves are preferred. Vinyl (PVC) types are best avoided, as they can release plasticizers. Cleanroom gloves can be a good choice, because they are processed in such a way as to minimize the presence of mold-release agents, which can cause contamination. Where the highest levels of precaution are needed, gloves should be wiped of any skin secretions using spectroscopic-grade methanol after they have been donned [26]. Gloves should be changed frequently – perhaps every 30 minutes or so. One should not talk around, or breathe upon, critical optical surfaces. Facemasks (especially cleanroom types) are useful in this regard. With regard to contamination from the environment (dust, etc.), optical systems generally acquire most of this when they are not being worked on. Therefore, the provision of dust covers for optical apparatus is an inexpensive and effective measure. Boxes made of some transparent plastic (e.g. PMMA – Plexiglas or Perspex), with a hinged side that can be swiveled out of the way when the apparatus is being worked on, are useful [10]. Such enclosures are easy to make, and can be obtained commercially. Commercial optical table enclosures may have provisions for the installation of a clean air blower, which pressurizes the enclosure with filtered air from the room. Such blowers are intended for use in-between precision measurements only, so as to avoid causing air turbulence problems.
In situations in which the use of covers or enclosures is inadequate or impractical (perhaps because continuous access to an optical system is needed), consideration should be given to the use of a “clean hood” or “laminar flow station.” These are devices that continuously blow highly filtered air over a local area within a room. The presence of static electric charges on optical surfaces can lead to the attraction and accumulation of large amounts of dust. Electrostatic forces can also make it difficult to subsequently remove dust particles. A solution to this problem is to install an ionization device (called a “static eliminator”) near the optical components. These release positive and negative ions into the air, which act to neutralize surface charges. Such devices can be very effective. In one case, it was found that an ionizer was able to reduce the accumulation rate for 1 µm or larger particles on exposed silicon surfaces by a factor of 20 [34]. The device had the additional benefit of reducing the dust concentration in the air by 50%. Ionization devices can be incorporated into the clean air blowers used to keep dust out of commercial optical bench enclosures. In cases where optical elements must be allowed to move with respect to one another, flexible sealed enclosures (such as bellows) can provide useful local protection. These are often employed in commercial lasers to enclose the space between their Brewster windows and mirrors, and thereby protect these critical elements from contamination [14].
3 Gloves can be made considerably more comfortable by wearing fabric glove liners underneath them.
Fig. 10.3 Purging arrangement to protect a lens that is directly exposed to a highly contaminating environment. Clean gas flows across the surface of the lens and along the purge sleeve, preventing particles from approaching the lens, and sweeping off larger ones that may have deposited on it. The gas speeds up in the conical section, thereby reducing backflow into the chamber. (See the discussion in Ref. [37].) [Figure: the labeled parts are the purge inlet, purge sleeve, conical section, and lens housing.]
Pellicles (thin transparent films of plastic that are often used as beam-splitters) can be used to prevent dust from depositing on diffraction gratings [10]. However, pellicles themselves are very delicate, and must be cleaned using a suitable technique. Also, these films tend to be sensitive to air currents and vibrations, which cause them to deform. If an optical element is used only part of the time, another approach is to provide it with a mechanical shutter. Such devices are frequently installed next to the viewports of vacuum thin-film deposition systems, in order to reduce the rate at which these become opaque due to buildups of the materials used to make the films. In cases where the above measures are inadequate or impractical (and especially if environmental conditions are severe), consideration might be given to the use of purge gases. This approach involves placing the optics within a housing, which is sealed as well as possible, and introducing a flow of clean gas into the housing, so that the outflow from any openings prevents the entry of contaminants. Filtered dry nitrogen or helium are frequently used for flushing and pressurization [17,26]. The optics of high-power laser systems can be effectively protected from water vapor and dust by this method. The technique is also used in optical instruments on spacecraft [35,36]. The method can also be used when the optics must be exposed directly to a contaminating environment. For example, it has been used to protect optical pyrometers in jet engines from the very harsh contaminants found inside these machines. A number of purge designs for this application are described in Ref. [37]. One approach, called “air scrubbing,” involves directing clean air across the surface of a lens, so that particles that have already entered the purge chamber are prevented from landing on it (see Fig. 10.3).
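The speed-up in the conical section of such a purge arrangement follows from mass continuity: at a fixed volume flow, the mean gas speed scales inversely with the cross-sectional area, so narrowing the exit raises the outflow velocity that opposes backflow. A minimal sketch (the flow rate, diameters, and function name are hypothetical illustrations, not values from the cited reference):

```python
import math

def exit_speed_m_per_s(flow_lpm: float, exit_diameter_mm: float) -> float:
    """Mean gas speed at the purge exit: volume flow divided by area."""
    flow_m3s = flow_lpm / 1000.0 / 60.0                      # litres/min -> m^3/s
    area_m2 = math.pi * (exit_diameter_mm * 1e-3 / 2.0) ** 2  # circular exit
    return flow_m3s / area_m2

# Halving the exit diameter quadruples the mean speed (area scales as d^2):
v_wide = exit_speed_m_per_s(20.0, 10.0)
v_narrow = exit_speed_m_per_s(20.0, 5.0)
print(round(v_narrow / v_wide))  # -> 4
```

This is why the conical taper in Fig. 10.3 is effective: the same clean-gas supply produces a much faster outward stream at the narrow end.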
Techniques have been developed that make use of corona discharges in air to protect optical components from contamination by dust and hydrocarbon mists. Electrodes near the component (but outside the optical path) are placed at a high potential. This
creates a corona that causes incoming dust particles to acquire an electrostatic charge. The dust is then repelled from the vicinity by electric forces. Such systems do not involve exposed high-voltage components, have no moving parts, and require very little power (see Refs. [38,39,40,41]). The creation of ozone (a strongly oxidizing atmospheric contaminant) is a potential problem with this approach. Certain optical components that are subject to fogging, and in particular eyepieces used in microscopes and other instruments, can benefit from the application of antifog coatings. These substances are widely sold – as liquids intended to be wiped onto car windshields, or impregnated into special lens cloths for use on spectacles and ski goggles, etc. Such substances may cause unacceptable interference with the operation of any antireflection coatings on the lenses, however.
10.6.4 Inspection

Detecting contamination on optical components may or may not be straightforward. In the case of smooth surfaces of lenses and mirrors, a simple procedure (see Ref. [10]) is to face the component, with a bright overhead light behind one’s back. The component is then rocked back and forth in the hands, while the reflection of the light from the front surface (and, in the case of a transmissive component, the rear surface) is observed. As this is done, the reflection should remain of uniform intensity and color, with no shiny spots (which may be a fingerprint or oil film). Scattering (e.g. from dust particles) can be detected by orienting the optic so that a dimly lit part of the room is reflected. Large particles will reveal their presence as bright specks of light, while small particles, condensed vapors, or surface damage (see page 327) produce broad diffuse scattering. If the contamination criteria are more exacting, a very bright light source may be required for the inspection. For example, in the case of laser optics it may be necessary to use a microscope illuminator, a slide projector, or even (with appropriate attention to eye safety) a laser for this purpose. Visual appearance is not necessarily an ironclad guide as to whether the performance of an optical component will be compromised. For example, in the case of high-power laser optics, even minute quantities of contamination, which may not be visible under normal conditions, can lead to damage. On the other hand, the surface blemishes that are frequently observed on the surfaces of (even clean) diffraction gratings may have no effect on their performance [5]. This is because these blemishes may represent trivial changes in the local efficiency of the grating, which do not affect its spectroscopic performance (since this is an integrated effect over the entire grating).
However, the changes produce alterations in the color appearance of the affected areas, in a way that is particularly discernible to the eye. For very high-power laser optics, where even the smallest amounts of contamination could lead to damage, special examination techniques have been devised. In particular, the presence of organic contamination can be detected by near-infrared spectroscopy, using a spectrophotometer [42].
10.6.5 Cleaning of optical components

10.6.5.1 Introduction

The cleaning of optical components is something that requires a certain amount of thought. Cleaning usually results in at least some surface damage. Frequently, this damage can be more harmful to the performance of a device than the contaminants that the cleaning removed. For example, the scratches created when an optical surface with dust on it is wiped can generate a significant amount of scattered light. It turns out that it is the length of the perimeter of the defect, rather than its surface area, which tends to determine how much scattering takes place. Hence, a long, thin scratch may be worse than a dust particle [10]. In high-power laser optics, the presence of scratches can (like contaminants) lead to further damage in the presence of high photon fluxes [20]. Furthermore, some optical components are particularly prone to damage. Replica diffraction gratings are perhaps the best example of this. The ruled surfaces on such gratings must never be touched by the hands, and special techniques that do not involve the application of direct pressure to the surfaces must be used to clean them. The optical components used in the resonant cavities of lasers (i.e. mirrors, Brewster windows, etalons, Littrow prisms, etc.) are further examples of components that should be cleaned with the greatest care [24]. Casual cleaning that could easily be endured by a durable optical device in other apparatus (such as an eyepiece in a microscope) will quickly render such components useless. This is especially relevant for the coated dielectric mirrors that form the boundaries of the laser cavity. (In fact, ultralow loss mirrors used for any application are also very vulnerable to damage during cleaning.) Again, special cleaning techniques intended for use with high-performance optics are needed with such devices [11].
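The perimeter rule of thumb can be made concrete with a rough comparison. The defect dimensions below are hypothetical illustrations (not from the cited reference), chosen so that the scratch and the dust particle have broadly comparable areas:

```python
import math

def scratch_perimeter_um(length_um: float, width_um: float) -> float:
    """Perimeter of a rectangular scratch, in micrometres."""
    return 2.0 * (length_um + width_um)

def particle_perimeter_um(diameter_um: float) -> float:
    """Circumference of a roughly circular dust particle."""
    return math.pi * diameter_um

# A 1 mm long, 1 um wide scratch vs. a 10 um dust particle:
scratch = scratch_perimeter_um(1000.0, 1.0)  # ~2000 um of scattering edge
dust = particle_perimeter_um(10.0)           # ~31 um
# The scratch has an area of ~1000 um^2, not far from the particle's ~79 um^2,
# yet it presents roughly 60 times the scattering perimeter.
print(round(scratch / dust))
```

On this simple measure, wiping a dusty surface and leaving one long fine scratch can easily be worse, optically, than leaving the dust in place.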
Other optical components whose surfaces are unusually susceptible to mechanical damage during cleaning are: (a) first surface metal mirrors (e.g. aluminum- or gold-coated) that have not been provided with a protective coating [10], (b) pellicles [11], (c) specially coated ultraviolet, and some infrared, optical elements [14], and (d) solid copper mirrors [14]. The materials comprising some optical components that are intended to be used at infrared and ultraviolet wavelengths scratch easily. Such components may be more robust than the ones given in the above list, but are not as durable as the hard optics commonly employed in the visible part of the spectrum. A certain amount of care is required when dealing with these. Some materials in this category are germanium, silicon, gallium arsenide, arsenic trisulphide, tellurium, and thallium bromoiodide (also known as KRS-5) [14]. Crystalline materials used as transmissive optics in the infrared can be damaged by water. These include sodium chloride, potassium chloride, cesium bromide, and cesium iodide. The water may come from condensation during cleaning. For example, if a solvent-soaked
tissue is wiped across such an optic, the cooling that takes place following evaporation of the solvent can cause this [14]. For the above reasons, it is a general rule of thumb that optical components should not be cleaned unless it is really necessary. If there is any possibility that an optical element is a delicate type, no action should be taken until the proper cleaning procedure can be established. In some cases (e.g. water-soluble infrared optics) the cleaning may have to be done by a specialist. However, if a mishap should result in the unacceptable contamination of an optic, it is usually a good idea to avoid unnecessary delay in removing it. Some contaminants (such as fingerprints, mold, and salt deposits resulting from condensation) can attack optical surfaces, and contamination generally becomes harder to remove the longer it is left on a surface. Fingerprints, water spots, and oil should be cleaned off right away [11]. Single-mode fiber-optic connectors can be damaged if cleaning is carried out while the fiber is carrying light, even at power levels as low as 32 mW. The very high optical power densities at the surface of the fiber cause the cleaning fluid (e.g. alcohol) to undergo explosive ignition, causing pitting or cracking of the surface. Single-mode fiber-optic connectors must never be cleaned while attached to a light-carrying fiber [23]. Besides avoiding damage, the other major issue during the cleaning of optics is doing this in such a way that one does not deposit more contamination than one removes. Making matters worse in this way is surprisingly easy to do. Contaminants may come from impurities dissolved in a solvent, as discussed above. They can also be introduced if:
- parts of a cleaning tissue that must come into contact with an optical surface are contaminated by being touched with the fingers (very common), or
- any pressurized gas employed for dust removal (e.g. machine-shop-grade compressed air) contains oil, water, or dirt.
In order to avoid inadvertent contamination, it is also very important to use an appropriate cleaning technique.
10.6.5.2 Some general cleaning procedures

It is usually safe to clean most optical surfaces of dust by blowing them with compressed gas [14]. (However, this may be a dangerous procedure to use if delicate optics are involved, and gritty dust is present on the surfaces [6].) Filtered dry nitrogen from a gas cylinder is about the best for this purpose. Machine-shop-grade compressed air should not be used [10] (see Section 3.8.2 for more information). Another method of blowing dust from surfaces is to use an “air duster” or “Dust-Off®” spray. These consist of spray-cans containing a volatile liquid (such as difluoroethane), which vaporizes and emerges as a gas from a small nozzle when released. A possible disadvantage of using an air duster is that condensation may be produced by the gas, which could be a problem when cleaning certain items, such as hygroscopic optical elements [14]. Furthermore, if such devices are used to clean diffraction gratings, very cold liquid may be emitted with the gas that can cause the surface of the grating to craze [10]. Air dusters should not be used to clean delicate optics.
Not all dust particles can be removed from a surface by blowing it with compressed gas. This is because the smaller ones will be within the hydrodynamic boundary layer of the moving gas, in which the flow velocity falls rapidly to zero as the surface is approached [43]. (Or, to put it another way, only large particles protrude through the boundary layer into the stream.) The removal of dust particles with diameters of less than about 20 µm is impractical with this method, and particles much larger than this may also not be dislodged [44]. High-velocity compressed gas can be effective at removing very large particles, but this risks causing damage to the surface. CO2 snow cleaning and plastic film peeling (see page 332) are more effective, and safer, than using high-velocity compressed gas for this purpose. As discussed on page 324, dust particles are often held tightly against optical surfaces by electrostatic forces. Hence, the use of an “ion-air gun”, which generates positive and negative ions in the gas stream that neutralize static charges on surfaces, can be beneficial for blowing off dust. The type of gun employed for this purpose is important. Those that create ions using a corona discharge work well in reducing surface charges. However, those that use a radioactive source (e.g. polonium) to generate the ions can actually increase the charge on a surface, and thereby make the dust collection problem worse [45]. Ionizer attachments are also available for air dusters. Even if some other dust-removal technique is employed, the prior use of an ion air gun to eliminate static charges from optical surfaces may be beneficial. Most of the optical surfaces found on the exposed parts of common laboratory optical devices, such as microscopes and cameras, will be relatively robust. These can be cleaned using straightforward methods, such as wiping with soft solvent-soaked lens tissues. 
Nevertheless, the production of stray light in these devices will gradually increase over time if such methods are used, because of the slow accumulation of scratches [10]. The use of paper towels, common tissue papers, and the like should be avoided if possible, especially when cleaning delicate optics. These materials can produce scratches, and can leave dust behind on the surfaces. Suitable non-abrasive cleaning tissues are available from optical equipment suppliers. Before carrying out any cleaning procedure on robust optics involving vigorous wiping, it is desirable first to remove large dust particles from the surfaces that might otherwise grind against them. For this purpose, blowing using compressed gas is normally the best method. Alternatively, this can be done by dusting the surface with a soft camel- or goat’s-hair brush. A potential problem with using a brush is that it may transfer contaminants from other surfaces. In the case of more delicate optics, after removing the larger particles by blowing, the remaining dust can be eliminated by wiping the surface very gently, using a figure-of-eight motion, with a piece of lens tissue soaked in alcohol [11]. This should be held in such a way that only the tissue’s own internal elastic forces hold it against the surface (i.e. direct pressure from the fingers or a hemostat is not applied). The procedure should then be repeated using tissue soaked in acetone. Brushing or wiping techniques should not be used if potentially abrasive particles (e.g. eyelashes) are seen on the optic. In order to achieve the best results, attention to the details of a cleaning technique is important. General-purpose cleaning procedures are described in Refs. [10],
[11], and [14]. Methods for cleaning specific devices are described in the following references:
(a) high-power laser system optics: [25] and [26],
(b) diffraction gratings: [5] and [10],
(c) laser internal optics: [24],
(d) infrared windows: [14] (but check with the manufacturer),
(e) fiber-optic connectors: [23], and
(f) pellicles: [11].
A variety of methods for cleaning astronomical telescope optics, ranging in sophistication from detergent-solution washing to ultraviolet laser cleaning, are discussed in Ref. [6]. Some delicate optical components, such as mirrors with unprotected soft metal coatings and replica diffraction gratings, are not suitable for cleaning by methods involving direct pressure. However, these can be cleaned without harm by techniques involving immersion in cleaning solutions. (In practice, as in the case of gratings, cleaning delicate items using this method may involve considerable effort [5]). Cemented optics (e.g. lenses that are glued together) must not be cleaned by immersion [11]. Because dust particles become more firmly anchored to surfaces with the passage of time, their removal by immersion in cleaning solutions is more effective if it is done frequently [6]. Special techniques are available for cleaning the most delicate optics. These methods, which include CO2 snow cleaning, plastic film peeling, and plasma ashing, are discussed in Sections 10.6.5.5–10.6.5.8.
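As a compact reminder of the guidance above, the component classes and methods can be laid out as a lookup table. This is an illustrative summary only: the groupings restate the text, the dictionary and function names are hypothetical, and the cited references (and manufacturers) remain the authority for any real procedure.

```python
# Illustrative summary; confirm the procedure with the manufacturer or the
# references cited in the text before cleaning anything delicate.
CLEANING_METHODS = {
    # Robust exposed optics (e.g. microscope and camera surfaces):
    "robust hard optic": ["blow with filtered dry nitrogen",
                          "wipe with solvent-soaked lens tissue"],
    # No direct pressure may be applied to the surface:
    "replica diffraction grating": ["immersion in cleaning solution",
                                    "CO2 snow", "plastic film peeling"],
    "unprotected metal mirror": ["CO2 snow", "plastic film peeling"],
    # Must not be immersed (the cement can be attacked):
    "cemented optic": ["blow with filtered dry nitrogen",
                       "gentle non-immersion methods"],
}

def suggested_methods(component: str) -> list[str]:
    """Look up the methods sketched above; default to expert advice."""
    return CLEANING_METHODS.get(component, ["consult a specialist"])
```

The default branch reflects the rule of thumb stated earlier: when in doubt about how delicate an optic is, take no action until the proper procedure is established.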
10.6.5.3 Some cleaning agents to be avoided in the cleaning of optics

Certain liquids that would seem to be very suitable for the removal of contaminants from optical surfaces can actually themselves result in contamination. For example, the isopropyl alcohol-based fluid that is often sold in pharmacies as “rubbing alcohol” often contains impurities, such as perfume oils, which will be left on an optic when the alcohol evaporates. Only solvents of the highest purity, such as spectroscopic-grade isopropyl alcohol or acetone, should be used [11]. When absolute cleanliness of an optical surface is essential (e.g. in high-power laser systems), it is recommended that solvents such as isopropyl alcohol and acetone be avoided [26]. Spectroscopic-grade methyl alcohol is better suited for this purpose. The use of vapor degreasing is an excellent way of avoiding problems caused by solvent contamination (see Section 10.6.5.5). Optical-grade plastics can be attacked by solvents. Therefore, it is essential to carefully select, or entirely avoid, solvents when the cleaning of plastic optical components is being done. Acetone is often particularly harmful, but even alcohols may cause problems in some cases. The plastic layer underneath the metal coating in replica diffraction gratings may be softened by certain solvents. The manufacturer should be consulted about what solvents are acceptable [10]. Diffraction grating coatings can be harmed by solvents that contain chlorine.
If liquid soap in water is to be used to lift contaminants from surfaces,4 ordinary tap water should be avoided. (Tap water may contain dissolved minerals, sand, and other foreign matter. It also has chlorine, which can lead to corrosion of some surfaces.) Distilled or deionized water should be used instead. For diffraction gratings, only the highest-purity deionized water (preferably with a resistivity of 18 MΩ·cm) is acceptable [10]. Only neutral mild soaps are suitable – alkali, perfumed, or colored soaps should be avoided [11]. In the case of diffraction gratings, an industrial surfactant such as Triton™ is preferable to an ordinary soap or detergent [10]. Some soaps contain abrasives, such as sand [15].
10.6.5.4 Ultrasonic cleaning

Ultrasonic cleaning can be a useful method of removing heavy contamination, such as fingerprints. However, it is generally only suitable for use on robust optical components, made out of hard substrate materials with hard coatings. The main problem is that the cavitation that takes place in the cleaning liquid during the application of the ultrasound can erode delicate surfaces. Suitable substrate materials are borosilicate glass, sapphire, quartz, or fused silica [14]. Some hard coating materials are magnesium fluoride (MgF2), quartz (SiO2), silicon monoxide (SiO), silicon nitride, and sapphire (Al2O3). Ultrasonic cleaning damage is discussed in greater detail in Section 3.12.
10.6.5.5 Vapor degreasing

A particularly efficient method of removing contaminants with solvents, which does not present the possibility of leaving solvent-borne residues on the surfaces, is the vapor degreasing process. In essence, this involves suspending the optical component above a boiling bath of solvent. Vaporized solvent rises from the bath and condenses on the component. The resulting liquid solvent then drips back into the bath, carrying oil, grease, and fingerprint residues with it. Since the solvent that condenses from the vapor is essentially distilled, it is very pure. The technique is well suited for use on optics that must be exceptionally clean, such as the Brewster windows used in lasers [14]. As a non-contact method, it can be used on the most delicate components. Vapor degreasing fell out of favor in the world at large for many years because the solvents traditionally employed in the process (such as trichloroethylene) were found to damage the Earth’s ozone layer. However, with the development of an environmentally friendly solvent based on a hydrofluorocarbon (known as “Vertrel®” [46]), the technique is coming back into use. Although specialized equipment can be obtained commercially for vapor degreasing, the method need involve nothing more than a beaker, a hotplate, and a fixture to hold the optics. Further discussions of this method can be found in Refs. [47] and [48].
4 Soap (or some other wetting agent) in water, rather than solvents, is preferred for removing heavy contaminants such as fingerprints, grease, water spots, and oil from surfaces. (Solvents tend to just redistribute these contaminants.)
10.6.5.6 Plastic film peeling technique

One very effective, but relatively slow, method of cleaning optical surfaces involves pouring, swabbing, or spraying a special liquid (polyvinyl alcohol in an organic solvent) onto the optical surface [49]. The solvent is allowed to evaporate, and the plastic film that remains is then peeled off the optic. Dust and some organic residues, such as fingerprints, are lifted from the surface along with the film. This process is a very gentle one in most cases. It can be used with coated optics, unprotected metal surface mirrors, and diffraction gratings. There is a possibility that the method will damage certain plastic optical surfaces, if these are susceptible to dissolution in polar organic solvents. Plastic film peeling has been shown to be capable of removing almost all dust from aluminum-coated mirror surfaces [6]. It is also effective at lifting most fingerprints and other contaminants, as long as these are relatively fresh [50]. However, the method is not suited for use on very large optical components, such as astronomical telescope mirrors [44]. The use of the technique is described in detail in Ref. [10].
10.6.5.7 CO2 snow cleaning

The carbon dioxide snow method involves forming a jet of CO2 snowflakes suspended in the gas, by allowing pure CO2 liquid to expand through an orifice. When these snowflakes are directed against a dust-contaminated surface, they impart momentum to the dust particles, and thereby dislodge them. The jet of CO2 gas then carries the particles away. This process is effective even with particles having submicron dimensions. The technique also removes hydrocarbon impurities, such as oil, by a “transient solvent” or “freeze fracture” mechanism. Importantly, fingerprints are also effectively removed [5]. Any snowflakes that remain on the surface quickly turn to gas, and so do not themselves act as contaminants. (In order to ensure this, the supply of liquid CO2 must be highly pure – substantially free of particles, water and other substances.) The CO2 snow cleaning method is a very gentle one. It has been used successfully on delicate surfaces such as diffraction gratings [5] and unprotected metal mirror coatings [6]. However, the method does not seem to be nearly as effective at removing particles as the plastic film peeling technique [6]. On the other hand, snow cleaning can be used on optical surfaces that are not in a horizontal orientation, and (unlike film peeling) is suitable for optical components of virtually any size. It is used routinely to clean large astronomical telescope mirrors [44].
10.6.5.8 Cleaning by using reactive gases

In some situations, hydrocarbon contaminants on delicate optical surfaces are photolyzed or carbonized, and are then essentially impossible to remove by normal cleaning methods. This happens, for example, to diffraction gratings that have been contaminated with an oil film and exposed to ultraviolet light (as discussed on page 322). Methods have been developed to remove such contaminants by using reactive gases, such as monatomic oxygen or ozone (O3). Perhaps the simplest way of doing this is to suspend a source of short-wavelength ultraviolet light (e.g. a mercury lamp) a few centimeters above the optical surface. The
ozone that is produced by the interaction between the UV light and the oxygen in the air oxidizes carbon contamination on the optic. This method has been used to clean gold-coated replica gratings, with no significant degradation of the grating’s surface figure and finish, and a full restoration of its reflectivity [51]. The UV/ozone cleaning technique has been shown to be effective at removing thin layers of a large range of contaminants from quartz surfaces, including human skin oils, mechanical pump oil, cutting oil, beeswax, and rosin soldering flux [52]. This method is probably not a satisfactory one for removing silicones. The technique is discussed in detail in a general context in Ref. [52]. Another form of cleaning using reactive gases is referred to as “plasma ashing” or “plasma cleaning.” This can be done by placing the optic in a special machine (a “plasma asher”) that creates an electric discharge in a gas such as oxygen at low pressure. The discharge produces monatomic oxygen, which (like ozone) is a strong oxidizing agent. This process has also been used successfully on carbon-contaminated diffraction gratings [5]. The use of reactive gases to clean optics might lead to problems if the optical components contain organic materials, or others that are readily oxidized. For example, in the case of replica diffraction gratings, if there is some damage or degradation of the metal coating, the underlying epoxy support will probably be eventually damaged by repeated cleaning [51]. Aluminum rapidly develops thick oxide surface films as a result of exposure to ozone [52]. Furthermore, the ultraviolet light used in the UV/ozone cleaning technique can cause discoloration of some optical materials – a phenomenon known as “solarization” (see the discussion on page 334).
10.7 Degradation of optical materials 10.7.1 Problems with IR and UV materials caused by moisture, and thermal and mechanical shocks Many of the materials used for transmissive optics in the infrared and ultraviolet parts of the spectrum are hygroscopic, and susceptible to damage in the presence of moisture. This takes the form of fogging of the surfaces [53]. Whether or not this degradation has any effect on the optical properties of the materials depends on the wavelength of the radiation with which they are used. At very long wavelengths (e.g. much greater than about 10 µm), the fogging may have little effect. The materials that are affected by moisture, which can be classified as salts, include, for example, CsBr, NaCl, LiF, KBr, KCl, CsCl, KI, and BaF2 [14]. The most general way of protecting such materials from moisture-induced damage is by air conditioning or dehumidifying the environment (see page 67). In some cases the humidity requirement can be severe. For example, CsBr can be damaged if the humidity exceeds 35% [14]. A more local method involves keeping the temperature of the optic (such as a window) above the ambient value by placing an incandescent lamp nearby, or by
mounting a heater wire on its periphery. Polished windows made from materials that are particularly prone to fogging in the presence of moisture, such as NaCl, must be stored in a desiccator [53]. Another method of protecting hygroscopic optical materials from moisture is to wrap them in a thin plastic film [14]. Such films will have no effect on the infrared performance of the optics, as long as they transmit the wavelength of light that is to be passed through them, and are wrapped tightly around the components. These films include polyethylene, polyvinylidene chloride copolymer (available in the form of certain food wraps), polyethylene terephthalate (e.g. Mylar®), and polycarbonate (e.g. Lexan®). Plastic films often contain plasticizers, which may pose a contamination problem in some cases. Materials that are particularly prone to damage from thermal shocks include NaCl, CaF2, and BaF2 [53]. Because of their vulnerability to this form of stress, the compounds NaCl, KBr, and CaF2 should not be used in low-temperature work. CaF2 is also inappropriate in high-temperature applications. Quantitative information on the thermal-shock resistance of a variety of ultraviolet, visible, and infrared materials can be found in Ref. [54]. Infrared materials are often mechanically weak. Some particularly delicate ones are NaCl, CaF2, BaF2, and Ge. Resistance to scratching is an issue that has been discussed in the context of cleaning, on page 327.
10.7.2 Degradation of materials by UV light (“solarization”) In the presence of high-intensity radiation at wavelengths of less than about 320 nm, many transmissive optical materials gradually become discolored [10]. This form of degradation (which is often called “solarization”) takes the form of darkening of glass, and the yellowing and crazing of plastic. In the case of glass, the degradation arises because of the creation of radiation-induced defect centers (“color centers”), which cause the formation of absorption bands in the material [55]. Solarization also occurs in, for example, some common types of fused silica (quartz). Some light sources, such as arc discharge lamps and flashlamps, emit substantial amounts of ultraviolet light (even if their purpose is to produce visible light), and will cause such effects. The use of vulnerable materials (especially plastics, such as high-density polyethylene) in sunlight can also result in degradation. Data on solarization in some common optical glasses are presented in Ref. [55]. Damage from ultraviolet light can be a problem for silica optical fibers at wavelengths below 254 nm [10]. Special “solarization-resistant” fibers are available that are stable down to 180 nm.
10.7.3 Corrosion and mold growth on optical surfaces In humid environments, especially in coastal areas, the deposition of salt on surfaces can be a problem (see the discussion on page 67). Salt on optical surfaces can quickly cause failure of optical coatings, such as first-surface mirror and antireflection coatings [17]. The salt can even ultimately damage certain substrate materials.
Other contaminants in the atmosphere can also cause corrosion of coatings. Both hydration and corrosion of interference filter coatings can cause the bandpass wavelength of these devices to drift [10]. In warm, humid environments, a very serious potential problem is the growth of mold or fungi on optical surfaces [17]. In tropical climates this phenomenon can be particularly problematic, although it is not limited to such conditions. For example, mold growth can also occur if optical components are stored in a damp location – even in a temperate climate. Mold infestations can occur on glass surfaces even if these are completely clean, since nutrients within the mold spores themselves can be sufficient for a limited amount of growth. However, organic deposits on these surfaces can act as additional sources of nutrition, and will thereby foster further development. In the early stages of growth, these organisms will increase the light scatter from the surfaces. Later on, corrosive agents produced by the mold can etch patterns into the material – thereby causing permanent damage. The time scale for this process may be on the order of several years. Mold colonies on optical surfaces often have the appearance of a mesh of fine lines, which look similar to a spider’s web. Components that have been etched by mold may have to be replaced. Normally, the best way of preventing mold growth is to keep optical devices in a cool, dry environment. In particular, condensation must be avoided. If the presence of warmth and moisture is unavoidable, optical components can be provided with fungicidal coatings, which will inhibit the growth of mold, without altering the optical properties of the components [17]. Moisture problems in general are discussed in more detail in Section 3.4.2.
10.7.4 Some exceptionally durable optical materials 10.7.4.1 Introduction If an optical system must be exposed to very harsh conditions, it may be worthwhile employing specialized materials in its construction. This would generally not involve all the components in the system, but just those directly exposed to the harsh environment – such as windows. Ordinarily, these items are made of an optical crown glass, such as BK-7. If the environment is corrosive, better choices might be quartz, fused silica, sapphire [10], or possibly even diamond.
10.7.4.2 Sapphire Sapphire (single-crystal Al2O3) is an exceptionally robust material in a number of ways. It is very resistant to chemical attack, high temperatures, thermal shock, mechanical stress, and ionizing radiation. The material also has a very large transmission range, which extends from 150 nm to 6 µm [14]. The sapphire that is used in optical applications is made artificially, by growing crystals from molten Al2O3. It is possible to obtain both windows and lenses made out of sapphire from commercial sources. An unfortunate characteristic of sapphire is the presence of a small amount of birefringence, which can be troublesome in many applications [10]. If only the chemical resistance
is a concern, this problem can be avoided by using windows that comprise a quartz substrate coated by a thin layer of amorphous sapphire. These items, which are commercially available, are as inert as bulk sapphire, but do not have its birefringence problems. Of course, such windows are not helpful if it is the bulk properties of the material that are being exploited. Further information about overcoming problems caused by birefringence in sapphire can be found in Ref. [10].
10.7.4.3 Diamond Perhaps surprisingly, it is possible to obtain windows, and even lenses, made of diamond. This material possesses extreme levels of hardness (the highest of any material), and is highly resistant to chemical attack, thermal shock, abrasion, mechanical stress (especially in compression), and damage from radiation. Diamond has a very large thermal conductivity – about six times that of copper at room temperature. The material has a very large range of optical transmission, which (for type-IIA material) spans wavelengths from 230 nm to beyond 200 µm [14]. In a part of this span, extending from 2.5 µm to 6.0 µm, diamond exhibits unusually high absorption. Diamond is not birefringent. Since most materials that are transparent in the infrared are comparatively delicate, diamond is particularly useful in this part of the spectrum. In research, one of the most important applications of diamond is as a means for subjecting substances to extremely high pressures, using the diamond-anvil cell. The diamond components in these devices (called “anvils,” and made from natural single crystals) are employed both to apply pressure to materials within the cell, and as windows with which to view the resulting behavior. Diamond is also used in devices that produce extremely high thermal loads on the window material, such as high-power infrared lasers. Although windows made from natural diamond crystals can be obtained (at up to about 0.8 cm in diameter and 0.1 cm thick), much larger ones can be made using the process of chemical vapor deposition (CVD). The latter type, which are polycrystalline, may generally be up to 10 cm in diameter and 0.1 cm in thickness. (It is possible to obtain CVD windows of up to 0.2 cm in thickness.)
10.7.4.4 Rhodium as a front-surface mirror coating Metallic films on front-surface mirrors are generally delicate structures. Their robustness can be increased by applying an overcoat of some durable transparent material, such as silicon monoxide. For very harsh conditions, one reflective material that does not require extra protection is rhodium. This metal forms exceptionally hard, tenacious, and corrosion-resistant films when deposited by vacuum evaporation or electroplating. The material has an almost neutral reflectivity in the visible range. The overall reflectivity of rhodium (for white light from a tungsten lamp) is about 80% [56]. Rhodium coatings need not be expensive.
10.7.4.5 Fused silica or silicon carbide diffraction gratings As has been remarked, ordinary replica diffraction gratings are very delicate. Much greater damage resistance can be achieved with gratings made out of solid fused silica or silicon
carbide, with grooves that have been created by chemically assisted ion etching [5]. This includes damage that may be caused if the gratings are used in applications involving high-power pulsed laser light. Such gratings are also considerably easier to clean than replica types. However, they are also far more costly to make.
10.8 Fiber optics 10.8.1 Mechanical properties It would be easy to imagine that fiber-optic cables, which contain at their core tiny filaments of fused silica, are highly fragile. However, partly because of the way they are strengthened and protected by the structural members in the cable, these objects are generally very strong and robust. In fact, fiber-optic cables are more rugged than their electrical counterparts [57]. However, fiber-optic cables do not tolerate continuous flexing. Hence, they should not be used in applications involving, for instance, steerable mechanisms [58]. When they are outside the protected environment of cables, and have had their outer jacket removed, optical fibers are much more vulnerable to breakage. Just how vulnerable depends on the particular type of fiber, as there is a huge variation in the mechanical properties of fused silica fibers [10]. Good ones are almost twice as strong as the strongest steel (maraging steel). However, bare fibers usually break because they have been scratched. Hence, they should be handled in a clean environment (i.e. with no grit, etc.) that is free of sharp objects. The most vulnerable parts of fiber-optic cables are their connectors, when these are uncoupled from their mating counterparts and exposed. Unprotected connector ends are most frequently damaged as a result of impact. To prevent such accidents, and to protect the exposed optical surfaces from contamination, unused fiber-optic connectors should always be covered with protective caps. It is very important always to clean single-mode fiber-optic connector ends whenever they are mated, as discussed on page 323.
10.8.2 Resistance to harsh environments Fiber optics are durable under conditions that can pose problems for electrical cables. Since the fibers are made mostly of quartz, they are tolerant of dirt, moisture, and extremes in temperature. Water, which can cause failure if it enters a coaxial cable, is not a threat to fiber-optic ones. Fiber-optic cables are also compatible with most other fluids [10]. It is possible to obtain optical fibers made out of sapphire, for use under very harsh conditions.
10.8.3 Insensitivity to crosstalk and EMI, and sensitivity to environmental disturbances Two major advantages of fiber-optic cables over conventional ones are that they are immune to crosstalk (interference caused by light passing through nearby fibers), and are unaffected by
ordinary electromagnetic interference. Ground loops, which often arise when conventional cables are employed to connect equipment in an electronic system, are not created when fiber-optic cables are used. As a result, fiber optics give very little trouble when they are used for digital communication links. The situation is different when fiber optics are employed in an optical measurement setup. In this case, fiber-optic methods do have some advantages over those that make use of bulk optics (with light paths in free space, and mirrors and prisms to redirect the beams). These include the ability to move the light around the apparatus at will, without having to worry about alignment. However, the propagation of light through fibers is sensitive to environmental conditions, such as temperature changes, humidity, vibrations, and pressure [10]. Bending a fiber will also produce effects. Such behavior will not affect fiber-optic systems if they are being used to transmit information in digital form. However, if a fiber is used to carry light in an optical experiment (i.e. when analog signals are involved), these effects can make life very difficult. For example, the polarization and phase of light that has traveled through a fiber are unstable. The use of fibers in very low-frequency measurements is difficult because of their high sensitivity to temperature shifts. Bending of a fiber can produce birefringence. Multimode fibers (as opposed to single-mode ones) have the additional problem of mode coupling, which causes noise and drift in the form of changing illumination patterns. Few-mode fibers (which one effectively has when visible, rather than infrared, light is sent through telecommunications fibers) are the worst in this regard. Because their optical alignments are essentially perfect, single-mode fiber-optic systems are very susceptible to etalon fringe problems, and difficulties caused by feedback into lasers (in particular, diode lasers – see page 341).
Fringe effects and polarization instabilities are the major causes of difficulties in apparatus involving fiber optics and lasers [10]. Many of these difficulties can be reduced, at least to some extent, by various methods. For example, etalon fringe effects can be minimized by employing angled fiber ends, and (if necessary) by using Faraday isolators. Polarization instabilities can be tamed with the aid of Faraday mirrors, and by other techniques. These issues are discussed in detail in Ref. [10]. Despite this, in general it can be much less trouble to use bulk optics than fiber-based ones in experimental measurements.
10.9 Light sources 10.9.1 Noise and drift 10.9.1.1 Introduction Noise and drift in light sources are among the most serious reliability problems in optics. In the case of lasers, such instabilities may take the form of changes in intensity, frequency, polarization, and beam direction. Devices based on electric discharges in gases, such as large gas lasers [10,59] and high-pressure arc lamps [10], are particularly prone to intensity
noise at low frequencies. This usually has a 1/f spectrum, and is often ill-behaved. Amplitude noise and drift in large helium–neon (He–Ne) lasers can be as high as 2% RMS and 5% (over an 8 h interval), respectively [60]. A major instability problem in the case of diode lasers is mode hopping [10]. This is essentially a form of frequency noise, but it also causes intensity noise, because a diode laser’s gain is not the same for different modes. The intensity variations can range in size from 0.1% to 1%. Also, if there are any etalon fringes in the optical system, the frequency shifts during mode hopping will cause these to move around, resulting in additional intensity noise. This mode-hopping phenomenon causes intermittent faults that can make low-noise optical systems employing such lasers very difficult to use.
10.9.1.2 Temperature-induced drift As with many devices (electronic, optical, and mechanical), drift in lasers is often caused, at least in part, by temperature changes. For example, diode lasers are extremely sensitive to temperature-induced drift and mode hopping (see page 341). If drift is a concern, it is usually a good idea to allow the source to warm up after being switched on in order to allow its temperature to stabilize. For example, in the case of single-frequency He–Ne lasers, it is generally essential to do this in order to achieve a stable frequency [60]. The warm-up time needed in order to obtain the rated stability may be about 15–60 min for He–Ne lasers in general. For argon-ion lasers used in demanding applications, as much as 2 h may be required in order to achieve adequate stability. When the highest levels of stability are needed, it may be necessary to operate the laser continuously. Some lasers are provided with beam-blocking shutters that allow them to run all the time without releasing light. In the case of water-cooled lasers, it is important to keep the water flow rate constant. This can be done with a pressure regulator or a flow regulator. Active stabilization, which can reduce warm-up times to as little as a minute and significantly improve the long-term stability of output power, beam direction, and frequency, may be desirable in some cases (see page 340).
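To see why warm-up times of this magnitude are plausible, the approach of a laser head toward thermal equilibrium can be crudely modeled as a first-order exponential transient. The time constant and temperature offsets in the following sketch are invented for illustration; they are not manufacturer data.

```python
import math

def warmup_time(tau_min, initial_offset, tolerance):
    """Time (in minutes) for a first-order exponential transient,
    offset(t) = initial_offset * exp(-t / tau_min), to decay to
    within the given tolerance of equilibrium."""
    return tau_min * math.log(initial_offset / tolerance)

# Illustrative (assumed) numbers: a laser head that starts 10 degrees C
# from equilibrium, with a 12-minute thermal time constant, and must
# settle to within 0.1 degrees C.
t = warmup_time(tau_min=12.0, initial_offset=10.0, tolerance=0.1)
print(f"estimated warm-up: {t:.0f} min")  # about 55 min
```

With these assumed numbers the estimate falls near the upper end of the 15–60 min range quoted above; a laser with a shorter thermal time constant, or a looser stability requirement, warms up correspondingly faster.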
10.9.1.3 Drifting polarization in “unpolarized” He–Ne lasers Some helium–neon lasers produce polarized light, while others create light that is nominally unpolarized. However, light from the latter devices may not actually be randomly polarized, but may instead comprise quantities of linearly polarized light with different polarizations, in changing proportions [60]. This can result in light-intensity variations if the beam is passed through polarizing elements in an optical system.
10.9.1.4 Microphonics Lasers are often susceptible to frequency noise that results from vibrations changing the length of the laser cavity (a form of microphonics) [10]. These vibrations may arise for a variety of reasons (see page 71). Water- or air-cooling can be a particularly important source. For example, allowing an excessive water flow rate in the laser head may cause turbulence that can result in vibrations. (Water cooling can also cause problems
for other reasons, as discussed in Section 8.4.) Lasers in which the cooling fans are mounted directly in the laser head should be avoided, if possible, in situations in which frequency noise is a potential problem. Sound waves are also a possible source of vibrational disturbances.
10.9.1.5 Active compensation methods for reducing noise and drift Active compensation techniques exist to minimize the effects of variations. In one such arrangement, the intensity of laser light is stabilized by attenuators that are part of a feedback control loop. This scheme involves measuring the intensity of the light emerging from the laser with a photodetector. The resulting signal is used by an electronic controller to adjust an optical modulator, which moderates variations in the intensity of the light. These devices (which are referred to as “Noise Eaters”) have been claimed to reduce amplitude noise to less than 0.05% up to 600 kHz and d.c. drift to less than 0.1%. They operate independently of the laser, and hence can be added onto many different types of laser system if they are needed. This technique of beam-intensity stabilization is discussed in more detail in Ref. [10]. Some commercial lasers are provided with active stabilization schemes (built into the laser unit) in which signals from photodetectors are used by a controller to adjust the length of the laser cavity. This permits drifts in frequency or intensity to be reduced. For example, certain helium–neon lasers have such a capability, which allows long-term power drifts of 0.2% RMS, or frequency stabilities of 2 MHz, to be achieved. The method is discussed in Ref. [24]. Instabilities in the pointing direction of the laser light can also be subdued using feedback. A sample of the beam is shone onto a quadrant cell, which measures its position. The electronic signal thereby produced is again passed to a control circuit. Output signals from the controller are used this time to tip a mirror with electromechanical actuators, or operate an acousto-optic deflector (AOD), in order to still the beam [10]. Such devices (called “beam-pointing stabilization systems”) are sold commercially. Beam-intensity stabilization is a good approach if it is essential that the intensity of the light be constant. 
This might be the case if the light is being used in dark-field measurements, to heat something (e.g. a sample in a diamond-anvil pressure cell), or to optically pump some system. If this is not required, a much better (and cheaper) method of removing the effects of laser intensity noise is “laser noise cancellation”. This technique involves removing noise only after the light has been converted into an electronic signal. In essence the method entails splitting the laser beam in two and sending half directly to a photodiode, which provides a reference signal. The other half of the beam is sent through the system being measured, and the resulting light is sensed by a second photodiode. A simple analog electronic circuit takes the two signals and subtracts them to remove additive noise, and then uses a log-ratio scheme to get rid of multiplicative noise. Noise cancellation is considerably more effective at removing the effects of laser noise than beam intensity stabilization. The former method can make it possible to carry out shot-noise limited measurements near d.c. even with noisy lasers. Noise cancellation is also better at reducing excess noise from optical measurements than light modulation techniques,
such as those involving beam chopping and lock-in detection. Cancellation circuits can be obtained commercially, or can be built at very low cost. The technique is discussed in Refs. [10] and [19]. Optical-source noise problems in general are discussed at length in Ref. [10].
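The subtract-and-log-ratio idea behind noise cancellation can be illustrated with a short numerical simulation. The noise magnitudes below are invented for the demonstration, and an ideal 50/50 beam split with near-noiseless electronics is assumed; a real circuit is limited by imperfect balance and detector noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Laser with 5% RMS multiplicative intensity noise (illustrative value).
laser = 1.0 + 0.05 * rng.standard_normal(n)

transmission = 0.5    # property of the system being measured
detector_noise = 0.001  # small independent noise on each photodiode

reference = laser + detector_noise * rng.standard_normal(n)
measurement = transmission * laser + detector_noise * rng.standard_normal(n)

# A direct measurement inherits the full laser noise:
direct_rms = np.std(measurement) / np.mean(measurement)

# Log-ratio: log(measurement) - log(reference) = log(transmission) plus
# small residuals, since the multiplicative laser noise is common to
# both arms and cancels.
log_ratio = np.log(measurement) - np.log(reference)
recovered = np.exp(np.mean(log_ratio))
residual_rms = np.std(log_ratio)

print(f"direct RMS noise:       {direct_rms:.3f}")    # ~0.05 (laser noise)
print(f"residual RMS noise:     {residual_rms:.4f}")  # detector-limited
print(f"recovered transmission: {recovered:.3f}")     # ~0.500
```

The residual noise here is set by the independent detector noise, not by the laser, which is the essential point: the cancellation happens after detection, in the electronics, rather than by loading the optical path with an active modulator.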
10.9.2 Some lasers and their reliability issues 10.9.2.1 Diode lasers From a number of perspectives, diode lasers have very attractive properties. They are compact, efficient, relatively inexpensive compared with other types of laser, and mechanically very robust. However, for the purposes of making measurements, diode lasers have some serious shortcomings. Their instability and mode-hopping problems are very severe. These lasers are highly coupled, optically, to the external environment. Hence, even very weak reflections of emitted light back into the laser (as low as about one part in 10⁶) can set off mode-hopping activity, which can be very erratic. The frequency stability of diode lasers is also very sensitive to the stability of their temperature and driving current. Achieving stable operation requires preventing back-reflections as much as possible, and using special hardware to control the temperature and current [10]. One way of reducing reflections is to use Faraday isolators, and purpose-made devices of this type (“diode laser isolators”) are available commercially. Another method of preventing mode hopping is to switch the laser on and off very rapidly. When this is done, at frequencies of 300–500 MHz, mode hops are suppressed, and the resulting beam is much more stable. However, such high-frequency switching greatly increases the line width of the laser light. Also, unless path differences are very small, the resulting phase noise will be a problem in interferometric measurements. Drivers are available that can carry out the modulation, and it is possible to obtain laser modules that are provided with this ability. Other methods of preventing mode hopping in diode lasers also exist (see Ref. [10]). Diode lasers are very vulnerable to damage by electrostatic discharge (ESD), which is the most common cause of failures in these devices [10].
An important measure for preventing such damage is to ensure that the laser’s leads are always shorted out when the laser is not in use. In the case of a laser that is connected to its power supply, this can be done by providing a relay in the latter. The relay contacts are connected between the anode and the cathode of the laser, and shorts them when the power is off. An alternative approach, which is useful if it is not necessary to modulate the laser at high frequencies, is to permanently install a 1 µF bypass capacitor between the anode and cathode. Methods for handling and using ESD-sensitive devices are discussed in Section 11.5. Besides ESD, another important cause of diode laser damage is the presence of voltage spikes and surges on the power line. Ordinary power supplies are not adequate for operating diode lasers, partly because they do not have the needed protection circuitry [11]. Special diode-laser driver electronics should be used for this purpose. In many cases, the best way of preventing ESD and power-quality related damage is to avoid dealing with laser diodes as distinct components, and instead to use modules that include the laser, a
reliable driver circuit, and a filter for ESD protection. Such modules may also contain a temperature-control system to prevent temperature-related mode hopping and frequency drift. For some undemanding applications, laser pointers provide acceptable performance and reliability, at very low cost. One of the major advantages of diode lasers over gas lasers is their very high electrical-to-optical power conversion efficiencies (tens of percent). In situations where relatively large amounts of optical power are required, and where the inherent limitations of diode lasers are acceptable or can be overcome, it is often possible to use such lasers without the need for large amounts of electric power, and without having to provide forced-air or water cooling. In contrast, gas lasers, with their very low efficiencies (0.001–0.2% for an argon-ion device), require comparatively large amounts of electric power in order to provide equivalent levels of optical power. In many cases, trouble-prone water cooling systems are needed in order to remove the heat. If they have been correctly configured (with adequate ESD protection, etc.), diode lasers are long-lived devices, and require no maintenance. Typical lifetimes are on the order of 10⁵ hours [15].
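The practical consequence of this efficiency gap is easy to see with a little arithmetic. The two efficiency values below are illustrative points chosen from the ranges quoted above, not measured figures for any particular laser.

```python
# Electrical power needed for 100 mW of optical output.
optical_power_W = 0.1

diode_efficiency = 0.30    # "tens of percent" - assume 30%
argon_efficiency = 0.0005  # 0.05%, within the quoted 0.001-0.2% range

p_diode = optical_power_W / diode_efficiency
p_argon = optical_power_W / argon_efficiency

print(f"diode laser:     {p_diode:.2f} W electrical")  # 0.33 W
print(f"argon-ion laser: {p_argon:.0f} W electrical")  # 200 W
```

Nearly all of the 200 W drawn by the gas laser is dissipated as heat, which is why forced-air or water cooling becomes necessary at optical power levels that a diode laser can deliver with no cooling at all.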
10.9.2.2 Helium–neon lasers For many laboratory applications where low optical power levels are sufficient, especially those involving precision measurements, helium–neon (He–Ne) lasers may be preferable to diode lasers [10]. The He–Ne lasers are far less sensitive to reflected light than diode lasers. This, in conjunction with a much lower sensitivity to temperature changes, means that the intermittent mode-hopping behavior (and the resulting frequency and intensity noise) associated with diode lasers is substantially absent in He–Ne devices. The frequency stability of He–Ne lasers is generally excellent. However, as noted above, He–Ne lasers do display substantial intensity noise and drift. Some commercial devices are provided with active stabilization circuits to reduce intensity or frequency variations. Although He–Ne lasers are relatively mechanically fragile in comparison with diode lasers, they are immune to ESD and voltage transients. He–Ne lasers are also long-lived devices, and require essentially no maintenance. Lifetimes for lasers with hard-sealed tubes range from 10 000 to 20 000 h [60].
10.9.2.3 Other gas lasers Aside from helium–neon lasers, the gas laser family includes, for example, argon-ion, helium–cadmium, and carbon-dioxide devices. These lasers share with He–Ne types the tendency to exhibit a large amount of intensity noise and drift. Helium–cadmium lasers have especially poor noise and drift properties, with possible RMS amplitude noise levels of 2–2.5%, and long-term power fluctuations of 1–10% [60]. Unlike He–Ne lasers, other gas lasers often require regular maintenance, such as cleaning of mirrors and windows, and alignment. As is the case with lasers in general, high-power or high-energy gas lasers usually have lower reliability, and require more hands-on care than low-power or low-energy
ones [14]. The lifetimes of continuous gas lasers are typically in the several thousand hour range [60].
10.9.2.4 Solid-state lasers Solid-state lasers use a solid material, such as a crystal or a glass, as the lasing medium. When pumped with diode lasers, these devices generate relatively quiet and stable beams when compared with gas lasers. (The beams are also of considerably higher quality than those produced by the diode lasers used for pumping.) For example, in the case of continuous diode laser pumped Nd–YAG lasers, the RMS amplitude noise from 10 Hz to 10 MHz typically ranges from 0.2% to 1% [60]. These Nd–YAG lasers are long-lived devices, with lifetimes that are usually limited only by those of their pump lasers. Their maintenance requirements consist of occasional cleaning and alignment. In many ways, diode-laser pumped solid-state lasers are superior to other types of laser as continuous sources of radiation. However, they are also very expensive. Diode-laser pumped solid-state lasers can exhibit good power efficiencies (as high as 10%, in the case of Nd–YAG types [60]). Like diode lasers working alone, these can also produce relatively high optical power levels without the need for water cooling.
10.9.3 Some incoherent light sources Incandescent (or tungsten-filament) lamps are quiet and stable sources of broadband incoherent light. Tungsten–halogen types, which use a chemical method to prevent the build-up of evaporated tungsten on the bulb, are substantially brighter, yet last considerably longer than ordinary light bulbs (typically 2000 h vs. 1000 h). At the cost of short-term periodic variations in the emitted light, incandescent lamps can be made more stable over the long term if they are run on a.c. rather than d.c. [10]. A potential source of premature failure of incandescent lamps is vibration, which causes fatigue failure of the filament. High-pressure arc lamps are much brighter than incandescent ones, but are also relatively noisy and unstable. Both their emitted light intensity and their spectrum fluctuate. Power variations can be as high as ±10%. The intensity noise and drift compensation schemes used with lasers are not as effective when applied to incoherent light sources, and particularly arc lamps. This is because the noise depends strongly on the position and angle of the source. If a stable source of white light is needed, incandescent lamps are often a better choice than arc lamps [10]. The importance of keeping the envelopes of tungsten–halogen lamps and arc lamps clean is discussed on page 321. Light-emitting diodes (LEDs) are usually relatively narrowband sources of light, in comparison with incandescent lamps and high-pressure arc lamps. (White-light types are available, which typically use various schemes to convert monochromatic light from a short-wavelength LED to broadband radiation.) LEDs are quiet, stable, and mechanically very robust. Their lifetime is very long compared with other incoherent light sources (more than 10⁵ h). Over their lifetime, their light output (at a given current) will decrease significantly.
Visible and near-visible optics
10.10 Spatial filters

Spatial filtering is sometimes used to remove unwanted spatial irregularities in laser beams. This is often done by using a lens to focus the beam through a very small hole (a “pinhole”) in a metal foil. The minute dimensions of the hole (e.g. 10 µm diameter) mean that precise alignment of the beam with the hole is essential. Commercial spatial filters are modules that comprise a microscope objective lens for focusing, a pinhole, and a mechanical x–y–z translation stage to permit alignment of the lens and the pinhole, and adjustment of their separation. Because of the need for very good alignment, spatial filters are generally finicky devices. They are easily misaligned, hard to align, and are vulnerable to position shifts due to temperature changes and unintended movements of their translation stages [10]. It pays to use high-quality spatial filters, particularly with regard to the precision and stability of their pinhole positioning mechanisms. (See also the comments in Section 8.2.3.) A large pinhole and a suitably small numerical aperture should be used, if possible. For visible light, pinhole diameters of greater than 20 µm are preferred. A potential problem with spatial filters is that dust particles that drift close to the pinhole can cause significant fluctuations in the beam intensity [61].
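A common rule of thumb (not stated in the text; the function and its default margin are illustrative assumptions) is to choose a pinhole about 1.5 times the diffraction-limited focal-spot diameter, which for a Gaussian beam of 1/e² radius w focused by a lens of focal length f is roughly 2fλ/(πw):

```python
import math

def pinhole_diameter_um(wavelength_nm, beam_diameter_mm, focal_length_mm,
                        margin=1.5):
    """Rule-of-thumb spatial-filter pinhole size: margin x the 1/e^2
    diameter of the focused Gaussian spot, 2*f*lambda/(pi*w)."""
    w = 0.5 * beam_diameter_mm * 1e-3      # input 1/e^2 beam radius (m)
    f = focal_length_mm * 1e-3             # lens focal length (m)
    lam = wavelength_nm * 1e-9             # wavelength (m)
    spot = 2.0 * f * lam / (math.pi * w)   # focused spot diameter (m)
    return margin * spot * 1e6             # suggested pinhole diameter (um)
```

For a 1 mm diameter He–Ne beam (633 nm) focused by a 16 mm objective, this gives a pinhole of roughly 19 µm, of the same order as the "greater than 20 µm preferred" guideline above.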
10.11 Photomultipliers and other light detectors

Perhaps the most trouble-prone detectors of light are photomultipliers (PMTs). These devices are capable of detecting exceedingly low levels of light, and are normally operated under low-light conditions. Exposing PMTs to daylight while they are provided with power can destroy them [10]. Even if this is done while they are unpowered, their dark current will greatly increase. It may then take a few days of operation under power for this to return to normal. PMTs can produce a noisy signal if they are subjected to vibrations (i.e. they may be microphonic). Their characteristics depend on their history – in the form of light intensity and voltage hysteresis, at levels of a few percent. Some PMTs are sensitive to very low magnetic fields of about 1 G, while others can operate satisfactorily at fields of 1 kG. They tend to be mechanically fragile devices that are subject to damage by excessive stresses (a particular problem if they are placed in a mount, and subsequently cooled), accelerations, and vibrations [14]. The diffusion of atmospheric helium through the walls of PMTs, and interior surface changes, create upper limits on their lifetimes [10]. Usually this is on the order of a few years, under low-current or dark-storage conditions, but PMTs with soda-glass envelopes can be destroyed in an hour if there is a large amount of helium in the environment. This might be the case if mass-spectrometer leak testing, or helium-based cryogenic work, is being done in the vicinity. Other optical detectors tend to be much more robust. Avalanche photodiodes (APDs) can be damaged by excess voltages, and photodiodes by ultraviolet light [10]. APDs are very sensitive to changes in temperature.
10.12 Alignment of optical systems

Inadvertent misalignment of optics is a general problem, but of particular concern in the case of lasers, and especially high-power types. Misalignment can occur during transport of optical apparatus, because of mishandling in the laboratory, or because of repeated temperature cycling. In the case of the components comprising a laser cavity, misalignment can cause the laser to be unstable, so that (possibly because of temperature shifts) the laser modes may change [61]. If this happens in a high-power laser, the result may be damage to components in the optical system due to excessive light intensities [26]. Keeping lasers in good alignment is normally very important. The alignment of optical systems in general (and designing such systems for easy and stable alignment) is discussed in Ref. [10].
Further reading

Reference [10] contains information on reliability issues for a wide range of topics in optics, and is recommended for any laboratory doing a significant amount of optical work. Other books that discuss this subject are Refs. [1], [14], and [17]. Comprehensive information on laser reliability issues, including noise, drift, failure modes, and maintenance requirements, can be found in Ref. [60]. References [24] and [61] are other useful sources of information on lasers. A collection of rules-of-thumb in optics, some of which can be helpful in avoiding mistakes when carrying out scientific optical work, can be found in Ref. [15].
Summary of some important points

10.2 Temperature variations in the optical path

(a) Small air temperature shifts can result in large optical path length changes (e.g. for light in the visible: 1 K over 1 m changes the length by 1 µm, in air at STP).
(b) These effects are sometimes said to be caused by “air turbulence,” but are actually the result of warm or cold air moving into the optical path.
(c) The main method of limiting such problems is to keep sources of heat or cold away from the vicinity of the optics.
(d) Enclosing the beam path to prevent air movements is a useful measure.
(e) The problem can be eliminated by placing the optical system in a vacuum, but replacing the air with helium is almost as effective.
(f) In the case of interferometry, another useful measure is to make the test and reference beams traverse the same (or nearly the same) paths.
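The figure in point (a) can be checked from the refractivity of air: n − 1 ≈ 2.9 × 10⁻⁴ at visible wavelengths, and since the density of an ideal gas scales as 1/T, the index changes by about 10⁻⁶ per kelvin near room temperature. A sketch of the arithmetic (the values used are standard textbook numbers, not taken from this section):

```python
def path_length_change_um(length_m, delta_T_K, n_minus_1=2.9e-4, T_K=293.0):
    """Optical path length change (in micrometres) of a beam in air whose
    temperature shifts by delta_T_K, using dn/dT ~ -(n - 1)/T for an
    ideal gas near room temperature."""
    dn_dT = -n_minus_1 / T_K                        # ~ -1e-6 per kelvin
    return abs(length_m * dn_dT * delta_T_K) * 1e6  # micrometres
```

Evaluating path_length_change_um(1.0, 1.0) gives a value very close to 1 µm, matching the statement above.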
10.3 Temperature changes in optical elements and support structures

(a) Temperature changes in the materials that comprise optical systems can produce undesirable effects.
(b) The most important consequences of temperature changes are: (i) alterations in the focal lengths of optical assemblies (especially those involving infrared lenses), (ii) component distortions caused by differences in thermal expansion coefficients, (iii) bending of structures due to temperature gradients, (iv) drifts in the bandpass wavelengths of interference filters.
(c) In laboratory situations, the easiest method of preventing such problems is often to control the environment, by removing heat sources, regulating air temperatures, homogenizing air temperatures with fans, etc.
(d) If it is possible to choose the materials to be used in optical apparatus, the best passive method of reducing problems due to temperature gradients is to employ materials with low thermal expansion coefficients.
10.4 Materials stability

(a) Materials used in optics can be unstable with regard to physical dimensions, modulus of elasticity, and index of refraction.
(b) Property changes can come about as a result of shifts in temperature (usually the most important cause), time (e.g. “creep”), humidity levels, and exposure to radiation.
(c) Aluminum is a useful structural material in the presence of non-uniform changes in environmental temperature (because of its high thermal conductivity), while low thermal expansion materials such as Invar are helpful if either uniform or non-uniform temperature changes are expected.
(d) Plastics are generally unstable materials – they have high thermal-expansion coefficients, low thermal conductivities, are prone to creep under load, and swell in the presence of moisture.
(e) In structural applications where stability is important, aluminum is a much better alternative to plastics.
(f) In the case of mechanical components, such as translation stages, creep of metals is generally of less importance than other phenomena (such as migration of lubricants and Brinelling of bearings) as a source of instability.
10.5 Etalon fringes

(a) Unwanted interference fringes (“etalon fringes”) often form in optical systems using lasers, owing to stray light caused by reflections.
(b) Etalon fringes can cause serious problems in precision measurements – e.g. light intensity fluctuations, and frequency pulling of lasers.
(c) These effects can be very erratic, because the fringes can change as a result of small vibrations, air currents, temperature changes, and mechanical drift.
(d) Etalon fringes are especially troublesome when optical elements with flat surfaces (such as windows and prisms) are closely spaced and in alignment.
(e) Stray light beams that have been ignored can be an insidious cause of etalon fringes, because they multiply in number as they strike successive optical components.
(f) The formation of etalon fringes can be prevented by: (i) keeping the optical system simple, (ii) avoiding the use of closely spaced planar parallel surfaces, (iii) redirecting beams using mirrors rather than prisms, (iv) trapping stray light rays using beam dumps, (v) slightly misaligning components that could give rise to etalon fringes.
10.6 Contamination of optical components

(a) Although dust is a very noticeable form of contamination, it is often less of a threat than might be thought, because the particles tend to land on surfaces that are out of focus.
(b) Some devices and systems that are particularly vulnerable to problems caused by contamination are: (i) high-power laser systems, (ii) diffraction gratings, (iii) instruments making use of ultraviolet light, (iv) mirrors and windows inside a laser cavity, (v) optical elements near a focal plane.
(c) Fingerprints are often the major form of contamination in optical work, and skin secretions can attack optical coatings and glass.
(d) Contaminants in solvents used to clean optics are an insidious cause of trouble.
(e) Single-mode fiber-optic connectors are very vulnerable to attenuation of light caused by the presence of contaminants on their optical surfaces, and may be damaged if moderate light power levels (≈ 30 mW) are also present.
(f) Hence, fiber-optic connectors should always be cleaned before being mated.
(g) Cleaning of optical components should generally be done only when it is really necessary, since cleaning usually results in some damage.
(h) This damage may have a more significant effect on optical performance (in the form of increased light scattering) than small amounts of contamination.
(i) Some components that are particularly vulnerable to damage during cleaning are replica diffraction gratings, components used in the resonant cavities of lasers (mirrors, Brewster windows, etc.), and first-surface metal mirrors that are not provided with a protective coating.
(j) Machine-shop-quality compressed air should not be used for blow-dusting optics.
(k) If hard optical surfaces are to be cleaned by wiping with direct pressure, this should be preceded by the removal of particulates (e.g. by blow dusting), in order to reduce scratching.
(l) Liquid cleaning agents should be selected carefully. Common solvents (e.g. rubbing alcohol) are often impure; spectroscopic-grade methyl alcohol is usually the best. Some soaps contain abrasives such as sand.
(m) Ultrasonic cleaning is very effective at removing heavy contamination, but should not be used on delicate optical components.
(n) Vapor degreasing is an excellent method of removing oil, grease and fingerprint residues from delicate optical components that must be exceptionally clean, such as laser Brewster windows.
(o) The plastic film peeling technique is a slow, but highly effective method of removing contaminants (especially particulates) from very delicate optical surfaces.
(p) CO2 snow cleaning is well suited for removing contaminants, including very fine particulates, even from very large and delicate optics.
10.7 Degradation of optical materials

(a) Materials used for transmissive optics in the infrared and ultraviolet are often susceptible to damage by moisture.
(b) One way of protecting a component made of such a material is to keep it warm, e.g. by placing an incandescent lamp nearby, or by attaching a heater wire to its periphery.
(c) Transmissive materials for use in the infrared and ultraviolet are also sometimes mechanically weak and prone to damage from thermal shocks.
(d) When exposed to high-intensity light at wavelengths of less than about 320 nm, many transmissive optical materials gradually become discolored.
(e) In warm, humid environments, mold can grow on optical surfaces. Mold can eventually etch optical materials and cause permanent damage.
(f) Some transmissive optical materials that are suitable for use in corrosive environments are fused silica, quartz, sapphire, or diamond.
(g) Lenses and windows can be made out of sapphire, which is very resistant to chemical attack, high temperatures, thermal shock, mechanical stress, and ionizing radiation.
(h) It is possible to obtain windows and lenses made from diamond, which, like sapphire, is extremely rugged. It is transparent over a large part of the infrared, and is far more robust than other infrared materials.
(i) Rhodium can be used to make front surface mirror coatings, which are exceptionally hard, tenacious, and corrosion-resistant.
10.8 Fiber optics

(a) Fiber-optic cables are very strong and robust – generally more so than their electrical counterparts.
(b) However, bare optical fibers (outside a cable, and without their protective jacket) are vulnerable to breakage.
(c) Bare fibers usually break because they have been scratched. Hence, they should be handled in a clean environment (i.e. with no grit, etc.) that is free of sharp objects.
(d) Fiber-optic cables are durable under environmental conditions that could pose problems for electrical cables, such as those involving dirt, moisture (they can be placed underwater), and extremes in temperature.
(e) Unlike electrical cables, fiber-optic ones are immune to electromagnetic interference problems, and are therefore excellent for the trouble-free transmission of digital signals.
(f) However, when fibers are carrying analog, rather than digital, optical signals, such signals are susceptible to alteration by temperature changes, humidity, vibrations, and pressure.
(g) The phase and polarization of light traveling through an optical fiber are unstable.
(h) Single-mode fiber-optic systems are highly susceptible to etalon fringe problems, and feedback into lasers – particularly diode lasers.
(i) Although methods exist to tame the analog signal problems of optical fibers to some extent, it is often much better to use bulk optics in experimental measurements.
10.9 Light sources

(a) Noise and drift of light sources are among the most serious reliability problems in optics.
(b) Some particularly troublesome light sources are large gas lasers and high-pressure arc lamps (severe 1/f intensity noise and drift), and diode lasers (intermittent mode-hopping behavior, causing frequency and intensity noise).
(c) Laser drift is often caused by temperature changes after switch-on. Hence, lasers should be allowed to warm up for perhaps 15–60 min prior to use, in order to allow stabilization.
(d) Active compensation methods can be very useful in reducing laser noise and drift, or in reducing their effects.
(e) A particularly useful and inexpensive technique for reducing the effects of intensity noise is “laser noise cancellation.”
(f) Although diode lasers have some attractive properties compared with other laser types (including mechanical robustness), they can suffer from severe mode-hopping behavior, and are easily damaged by electrostatic discharges (ESD), and power supply spikes and surges.
(g) Even very tiny levels of light reflected back into a diode laser (one part in 10⁶) can set off mode hopping.
(h) Mode hopping in diode lasers can also be caused by instabilities in temperature or driving current.
(i) A good way of avoiding (at least to some extent) such problems is to avoid working with diode lasers as isolated components, and to use diode laser modules instead. (But one still has to deal with reflections.)
(j) For many types of laboratory work involving precision measurements, helium–neon (He–Ne) lasers are preferable to diode lasers.
(k) He–Ne lasers are much less sensitive to reflected light than diode types, generally do not undergo mode hopping, and are robust with regard to ESD and power supply spikes and surges.
(l) Gas lasers in general (including He–Ne types) tend to exhibit large amounts of low frequency noise and drift. Active stabilization methods can be used to reduce this.
(m) With the exception of He–Ne lasers, gas lasers require regular maintenance, in the form of cleaning of windows and mirrors, and realignment.
(n) Solid-state lasers that are pumped by diode lasers are relatively quiet and stable in comparison with gas lasers, but also require some periodic cleaning and realignment.
(o) Incandescent (tungsten filament) lamps are quiet and stable sources of white light, and often better suited for precision optical work than noisy and drift-prone high-pressure arc lamps.
10.10 Spatial filters

(a) Spatial filters are prone to internal misalignment as a result of mechanical and thermal effects.
(b) High-quality devices, with precise and stable mechanisms for positioning the pinhole, should be selected.
10.11 Photomultipliers and other light detectors

(a) Photomultipliers (PMTs) are generally intended to be used with dim light only – they can be destroyed if exposed to daylight while powered.
(b) The diffusion of helium from the atmosphere through the walls of PMTs, and interior surface changes, set an upper limit on their lifetime. This is normally a few years, under optimal conditions.
(c) However, PMTs with soda-glass envelopes can be destroyed in an hour if there is a large concentration of helium in the environment.
References

1. D. Vukobratovich, in Applied Optics and Optical Engineering, Vol. XI, R. R. Shannon and J. C. Wyant (eds.), Academic Press, 1992.
2. P. Armitage, Laser Focus World 35, Pt. 5, 257 (1999).
3. Air Turbulence Effects on Lasers, Hamar Laser Instruments, Inc. www.hamarlaser.com
4. D. Malacara, in Physical Optics and Light Measurements, D. Malacara (ed.), Academic Press, 1988.
5. E. G. Loewen and E. Popov, Diffraction Gratings and Applications, Marcel Dekker, 1997.
6. R. N. Wilson, Reflecting Telescope Optics II: Manufacture, Testing, Alignment, Modern Techniques, Springer-Verlag, 1999.
7. I. Filinski and R. A. Gordon, Rev. Sci. Instrum. 65, 575 (1994).
8. P. J. Rogers and M. Roberts, in Handbook of Optics: Vol. 1, 2nd edn, M. Bass (ed.), McGraw-Hill, 1994.
9. P. C. D. Hobbs, Chapter 20: Thermal Control. Available from: www.electrooptical.net.
10. P. C. D. Hobbs, Building Electro-Optical Systems: Making it all Work, John Wiley and Sons, 2000. A second edition of this book has been published (John Wiley and Sons, 2009).
11. The Newport Resource 2004 (catalog), Newport Corporation, 1791 Deere Ave., Irvine, CA 92606, USA. www.newport.com
12. Carpenter Technology Corporation, 2 Meridian Blvd., Wyomissing, PA, USA. www.cartech.com
13. Schott AG, Hattenbergstr. 10, 55122 Mainz, Germany. www.schott.com
14. J. H. Moore, C. C. Davis, M. A. Coplan, and S. C. Greer, Building Scientific Apparatus, 3rd edn, Westview Press, 2002.
15. E. Friedman and J. L. Miller, Photonics Rules of Thumb: Optics, Electro-Optics, Fiber Optics and Lasers, 2nd edn, McGraw-Hill, 2004.
16. S. F. Jacobs, in Applied Optics and Optical Engineering: Vol. X, R. R. Shannon and J. C. Wyant (eds.), Academic Press, 1987.
17. P. R. Yoder, Opto-Mechanical Systems Design, Marcel Dekker, 1986.
18. E. Hecht and A. Zajac, Optics, Addison-Wesley, 1974.
19. P. C. D. Hobbs, Appl. Opt. 36, 903 (1997).
20. M. J. Soileau, in Encyclopedia of Materials Science and Engineering, Vol. 4, M. B. Bever (ed.), Pergamon, 1986.
21. R. P. Breault, in Handbook of Optics, Vol. 1, 2nd edn, M. Bass (ed.), McGraw-Hill, 1994.
22. N. Carosso, Contamination Engineering Design Guidelines, NASA. Internet: https://400dg.gsfc.nasa.gov/sites/400/docsguidance/All%20Documents/Contam_Eng_Guidelines.doc
23. www.fiber-optics.info/articles/connector-care.htm
24. Sam’s Laser FAQ. www.laserfaq.org/sam/lasersam.htm
25. D. J. Scatena and G. L. Herrit, Laser Focus World 26, 117 (1990).
26. J. Doty, Photonics Spectra 34, 113 (2000).
27. W. R. Hunter, Appl. Opt. 16, 909 (1977).
28. S. Guch, Jr. and F. E. Hovis, Proc. SPIE 2114, 505 (1994).
29. J. F. James, Spectrograph Design Fundamentals, Cambridge University Press, 2007.
30. F. Y. Genin, M. D. Feit, M. R. Kozlowski et al., Appl. Opt. 39, 3654 (2000).
31. Z. Malacara and A. A. Morales, in Geometrical and Instrumental Optics, D. Malacara (ed.), Academic Press, 1988.
32. W. Demtröder, Laser Spectroscopy: Basic Concepts and Instrumentation, Springer, 2003.
33. M. M. Hills and D. J. Coleman, Appl. Opt. 32, 4174 (1993).
34. B. A. Unger, R. G. Chemelli, and P. R. Bossard, Proceedings of the EOS/ESD Symposium, September 1984, Vol. EOS-6, pp. 40–44.
35. J. Dyer, S. Brown, R. Esplin et al., Proc. SPIE 4774, 8 (2002).
36. N. M. Harvey and R. R. Herm, Int. J. Engng. Sci. 21, 349 (1983).
37. C. I. Kerr and P. C. Ivey, J. Turbomach. 124, 227 (2002).
38. S. A. Hoenig, Appl. Opt. 18, 1471 (1979).
39. S. A. Hoenig, Appl. Opt. 19, 694 (1980).
40. S. A. Hoenig, Laser Induced Damage in Optical Materials: 1981. Proceedings of a Symposium (NBS-SP-638), 1983, pp. 280–297.
41. S. A. Hoenig, Appl. Opt. 21, 565 (1982).
42. R. Chow, R. Bickel, J. Ertel et al., Optical System Contamination: Effects, Measurements, and Control VII, P. T. Chen and O. M. Uy (eds.), Proc. SPIE 4774, 19 (2002).
43. R. P. Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics, Vol. II, Addison-Wesley, 1964.
44. P. Y. Bely (ed.), The Design and Construction of Large Optical Telescopes, Springer, 2003.
45. R. R. Zito, in Advances in Thin-Film Coatings for Optical Applications II, M. L. Fulton and J. D. T. Kruschwitz (eds.), Proc. SPIE 5870, 587005 (2005).
46. E. I. du Pont de Nemours and Company. www2.dupont.com/Vertrel/en_US/
47. J. H. Moore, C. C. Davis, and M. A. Coplan, Building Scientific Apparatus: a Practical Guide to Design and Construction, Addison-Wesley, 1983.
48. Handbook for Critical Cleaning: Aqueous, Solvent, Advanced Processes, Surface Preparation, and Contamination Control, B. Kanegsberg and E. Kanegsberg (eds.), CRC Press, 2000.
49. This product, formerly known as “Opticlean,” is now called “First Contact™.” It is available from Photonic Cleaning Technologies. www.photoniccleaning.com
50. J. M. Bennett and D. Rönnow, Appl. Opt. 39, 2737 (2000).
51. R. W. C. Hansen, J. Wolske, and P. Z. Takacs, Nucl. Instr. Meth. Phys. Res. A 347, 254 (1994).
52. J. R. Vig, J. Vac. Sci. Technol. A 3, 1027 (1985).
53. M. H. Mosley, in Laboratory Methods in Infrared Spectroscopy, 2nd edn, R. G. J. Miller and B. C. Stace (eds.), Heyden and Son, 1972.
54. P. J. Rogers, Proc. SPIE 1781, 36 (1993).
55. U. Natura and D. Ehrt, Glastech. Ber. Glass Sci. Technol. 72, 295 (1999).
56. J. Yarwood, High Vacuum Technique, Chapman and Hall, 1967.
57. P. Horowitz and W. Hill, The Art of Electronics, 2nd edn, Cambridge University Press, 1989.
58. L. Ekman, in Space Vehicle Mechanisms: Elements of Successful Design, P. L. Conley (ed.), John Wiley & Sons, Inc., 1998.
59. J. P. Goldsborough, in Laser Handbook, Vol. 1, F. T. Arecchi and E. O. Schulz-Dubois (eds.), North-Holland, 1972.
60. J. Hecht, The Laser Guidebook, 2nd edn, TAB Books, 1992.
61. E. Gratton and M. J. vandeVen, in Handbook of Biological Confocal Microscopy, 3rd edn, J. B. Pawley (ed.), Springer, 2006.
11
Electronic systems
11.1 Introduction

The most annoying problems that are encountered during the use of electronic systems are often intermittent in nature. Electromagnetic interference, corona and arcing in high-voltage circuits, and some other causes of potentially intermittent faults, are covered in this chapter. Other reliability problems in electronic hardware, such as the failure of high-power equipment, are also discussed below. Difficulties involving electrical contacts, connectors and cables are very common in electronic work, and frequently intermittent. These are dealt with in Chapter 12. Problems with mains power disturbances and overheating of equipment are considered in Sections 3.6 and 3.4.1, respectively. Vibrations can be a cause of noise in electronic systems in the form of microphonics, and these are discussed in Section 3.5.3. A general survey of some of the causes of intermittent failures in experimental work is presented in Section 3.3. With some exceptions, details of the design, construction, and troubleshooting of electronic circuits and equipment items (e.g. self-contained electronic instruments) are not discussed in this chapter. Such information is provided in references listed in the “Further reading” section on page 403.
11.2 Electromagnetic interference

11.2.1 Grounding and ground loops

11.2.1.1 General points

Importance of grounding arrangements

Proper grounding is generally necessary for the reliability of electronic systems. This includes both the topology of ground networks, and the quality of the electrical contacts made between ground conductors. (The latter is discussed in Section 12.2.4.) Failure to provide an adequate ground arrangement often leads to erratic noise problems and system malfunctions. In some cases, equipment damage is even a possibility. The words “grounding” and “earthing” can be a source of confusion. It should be stressed that “proper grounding” does not refer to making good electrical contact to the soil or rock surrounding a building. In the context of reducing noise due to ground loops or other
[Fig. 11.1: Ground loop involving the a.c. mains safety-ground. Noise voltages between power outlets create currents that pass through the coaxial cable shield. This produces a voltage drop along the shield, which appears as noise at the input of the amplifier in device B.]
electromagnetic interference, the term actually refers to the nature of the interconnections between equipment chassis (so-called “chassis grounds”), signal or data cable shields, a.c. mains safety-ground conductors, and nearby conducting objects, such as cable trays, equipment racks, and vacuum lines. It is not necessary to make contact with the soil or rock surrounding a building in order to be substantially free of electromagnetic interference problems, and it is generally not even helpful. (After all, an aircraft can contain a multitude of electronic devices that must work together, in an electromagnetically noisy environment, and there is no possibility of making electrical contact to planet earth.)
The nature of the problem

Ground connections between instruments are often created in ad-hoc or unintentional ways as a measurement system is constructed. They form as instruments and interconnecting signal or data cables are added to the system, power leads are connected, instruments are mounted onto racks, and because of other (sometimes insidious) actions and events. One of the results of the uncontrolled ground topologies that follow from this approach is that voltage drops along a ground conductor in one part of a system may cause spurious voltages to appear in another, apparently unrelated, part. This often leads to electrical noise during sensitive measurements, and can also cause many other problems in electronic systems. A condition that frequently gives rise to such phenomena is referred to as a ground loop (or multiply connected ground). A simple example of this is shown in Fig. 11.1. Two devices (e.g. a preamplifier and a lock-in amplifier) are connected by a shielded cable that is grounded at both ends to their respective enclosures. Both instruments are operated from the a.c. mains, and the safety ground wires in their power cords also connect them with the common a.c. mains safety-ground, via different power outlets in a room.
Although one might think that the a.c. mains safety-grounds are everywhere at the same voltage, this is not the case. Net voltages can be induced in a ground conductor owing to unbalanced inductive coupling between current-carrying power conductors and the ground conductor. Such phenomena are especially prominent in situations where the conductors are loosely contained within (ferromagnetic) steel conduit. (This sort of unintended configuration is referred to as a “parasitic transformer” [1].) Furthermore, stray currents are often present on a.c. mains safety-ground conductors, owing to various causes. These include current leakages from devices with poor or degraded insulation (such as electric motors), or from devices with appreciable hot-to-ground capacitances (such as electronic devices with mains electromagnetic interference (EMI) filters). These stray currents pass along an a.c. mains safety-ground conductor and, because of its finite resistance, give rise to voltage differences between different points on the conductor (e.g. at different power outlets). Occasionally, very large stray currents may be present on an a.c. mains safety-ground conductor if a power plug or receptacle somewhere has not been wired correctly (see page 357). The differences in ground voltage between the power outlets (as shown in Fig. 11.1), which have been produced by parasitic transformer action and leakage currents, cause currents to pass through the cable shield linking devices A and B. These create a voltage drop along the shield, which ends up as noise at the input of the amplifier in device B. Because this mode of entry of noise into the signal path is based on the noise and signal sharing a common impedance (the cable shield), it is called common-impedance coupling [2]. The mains ground-noise voltages that can arise between different power outlets in a room are commonly in the range 1–100 mV [3].
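The common-impedance coupling just described can be put into rough numbers. In the sketch below (the component values are illustrative guesses, not figures from the text), the outlet-to-outlet ground voltage drives a current around the loop of Fig. 11.1, and the portion of that voltage dropped across the cable shield appears in series with the signal:

```python
def coupled_noise_mV(ground_diff_mV, r_shield_ohm, r_safety_ohm):
    """Noise injected by a ground loop via common-impedance coupling:
    the loop current times the shield's share of the loop resistance."""
    i_loop = ground_diff_mV * 1e-3 / (r_shield_ohm + r_safety_ohm)  # amps
    return i_loop * r_shield_ohm * 1e3   # millivolts dropped across the shield

# e.g. 10 mV between outlets, 0.02 ohm of coax braid, 0.1 ohm of
# safety-ground wiring: roughly 1.7 mV appears at the amplifier input
```

Even a modest outlet-to-outlet ground voltage can therefore swamp a signal in the low-millivolt range, which is why ground loops matter most in low-level work.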
They generally consist of 50/60 Hz line frequency voltages (usually, by a wide margin, the strongest component) and their harmonics, impulsive noise, and other miscellaneous disturbances. Furthermore, a.c. mains conductors in a building tend to act as antennas, so substantial radio-frequency noise will also be present. An example of a noise spectrum can be found in Ref. [2]. It is possible for power outlet grounds on different walls in a room to occasionally differ by as much as several volts [4]. Voltages differences between outlet grounds at different corners of a building can be 5 V or even more [5]. Circuits that contain ground loops, but which do not involve a.c. mains safety-grounds, are also vulnerable to unwanted pick-up due to magnetic induction. As an example (see Fig. 11.2), a shielded cable from one instrument bifurcates, so that the same signal can be sent into two different inputs on another device. If the ground loop thereby formed by the branched cable and the second instrument enclosure is penetrated by stray a.c. magnetic fields, a circulating current will be set up. This will cause noise voltages to appear at each of the two inputs. Stray 50/60 Hz fields, caused by power transformers and other items, frequently appear in room environments at levels of about 10 mG. In experimental apparatus, this sort of ground loop noise problem is usually much less severe and commonplace than those that involve a.c. mains safety-grounds. Although in most situations, ground loops are created as a result of d.c. (i.e. galvanic) ground contacts, this is not always the case. At high frequencies, the coupling between nearby conductors arising from their parasitic capacitances may be large enough to permit the formation of a ground loop. At low frequencies (e.g. 50 Hz), the large parasitic
inter-winding capacitances of power transformers, and those of the capacitors used in power line EMI filters, can be the primary source of problems if, for any reason, equipment is not connected to the safety-ground [2].

[Fig. 11.2: Ground loop involving bifurcated cable. Stray a.c. magnetic fields passing through the open area between the two branches cause currents to circulate around the enclosed loop. These currents create voltage drops along the cable shields, which cause noise voltages to appear at the amplifier inputs in device B.]
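For the magnetically induced variety of ground-loop noise shown in Fig. 11.2, Faraday's law gives the scale of the effect: a sinusoidal stray field of RMS value B threading a loop of area A induces an RMS EMF of 2πfBA. The sketch below uses the roughly 10 mG stray-field level mentioned in the text; the loop area is an assumed value:

```python
import math

def induced_emf_uV(b_rms_gauss, loop_area_m2, freq_hz=60.0):
    """RMS EMF (in microvolts) induced around a ground loop by a stray
    sinusoidal magnetic field, from Faraday's law: EMF = 2*pi*f*B*A."""
    b_tesla = b_rms_gauss * 1e-4           # 1 gauss = 1e-4 tesla
    return 2.0 * math.pi * freq_hz * b_tesla * loop_area_m2 * 1e6

# e.g. a 10 mG, 60 Hz field through a 0.05 m^2 bifurcated-cable loop
# induces on the order of 20 microvolts
```

This is much smaller than typical mains safety-ground noise, which is consistent with the remark above that this type of ground loop is usually the less severe of the two.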
Systems affected by ground loops

Ground loops are a notorious and very common source of noise in sensitive electrical measurements. They are probably the most frequent cause of interference in experimental work. The problem mainly affects electronic systems operating with low signal levels (in the low-millivolt range or less) and at low frequencies (much less than 1 MHz). High-frequency (greater than about 1 MHz), high-level analog (operating at levels of volts or more), and large-swing (e.g. 5 V) digital circuits are generally much less affected [3]. In the latter two cases, ground loops can be a problem if the high-level analog or digital signals are sent over long distances, in which case differences between the local ground voltages at the transmitting and receiving ends may become unacceptably large. For example, RS-232 computer interfaces can sometimes be susceptible to glitches, lockups, and crashes as a result of ground-loop noise [2]. Ground-loop voltages can also lead to heating of temperature sensors in ultra-low-temperature equipment.
Unexpected behavior

Ground-loop noise can be highly irregular and intermittent – appearing and disappearing unexpectedly. The cause of the noise is frequently very difficult to track down, even by those with extensive experience. Even if care is taken when equipment is initially interconnected to avoid creating ground loops, they frequently occur inadvertently and unexpectedly. Some potential causes are:
(a) electronic instruments coming into contact with each other (e.g. while resting on a tabletop),
(b) coaxial cable connectors touching each other, or touching grounded surfaces, such as an instrument rack or an optical table,
(c) outer jackets on shielded cables that are in physical contact with grounded metal surfaces being worn through by chafing,
(d) unauthorized borrowing and reinstallation of electronic devices and cables in an experimental setup,
(e) metal filings, swarf, bits of wire, etc., lodging in a crevice between two separately grounded surfaces, and
(f) floating experimental structures, such as cryostats, being connected to grounded objects via metal pumping lines.
Seemingly innocent changes to the wiring of an experimental setup (sometimes made in an attempt to eliminate noise problems) often lead to ground loops. Local changes to wiring can have global consequences in complex systems. Strangely, the addition of an extra ground loop in an already badly grounded system may sometimes reduce noise levels. This can occur because the extra conductor lowers the voltage drop along the signal ground path, or because the extra loop produces a voltage that cancels the noise generated by the original loop. (NB: This is not a good way of eliminating such problems.) Ground loops can be a particular nuisance in the case of electronic systems that must be regularly set up and dismantled (when, for example, they are used in conjunction with shared experimental facilities).
A potential cause of severe ground-loop noise

The miswiring of a.c. power plugs and receptacles can be an insidious cause of severe ground-loop noise problems [2]. If the neutral and ground wires on a plug or receptacle are interchanged, power return currents from electrical equipment (perhaps amounting to many amps) will travel along the safety-ground network. This can cause very large 50/60 Hz voltages to appear in electronic systems that have multi-point connections to the safety ground. Such faults do not produce a short circuit, and may go unnoticed. These ground currents can affect systems that are in different parts of the building from that of the miswired item, so locating the improper wiring may be difficult. A discussion of mains wiring problems of this type, including a method for determining their presence, can be found in Ref. [6]. One case of such a neutral–ground interchange involved a diffusion pump that was electrically connected to a cryostat through its support structure [7]. As a result of miswiring the plug on the pump, a current of 12 A at 60 Hz flowed through the structure. This caused the voltage on the cryostat itself to swing by 7 V with respect to ground.
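The voltage swing in such an incident follows directly from Ohm's law. A minimal sketch (the 0.58 Ω path impedance is back-calculated from the figures above; the actual impedance of the structure was not reported):

```python
def ground_path_voltage(return_current_a, path_resistance_ohm):
    """Voltage developed along a safety-ground path that is carrying
    misdirected neutral return current (simple Ohm's law estimate)."""
    return return_current_a * path_resistance_ohm

# 12 A flowing through an assumed ~0.58 ohm support-structure path
# reproduces the 7 V swing described above:
print(f"{ground_path_voltage(12.0, 0.58):.1f} V")  # → 7.0 V
```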
11.2.1.2 Avoiding ground loops

Planning ground systems and the use of ground maps

As indicated, the grounding of an electronic system is often left to chance when it is first constructed. Only when ground loops manifest themselves through degraded system noise
behavior is serious attention given to proper ground layouts. This is the wrong approach. The possibility of ground loops should be kept in mind before construction begins, and ground topologies should be planned. In building an electronic system for low-level, low-frequency operation, care must be taken not to make unnecessary ground connections, and to avoid situations that might lead to the formation of accidental grounds (such as some of the types described in the above list). If a ground must be created, the physical contacts should be of good quality (see Section 12.2.4). Keeping things simple is, once again, a virtue. Indeed, if an electronic setup already exists, merely simplifying it (e.g. by removing unused instruments) may be sufficient to eliminate any ground loops.
In a system containing both low- and high-signal-level sections, attention should be focused on preventing ground loops in the low-level ones. Ground loops in parts of the system where signal levels are more than a few volts are not likely to cause problems. Of course, a ground loop in a low-level section of the system may be completed through a high-level one.
Making sense of the grounding in the maze of cables and instruments that forms during the construction of an electronic system can be extremely difficult on the basis of visual appearances. Hence, an important tool in the design and creation of a ground network is the ground map [8]. This is a type of schematic diagram that illustrates all ground paths and ground reference points for the system. Only the ground structure should be included in the map. Other system functions and conducting paths (such as the center conductors of coaxial cables) should be absent, or presented in block form. Some possible ground paths are: cable shields, power cord safety grounds, instrument enclosures, electronic racks, pumping lines, and cryostats.

Fig. 11.3 Ground maps for the setups in Figs. 11.1 (a) and 11.2 (b).
Capacitively coupled ground paths through power transformers and power-line EMI filters may also be important. Ground maps of the setups in Figs. 11.1 and 11.2 are shown in Fig. 11.3. The creation and maintenance of a ground map, and its use in deciding how new interconnections are to be made in an electronic system, should preferably be the task of a single person.
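A ground map lends itself naturally to simple graph bookkeeping: separately grounded items are nodes, ground paths are edges, and a proposed connection closes a ground loop exactly when its two endpoints are already joined. The union-find sketch below is a hypothetical illustration of this idea (not a procedure prescribed by the text):

```python
class GroundMap:
    """Toy model of a ground map: nodes are grounded items (instrument
    chassis, racks, cryostats); edges are ground paths (cable shields,
    safety grounds, pumping lines). Union-find detects when a proposed
    connection would close a ground loop."""

    def __init__(self):
        self.parent = {}

    def _find(self, node):
        self.parent.setdefault(node, node)
        while self.parent[node] != node:
            # Path halving keeps the trees shallow.
            self.parent[node] = self.parent[self.parent[node]]
            node = self.parent[node]
        return node

    def connect(self, a, b):
        """Record a ground path; return True if it creates a ground loop."""
        ra, rb = self._find(a), self._find(b)
        if ra == rb:
            return True   # endpoints already joined: this edge closes a loop
        self.parent[ra] = rb
        return False

gm = GroundMap()
print(gm.connect("preamp", "safety ground"))   # False: first path
print(gm.connect("lock-in", "safety ground"))  # False: still a tree
print(gm.connect("preamp", "lock-in"))         # True: coax shield closes a loop
```

Capacitive ground paths could be modeled the same way, by adding edges for the parasitic couplings noted above.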
Fig. 11.4 Creation of a single-point ground by using a floating power source (i.e. no terminal in the preamplifier power source, which is a battery in this case, is at ground potential).
Single-point grounding

The most basic method of preventing ground loops is to avoid making ground contacts at more than one point along a ground path. That is, single-point grounding should be used, if possible. Referring again to Fig. 11.1, a ground loop can be avoided in this case by lifting the ground contact at the preamplifier (device A). Since the preamplifier is mains powered, the mains safety-ground wire provides the ground connection for the device. Some other means of providing power must be used if this ground is to be removed. This is because, for reasons of safety and legality, the safety ground on a mains-powered device must not be disconnected. Batteries are one possible source of ungrounded power (see Fig. 11.4). (These issues are discussed further on pages 360–362.) The ground connection at the lock-in amplifier (device B) may be lifted instead, but the same power-supply issues will be present (furthermore, lock-in amplifiers usually require significantly more power than preamplifiers).
If a shielded cable containing a pair of twisted wires is used to connect two instruments with balanced interfaces, one method of preventing a ground loop is to open the connection between the shield and ground at one end of the cable (see Fig. 11.5). This is feasible because (unlike the case with coaxial cable) the shield is not used as a signal conductor, but serves only to prevent the capacitive coupling of low-frequency EMI into the circuit. The reason why a shield that is part of a ground loop, but which is not a signal conductor, may cause problems is that (because of small imperfections in the cable) ground noise in the shield can be inductively coupled into the signal wires [2]. Lifting the ground at one end of the cable is often done by trimming back the cable shield within the connector, and covering it with heat-shrink tubing in order to isolate it from the connector shell.
A potential problem with this approach is that cables that are grounded at one end may be mixed up with those grounded at both ends. Hence, labeling of the two types is essential. Also, if good EMI shielding at radio frequencies is needed, this generally requires that cable shields be connected to ground at both ends. (Single-point grounding at radio frequencies is generally not desirable – see page 376.) This can be done without causing a low-frequency ground
loop by connecting a capacitor between the shield and ground at the cable end where the ground has been lifted [2]. A better approach, which does not lead to RF pigtails (see page 459), is to use double-shielded cable, in which the shields are isolated from each other, and to connect each shield to ground at opposite ends of the cable.

Fig. 11.5 Preventing ground loops in a balanced system by lifting the cable-shield ground connection at one end. (NB: Some workers dispute the need to do this.) The driver in device A is a differential output stage, while the amplifier in device B is a differential amplifier.

Although the measures described in the previous paragraph, and the need for them, are widely accepted by professional audio engineers (balanced interfaces are widely used in professional, as opposed to consumer, audio equipment), there is some dissent. Some workers in the audio industry believe that there is little need to worry about ground loops involving the shields of shielded twisted-pair cables in balanced systems [9]. They have found that such ground loops do not cause noise problems, as long as the cable shields are properly terminated to the equipment chassis. (See also the discussion in Ref. [2].) They maintain that, for the purpose of preventing RF interference, it is better to connect the shield directly to the chassis at both ends (without using a capacitor), and thereby allow any ground-loop currents to flow through the shield. Of course, such an approach also makes it much easier to construct and manage the cables. The important and often misunderstood issue of grounding cable shields is discussed in Ref. [10]. (See also the discussion on page 456.)
Ground loops that pass through vacuum pumping lines can be broken by insulating plastic centering rings and clamps, used in conjunction with O-ring seals, and by ceramic electrical breaks. If the case of an instrument that is mounted in an instrument rack must be isolated from the rack, this can be done with mounting insulators [11].
The provision of floating power

Since ground loops are often completed through (and derive much of their noise from) power-line safety grounds, the use of floating power supplies can be an important way of breaking such loops. Isolating a mains-operated power supply by removing its safety-ground connection is very unsafe (shock, electrocution, and fire are possible consequences) and often violates statutory electrical codes [2]. Safety-ground issues, in the context of ground loops, are discussed in Refs. [2] and [6].
A measure that is sometimes recommended in order to overcome the power-line ground problem is to plug the mains-powered apparatus into a "power isolation transformer." In such a device, the primary and secondary transformer windings are isolated from each other using a Faraday shield, which is connected to ground. This shield is supposed to greatly reduce the capacitance between the primary and secondary windings, and thereby allow the isolation transformer to provide a.c., as well as galvanic, isolation. In practice (because of the requirements of the electrical safety codes), the primary and secondary safety grounds must be connected together (see Fig. 11.6). Such an arrangement negates any benefits that might arise from the use of stand-alone power isolation transformers [2,12]. It can actually result in an increase in ground-loop noise problems, since the shield diverts noise currents into the safety ground. (Isolation transformers can be beneficial, however, if they are used in the signal path – see page 365.)

Fig. 11.6 Stand-alone power isolation transformer. Such a device will not solve ground-loop noise problems.

A very helpful and simple way of reducing the size of ground-loop noise voltages across cable shields is to place resistors or inductors in series with the power supply leads [13]. The use of series resistances is especially convenient if large amounts of power are not being drawn. Otherwise, inductors can be employed. For example, resistors can be installed in the low-voltage d.c. leads supplying power to a preamplifier. (See also the comments regarding semi-floating input grounds on page 364.) Such measures should never be carried out in such a way as to interfere with the operation of a.c. electricity-supply safety-grounds. Batteries are useful sources of floating power (see Fig. 11.4).
Some sensitive commercial instruments, such as low-noise preamplifiers, are provided with built-in rechargeable batteries, which can be used in place of the a.c. power supply when ground loops and/or a.c. power-line noise with other origins are a problem. The operating times of such devices may range up to several tens of hours on a single charge. If longer operation is needed, it is sometimes possible to use large external batteries. (Some sensitive instruments permit operation using external d.c. power sources.) Such batteries can be deployed in pairs, so that one set can be recharged while the other is being depleted. Gel-type lead-acid batteries (also known as "sealed lead-acid batteries") are a good choice for this – see Section 11.8.5. Their low energy densities (in comparison with other rechargeable batteries) are usually not an issue in this application.
Another way of providing floating power is to use an optical method. It is possible to transmit power in the form of light over optical fibers, at useful levels and with reasonable efficiency [14]. The technique involves the use of a diode laser to launch near-infrared light into one end of an optical fiber. At the other end of the fiber, the light is converted into electricity by a photovoltaic device (similar to a solar cell), which is designed to generate power efficiently at the wavelength of the laser. By this means, power can be produced that is (as in the case of batteries) completely isolated from the mains electricity ground. Power levels of 0.5–1 W per fiber can be transmitted over distances of more than 500 m using commercial modules [15]. The voltage generated at the receiving end is typically 5 V d.c. At the time of writing, such systems are very expensive, and would normally only be used in special circumstances. For instance, they can be employed in situations where it is necessary to transmit power to electronic devices at very high voltages with respect to ground. An alternative, and much cheaper, way of providing floating power by optical means is to use a combination of solar cells and light-emitting diodes (LEDs). However, these would not provide the ability to efficiently transmit the light through an optical fiber.
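The electrical power available at the far end of such a power-over-fiber link can be estimated by chaining the fiber attenuation (in dB) with the photovoltaic conversion efficiency. All numbers in this sketch are illustrative assumptions, not specifications of any commercial module:

```python
def delivered_power_w(laser_w, fiber_loss_db, pv_efficiency):
    """Electrical power at the receiving end of a power-over-fiber link:
    the optical launch power is attenuated by the fiber (in dB), then
    converted to electricity by a photovoltaic cell of the given efficiency."""
    return laser_w * 10 ** (-fiber_loss_db / 10) * pv_efficiency

# Assumed values: 3 W launched, 0.6 dB total fiber loss
# (e.g. ~1.2 dB/km over 500 m), 30% photovoltaic conversion:
print(f"{delivered_power_w(3.0, 0.6, 0.30):.2f} W")  # → 0.78 W
```

With these assumed figures the delivered power falls within the 0.5–1 W per fiber quoted above.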
Isolation of digital paths

Ground loops are sometimes completed through the grounds on data buses. Complete galvanic and a.c. isolation can be provided for such items very straightforwardly, using optically coupled isolators (or opto-isolators) [3]. These are inexpensive modules that use a light-emitting diode (LED) to transmit the digital signal optically across a gap to a photodiode or a phototransistor. Self-contained opto-isolator units are available for RS-232 and other computer interfaces. Digital isolators that employ transformer or capacitor coupling techniques are also made. Optical fibers can sometimes be useful in providing isolation. In selecting and using these, one must ensure that the isolation is not violated by any metal braid or armor coverings on the optical-fiber cable.
Opening ground loops in the signal path

The use of the aforementioned techniques may be impractical, or insufficient for preventing the formation of ground loops. A very effective, and widely employed, method of blocking ground loops in the signal path involves the use of differential amplifiers (see Fig. 11.7). In this arrangement (ignoring for the moment the R-C protection components illustrated), ground currents do not flow along signal-carrying conductors. The remaining problem is therefore one of rejecting common-mode voltages due to the differences in ground voltages between devices A and B. (Common-mode, or CM, voltages on two or more wires have the same polarity on each wire with respect to ground; differential-mode, i.e. DM or normal-mode, voltages on a pair of wires have the opposite polarity on each wire.) Differential amplifiers have the property of responding only to the voltage differences between signal lines, and (up to a point) ignoring common-mode voltages.

Fig. 11.7 Arrangement for preventing ground loops in an unbalanced system by using a differential amplifier. The resistor and 0.01 µF capacitor are used to limit swings in ground voltage, and to avoid consequential damage to the amplifier (see Ref. [3]). (These are optional, but desirable in situations where surges may occur.)

Good differential amplifiers can achieve a common-mode rejection ratio (CMRR)
of 90–100 dB [2]. Some types can provide 110 dB of CMRR under certain conditions. These devices offer a reasonably complete solution to ground loop problems in many cases. Some low-noise electronic instruments, such as lock-in amplifiers, are provided with differential input amplifiers. Stand-alone differential amplifiers and preamplifiers that are intended for use with oscilloscopes are commercially available. Self-contained low-noise differential preamplifiers for very sensitive (nV) measurements can also be obtained. It is possible to get high-quality differential amplifier and “balanced line receiver” ICs at moderate cost. However, these ICs are circuit components, not self-contained devices. Hence, their use normally requires a certain amount of electronic construction work (but not much) on the part of the user. The use of differential amplifiers with high input impedances is desirable, in order to avoid the deterioration of CMRR that can occur if the source impedances are high and unbalanced. In this context, an input impedance of a few tens of kilo-ohms could be considered “low,” while source impedances of hundreds of ohms and source impedance imbalances of tens of ohms may be considered “high.” Such source impedance parameters are perfectly plausible for actual electronic systems [2]. High-performance differential amplifiers are available with input impedances in the tens of mega-ohms range. Differential amplifiers are discussed in Refs. [3] and [10]. There are a number of reasons why differential amplifiers, useful as they are, cannot be considered a cure-all for ground-loop noise problems. First, a given differential amplifier will have a high CMRR only within a limited range of frequencies. Hence, it may be necessary to combine the use of differential amplifiers with other techniques to achieve the desired levels of noise rejection over a very large frequency range. 
Second, the intrinsic noise performance of differential amplifiers is generally not as good as that of single-ended types. This may be a consideration when extremely low-level signals (e.g. in the nanovolt range) are being measured. Third, differential amplifiers are not as common as single-ended ones, and a suitable differential amplifier may just not be available to hand when the situation demands it.
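The benefit of a given CMRR figure is easy to quantify: a common-mode ground-voltage difference is attenuated by a factor of 10^(CMRR/20) when referred to the amplifier input. A brief sketch (the function name is ours):

```python
def cmrr_residual_v(v_common_mode, cmrr_db):
    """Input-referred error left over when a common-mode voltage is
    imperfectly rejected by an amplifier with the given CMRR (in dB)."""
    return v_common_mode / 10 ** (cmrr_db / 20)

# A 100 mV ground-voltage difference, seen as common mode by an
# amplifier with 100 dB of CMRR, leaves only a 1 uV error signal:
print(f"{cmrr_residual_v(0.1, 100.0) * 1e6:.1f} uV")  # → 1.0 uV
```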
Fig. 11.8 Semi-floating input ground arrangement for reducing ground-loop noise.

Fig. 11.9 (a) Common-mode choke for reducing radio-frequency common-mode noise. (b) Schematic diagram. The device comprises a 1:1 transmission-line transformer formed by winding the cable around a ferrite toroid.

A configuration that is commonly used with sensitive commercial instruments to limit the effects of ground loops is the semi-floating input ground [4]. This involves raising the voltage of the cable shield above ground with a 10–100 Ω resistor (see Fig. 11.8). As a result, most of the ground-loop voltage appears across the resistor, rather than the shield. The resulting reduction in ground-loop signal is the ratio of the resistance of the shield to that of the resistor. If one assumes a shield resistance of 20 mΩ (for a 1 m length of RG-58 coaxial cable) and a 10 Ω resistor, this amounts to 2 × 10⁻³, or −54 dB. The semi-floating input ground technique is used with both single-ended and differential amplifiers.
The use of differential amplifiers to remove common-mode noise at radio frequencies may be ineffective, because their CMRRs tend to be greatly reduced at such frequencies [3]. If this is a problem, the use of a common-mode choke, in series with a differential amplifier, may provide a solution. A common-mode choke (see Fig. 11.9) can be created by wrapping a few turns of the interconnecting cable around a ferrite toroid. It presents a large inductive reactance to high-frequency common-mode noise, which can be bypassed to ground at the output end of the cable using a pair of small capacitors. However, because the conductors
are coupled together via what amounts to a 1:1 transformer, differential-mode signals are unaffected.

Fig. 11.10 Arrangement for breaking a low-frequency ground loop with an audio transformer.

Low-frequency ground loops can be broken by placing transformers in the signal path, as shown in Fig. 11.10. To avoid a.c. continuity caused by parasitic inter-winding capacitances, it is necessary to provide a Faraday shield (or "electrostatic shield") between the primary and secondary windings. This method is employed very frequently in audio systems. With a high-quality audio transformer, having a shield between the windings, and used correctly, it is possible to achieve CMRRs of 90–100 dB at 60 Hz [2]. If a transformer without a Faraday shield is used instead, the resulting CMRRs can be many tens of decibels lower.
Some advantages of transformers over differential amplifiers are: (a) they do not require a power source, (b) they are readily available at moderate cost, (c) they are simple, and particularly useful as a quick fix for ground-loop problems, (d) they tend to suppress RF interference, and (e) they are highly resistant to overvoltages that can easily destroy active electronic devices (see Section 11.6). The disadvantages include: (a) transformers tend to have a more limited frequency range, (b) signal distortion may be a concern, especially at very low frequencies and high levels, (c) they cannot be used at d.c., and (d) particular care must be taken to ensure that their load resistances and capacitances are suitable. Regarding the last point, even the capacitance associated with 60 cm of ordinary shielded cable (about 100 pF) can impair the phase and frequency response of many Faraday-shielded transformers [2]. Hence, their output leads should be kept short. Also, unless surrounded by magnetic shields, normal (i.e. non-toroidal) audio transformers are prone to interference caused by external a.c. magnetic fields. Transformer problems are discussed in Section 11.8.6.
Transformer-based devices for the prevention of ground loops in audio systems are widely available. These are sometimes referred to as “audio isolators” or “ground isolators.” In this context, “line input transformers” (but not “line output” types) are often Faraday shielded. The importance of using high-quality Faraday and magnetically shielded transformers for this purpose has been emphasized [2]. In some situations, large voltage differences between separate grounds are encountered. These may even reach kilovolt levels in very unusual circumstances [2,3]. Under such conditions, not just noise, but damage to equipment becomes a concern. In these cases
isolation amplifiers may be useful. Such devices use special methods to encode the information contained in a signal and send it across an insulating barrier. This approach produces almost complete galvanic and a.c. isolation, and gives the device the ability to withstand very high voltage differences between the input and output terminals. An example of how an isolation amplifier might be used is shown in Fig. 11.11.

Fig. 11.11 Conceptual scheme for preventing problems arising from large ground-voltage differences, using an isolation amplifier. Each side of the isolation amplifier must be provided with a separate d.c. power supply, which is isolated from the other (see the discussion in Ref. [3]).

One type of isolation amplifier uses the input signal to modulate a high-frequency pulse carrier, which is sent through a transformer, and demodulated on the other side. These transformer-coupled isolation amplifiers can (unlike ordinary transformers) operate at frequencies ranging from 120 kHz all the way down to d.c. Devices of this type can provide isolation of up to 3.5 kV between the input and output terminals. The ability of an isolation amplifier to prevent voltage differences between the two sides of the barrier from influencing the output voltage is sometimes expressed as a common-mode rejection ratio (CMRR) or as an isolation-mode rejection ratio (IMRR). Depending on how such devices are used, these parameters are often equivalent (the manufacturers should be consulted). For transformer-coupled isolation amplifiers, values of 120 dB at 60 Hz are possible.
Another form of isolation amplifier uses the input signal to modulate a high-frequency carrier, which is capacitively coupled across the insulating barrier. Demodulation is carried out on the other side of the barrier to produce the relatively low-frequency output signal. Such capacitively coupled isolation amplifiers can operate from 60 kHz down to d.c., and provide isolation of up to 3.5 kV.
It is possible to achieve IMRR values of 140 dB at 60 Hz with these devices. Isolation amplifiers are normally IC-like circuit components, which are not self-contained. Hence, the user usually has to do a small amount of electronic construction work in order to put them into operation. These devices are discussed in Ref. [3].
A potential pitfall with the use of transformer-coupled isolation amplifiers is the presence of noise in the form of voltage spikes on the output signal. These arise because of the impulsive nature of the carrier, and can be a problem for some sensitive experiments. Heavy filtering of the amplifier output, and attention to the internal grounding and d.c. power supply line bypassing of electronic circuits containing the isolation amplifier (as recommended by the manufacturer) may be necessary in order to prevent such disturbances. Capacitively coupled isolation amplifiers can also have significant output noise and ripple. Ground loops can be broken by placing optical fibers in the signal path. Analog fiber-optic links can be obtained as complete systems from commercial sources. These systems convert analog electrical signals to digital optical ones for transmission along an optical-fiber cable, and do the opposite at the receiving end.
Some methods of reducing the effects of unavoidable ground loops

If sensitive electronic devices must be interconnected, it is preferable that their power cords be plugged into the same outlet strip, or into a branch circuit dedicated to their use [2]. Although such an arrangement will not prevent ground loops from forming, it should at least reduce the potential severity of any resulting noise problems, by minimizing differential ground voltages. (Keep in mind that the grounds of power outlets on the same wall are not necessarily joined locally – their wires may run for many tens of meters or more before coming together [4].)
Any cable that carries signals, and is part of an unavoidable ground loop, should be kept as short as possible, in order to minimize its shield resistance, and hence the voltage drop along the shield for a given loop current. Any excess cable should not be coiled, since such coils can result in needlessly large ground currents due to magnetic induction by stray fields. It is preferable to bundle the cable in a non-inductive (i.e. zigzag) fashion, while taking care to ensure that the minimum bend radius of the cable is not exceeded (see page 453). Similarly, other unavoidable cable configurations that lead to ground loops (such as the type illustrated in Fig. 11.2) should be bundled in such a way as to minimize the area of the loop. Twisting cables together can be helpful in this respect. The reduction of interference from stray magnetic fields is discussed further in Section 11.2.3.
It is also a good idea to use signal cables with heavy-gauge cable shields (i.e. braided copper) in those places where they form part of a ground loop, again in order to minimize the total resistances of their shields [2]. In particular, cables that contain only thin foil shields should be avoided. (Such foil-shielded cables may also result in other reliability problems – see page 459.)
A very general point of principle applies to all low-frequency, low-level electrical measurements that are made at a single frequency (e.g. using a lock-in amplifier), regardless of whether or not the measurement system involved contains ground loops. Interference from the a.c. power line, whether the result of capacitive, inductive, or common-impedance coupling, can be greatly reduced, but it is very hard to avoid entirely. Hence, the measurement frequency should never correspond to the 50 Hz or 60 Hz power-line frequency or its harmonics.
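When choosing a lock-in frequency, candidate frequencies can be checked against the mains harmonics programmatically. A minimal sketch (the 2 Hz margin is an arbitrary assumption; in practice it would depend on the lock-in's filter bandwidth):

```python
def near_mains_harmonic(f_meas_hz, mains_hz=50.0, margin_hz=2.0):
    """True if a proposed measurement frequency falls within margin_hz
    of any harmonic (n >= 1) of the power-line frequency."""
    n = round(f_meas_hz / mains_hz)  # nearest harmonic number
    return n >= 1 and abs(f_meas_hz - n * mains_hz) <= margin_hz

print(near_mains_harmonic(150.0))        # True: 3rd harmonic of 50 Hz
print(near_mains_harmonic(137.0))        # False: safely between harmonics
print(near_mains_harmonic(119.5, 60.0))  # True: near 2nd harmonic of 60 Hz
```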
Fig. 11.12 Ground loops in an experimental setup can be detected and traced by using a ground-loop tester.
11.2.1.3 Detecting ground loops

The presence of 50 Hz or 60 Hz mains-frequency noise in an electronic system is a good indication that ground loops might be present. (Although such noise could also come from capacitive coupling.) As indicated earlier, finding the loops is frequently a difficult task. A ground map is often very helpful. The use of oscilloscopes to track down the noise can be a problem, because such devices may themselves create ground loops when they are connected to a circuit. Alternatively (as explained on page 357), they may actually reduce the noise caused by existing ground loops. Such difficulties can be overcome by using an oscilloscope with a true differential input amplifier. (The function that is provided on some oscilloscopes, which involves inverting one channel and adding it to the other, is generally inadequate for this purpose.) The use of a battery-powered oscilloscope is another approach.
A very useful and convenient device for determining whether or not a wire or cable forms part of a ground loop is a ground-loop tester (also known as an earth-ground clamp meter, earth-resistance clamp meter, or earth clamp). This is a hand-held instrument that clamps around a conductor and, if the latter is part of a closed conducting loop, measures the resistance of the loop. It works by using a current transformer to induce an a.c. voltage in the conductor, and a second current transformer (adjacent to the first, in the same unit) to detect any resulting current. With this method, it is not necessary to make any electrical contact to (or even touch) suspect cables. One can quickly and easily test conductors in an experimental setup (including small-diameter pumping lines) for the presence of a ground loop (see Fig. 11.12). As long as only a single simple ground loop is present, it is also possible to follow it around the apparatus. Loop resistances of between about 20 mΩ and 1 kΩ can be measured. Such devices will not detect capacitive ground loops.
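The principle of such a tester reduces to Ohm's law: the first transformer induces a known EMF in the loop and the second senses the resulting circulating current. A toy sketch (the values are invented for illustration; real instruments handle this internally and also compensate for loop reactance):

```python
def loop_resistance_ohm(induced_emf_v, measured_current_a):
    """Loop resistance as seen by a clamp-on ground-loop tester:
    a known EMF is induced in the conductor and the resulting
    circulating current is sensed; R = V / I (reactance neglected)."""
    return induced_emf_v / measured_current_a

# A 1 V induced EMF driving 2 A around the loop implies 0.5 ohm,
# comfortably inside the ~20 mohm - 1 kohm range quoted above:
print(loop_resistance_ohm(1.0, 2.0))  # → 0.5
```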
369
11.2 Electromagnetic interference
If a dedicated ground-loop tester is not available, a similar testing arrangement can be constructed using a toroidal coil and a clamp-on ammeter (or clamp meter). This is less convenient than a ground-loop tester, but is more versatile, and better suited to testing large systems. (However, ground-loop testers often have the ability to act as clamp-on ammeters.) A wire that might form part of a ground loop is passed through the central opening in the coil. An a.c. voltage is then induced in this wire by the coil, and the clamp meter is used to sense any resulting current. Details of this procedure (and others pertaining to the characterization of ground connections and ground networks) are provided in Ref. [6].

Another, similar, method can be used for tracing ground loops, even when the conductors are too large to accommodate a ground-loop tester or a clamp meter [16]. As before, a toroidal coil is used to induce an a.c. voltage in a conductor that could form part of a ground loop. The frequency of this voltage is between 90 Hz and 250 Hz. Current flow is detected by using a small non-toroidal (i.e. a normal EI type) iron-core transformer, with many turns of wire, which is mounted on a stick. The transformer is connected to the input of a standard tape recorder amplifier, which in turn is connected to a set of headphones. The operator who is carrying out the test dons the headphones, and moves the transformer near the conductor. If a ground loop is present, and current is passing through the conductor, the signal will be audible. By moving the transformer over the conductor, and other objects connected to it, the ground loop can be traced through the system. If the loop currents pass through particularly large objects, the resulting current densities (and the signal heard in the headphones) may be too small to permit detection.
Under such conditions, the signal strength can be increased by forgoing the use of the toroidal coil, and making electrical connections directly to the conductor. A signal generator can then be used to pass large a.c. currents through the loop. Another scheme for locating ground loops is described in Ref. [17]. This arrangement uses a ferrite split-core toroidal coil to induce an a.c. voltage in a ground conductor, and a Rogowski coil as a detector. The Rogowski coil is an air-core coil, which is wound on a flexible former. Hence, it can be conveniently deformed and wrapped around a group of cables that are suspected of being part of a ground loop.

One potential problem that can cause much trouble while trying to eliminate a ground loop in a complex system is the presence of multiple branches in the loop [17]. If a part of the loop outside the branched region has a high impedance relative to that of the branches (i.e. it acts as a current source), and if the branches have widely different impedances, loop currents will preferentially travel along the branch with the lowest impedance. If this is identified and eliminated, the current will hop over to another branch. In this way, the loop currents will remain until all the branches are found and removed.

In very large experimental setups, it may be desirable to have a permanent system for detecting ground loops. One such arrangement, designed for use in a fusion reactor facility, employs a system of current transformer pairs (similar to the arrangements just described) to sense the presence of a ground loop [16]. Upon detecting such a condition, the system sounds an alarm. If the ground loop occurred because of human activity, this may allow the worker who caused the condition to immediately reverse the action that led to it.
Electronic systems
11.2.2 Radio-frequency interference

11.2.2.1 Introduction

Although the presence of ground-loop voltages at 50/60 Hz and their harmonics is a more common problem, electromagnetic interference at radio frequencies (RFI) can also be a cause of noise in measurements and malfunctions of electronic devices and systems. "Radio frequencies" may be defined as those at which interfering signals can readily be generated, propagated through space and coupled into vulnerable equipment as electromagnetic waves. In practice, this means frequencies greater than roughly 10⁴–10⁵ Hz. (Other coupling paths that are important even at much lower frequencies (e.g. 50/60 Hz), including conduction along cables and near-field (i.e. magnetic- or electric-field) coupling, may also exist.) Electromagnetic energy with frequencies of much more than about 10⁹ Hz is usually not a cause of RFI problems.

RFI often manifests itself as aberrant behavior that occurs for no apparent reason. For example, an instrument reading may suddenly change to a different value, or become noisy or erratic. Even analog systems that operate at low frequencies can be affected by RFI (see page 372). Radio-frequency energy can also cause heating of thermometers in ultra-low-temperature equipment (see Section 9.12.3). Sensitive analog circuits (such as strain gauge amplifiers), operating at millivolt levels or less, are particularly susceptible to RFI, whereas digital circuits are usually unaffected by it.

An important aspect of RFI, which can make it difficult to cope with, is that the electromagnetic energy may arrive as radiation, and/or by propagation along mains power lines, from very distant sources. Since these distant sources are frequently completely outside the control of the worker whose equipment is affected,⁴ reducing the noise at the point of reception is often the only viable way of dealing with the problem.
Largely because of the irregular character of many of the sources (both near and far), RFI frequently appears to be intermittent and unpredictable. Paths from the source of the RF energy to the affected device can be very complicated. For example, sometimes the energy is picked up by building a.c. mains wiring, propagated along it, and then re-radiated. Multiple transitions between conduction and radiation of RF energy can sometimes occur [18].

Sometimes, the frequency of the RF energy will happen to coincide with the resonant frequency of some circuitry in the affected device (which may not be intended to act as a resonator). This can cause large-amplitude oscillations to take place even in the presence of low RF noise levels [3]. For instance, pairs of capacitors (tantalum and ceramic) that are commonly used for bypassing purposes can form parasitic tuned circuits, with resonant frequencies that range from about 10⁷ Hz to 10⁸ Hz.

Some important sources of RFI are [2,3,19]:

(a) commercial radio (especially FM) and TV transmitters – these can be very troublesome near large cities,
⁴ In contrast, for instance, the stray 50/60 Hz magnetic field produced by a power transformer is well localized, and can cause interference only in electronic devices or systems that are relatively close to the transformer. This often makes it much easier to reduce the interference at its source, which is generally the preferred method.
(b) switching power supplies (very common RF sources),
(c) high-voltage power lines (owing to arcing and corona discharges – especially near the sea and under humid conditions),
(d) lightning,
(e) arc welders (these can be troublesome even several hundred meters from a workshop containing a welder),
(f) devices in general that create electric sparks or arcs, such as thermostats, switches and relays (especially those that operate inductive loads), brush-type electric motors, and electric-discharge machining equipment,
(g) amateur and CB radio transmitters, commercial two-way radios, and mobile phones,
(h) radar transmitters,
(i) computer equipment and other digital devices,
(j) variable-speed drives for a.c. motors,
(k) fluorescent and mercury-vapor lamps, arc lamps, and gas lasers,
(l) devices that contain high-power switching components (e.g. SCRs and triacs), such as light dimmers,
(m) RF induction heating equipment,
(n) high-voltage apparatus (owing to arcs and corona discharges – see Section 11.3), and
(o) building facilities (air-conditioner compressors, elevator motors, etc.) – look out for those located in ceiling voids (see page 380).

Commercial broadcast transmitters do not necessarily operate at constant power levels. They may shut down or reduce power at night, for example.⁵

Computer equipment can be a major contributor to RFI. Despite the presence of statutory limitations on the amount of radio-frequency energy that such devices are allowed to emit, there are so many of them in a typical laboratory environment that they create an almost unavoidable "smog" of electromagnetic pollution.

Certain emitters of RF energy are inherently very broadband. These include devices that produce sparks, arcs, or corona discharges (as indicated above), and apparatus that creates electrical pulses, such as switching power supplies, computer equipment, and variable-speed drives for a.c. motors.
For example, the frequencies of emissions produced by corona discharges on high-voltage power lines can, when conditions are wet, extend continuously all the way from the lowest radio frequencies to over 100 MHz [20]. The emission frequencies of pulsed devices will correspond to harmonics of the fundamental pulse frequency. Even apparatus that is sensitive only within a narrow range of the radio-frequency spectrum may be affected by broadband emitters.

One example of a particularly annoying kind of RF generator in this category is the type of thermostat that uses a bimetallic strip to operate a set of contacts. These devices are relatively common, and (if they are faulty) can produce intermittent arcs that may continue for a minute or longer as their contacts slowly open [21]. Such thermostats are unobtrusive, and not obvious sources of interference. Hence, as producers of RFI they can be hard to identify.
⁵ In North America, many AM radio stations (so-called "Class D" or "Class B" stations) are required by law to shut down their transmitters, or greatly reduce power, between sunset and sunrise. This is done in order to avoid causing interference with the reception of other (higher-class) station signals.
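As a numerical illustration of the harmonic structure of pulsed emitters mentioned above, the following sketch evaluates the spectrum of a narrow pulse train directly. The 1 kHz pulse rate and sample rate are arbitrary illustrative choices, not values from the text.

```python
import math

fs = 1_000_000   # sample rate (Hz); this and the pulse rate are illustrative
f0 = 1_000       # fundamental pulse frequency (Hz)
n = 10_000       # 10 ms of signal: exactly ten pulse periods

# Narrow pulse train: one short pulse per period of a 1 kHz cycle.
pulses = [1.0 if math.sin(2 * math.pi * f0 * k / fs) > 0.99 else 0.0
          for k in range(n)]

def dft_mag(samples, freq_hz):
    """Magnitude of the DFT of the samples at a single frequency (direct sum)."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * k / fs)
             for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq_hz * k / fs)
             for k, s in enumerate(samples))
    return math.hypot(re, im)

# Strong components appear at harmonics of the pulse frequency, and
# essentially nothing at a frequency lying between harmonics:
print(dft_mag(pulses, 3 * f0) > 100 * dft_mag(pulses, 2.5 * f0))
```

The narrower the pulses, the more slowly the harmonic amplitudes fall off with frequency, which is why spark-like and switching sources are such effective broadband emitters.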
RF-stabilized arc welders produce both broadband and narrowband noise. The fundamental frequency of the narrowband part is typically 2.6 MHz [22]. Arc welders are common items in laboratory workshops, and can cause serious RFI problems. Interference from welders of this type is often caused by incorrect installation of the welding equipment.

Items of laboratory equipment can sometimes be strong emitters of RF energy. For example, certain CO2 lasers are excited by RF discharges at frequencies of several tens of megahertz. The power supplies used to produce these discharges generate powerful radio signals [23]. Radio-frequency induction heating equipment is often used in laboratories for the preparation of material samples. These devices, which work at narrowband frequencies ranging from a few hundred kilohertz to several megahertz, may operate at power levels of tens of kilowatts. Optical modulators based on Pockels cells can operate at frequencies of tens of megahertz, with possible power levels of several tens of watts. These can be troublesome sources of interference [24]. Also, RF sputtering systems used for thin-film deposition can emit radio energy (typically at 13.56 MHz). Other RF plasma systems, such as plasma etchers, can also cause problems.

Some types of experimental equipment make use of high-current pulses of very short duration. These include certain devices used in the study of ionized gases (plasmas), and pulsed lasers. Like lightning, these intense current pulses often result in the emission of bursts of radio-frequency energy with a large bandwidth.
11.2.2.2 Audio rectification: RF interference in low-frequency devices

Radio-frequency energy can cause interference in devices that do not even operate at radio frequencies. Apparatus operating at low frequencies (even d.c., as in the case of the aforementioned strain gauge amplifier) can be affected, generally because of rectification of the RF energy [8]. This effect is called "audio rectification," even though low-frequency apparatus working outside the audio range may be involved. It occurs most often because the dynamic range of some (normally linear) device in the apparatus is exceeded by the RF energy, so that saturation occurs. As a result, a d.c. bias is created in the circuit, which is modulated by the envelope of the RF waveform. When the device is operating in a nonlinear regime, this also affects how it processes signals that are normally present in the apparatus. Rectification can also take place if some nonlinear component (such as a diode) is present in a circuit. It may also happen if the circuit contains bad electrical contacts, such as poor solder joints or corroded connectors. Audio rectification is discussed at length in Ref. [25].

Low-frequency electromagnetic energy (e.g. due to 50/60 Hz a.c. pickup) can also lead to rectification-related interference problems [26]. For instance, it can result in offsets or noise in d.c. nanovoltmeters, and other sensitive d.c. instruments.
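The saturation mechanism described above can be shown in a minimal numerical sketch. All frequencies and signal levels here are illustrative assumptions, not values from the text: an amplitude-modulated RF carrier is passed through a stage that saturates asymmetrically, and a d.c. offset appears at the output that was absent at the input.

```python
import math

fs = 1_000_000                 # sample rate (Hz); all values are illustrative
n = 20_000                     # 20 ms of signal
f_rf, f_mod = 100_000, 100     # RF carrier and low-frequency envelope (Hz)

def rf_in(k):
    """Amplitude-modulated RF carrier: the envelope swings between 0.5 and 1.5."""
    t = k / fs
    return ((1 + 0.5 * math.sin(2 * math.pi * f_mod * t))
            * math.sin(2 * math.pi * f_rf * t))

def saturate(x):
    """Asymmetric saturation: the stage clips at +1.0 but not on negative swings.
    (A perfectly symmetric limiter would produce no offset; real saturation
    is rarely symmetric.)"""
    return min(x, 1.0)

mean_in = sum(rf_in(k) for k in range(n)) / n
mean_out = sum(saturate(rf_in(k)) for k in range(n)) / n

# The input averages to zero, but the saturated output acquires a d.c. bias,
# which in a real circuit is modulated by the RF envelope:
print(abs(mean_in) < 1e-6, round(mean_out, 3))
```

The offset only appears during the part of the modulation cycle in which the envelope exceeds the clipping level, which is why the rectified output follows the envelope of the interfering RF waveform.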
11.2.2.3 Preventing radio-frequency interference

Precautionary measures

If an instrument or system is being constructed, and there is a good chance that it will be affected by RF noise in the environment (or will itself generate RF energy), steps
should be taken at the beginning to address the potential problems. Although fixes can be applied to existing devices after RFI problems are detected (see Refs. [19] and [27]), the results are not likely to be as satisfactory as they might have been had the appropriate countermeasures been taken earlier. For instance, sensitive electronics should be placed in a shielded enclosure (or cabinet), and low-level analog electronics should not be put in the same enclosure as noisy digital ones. In those cases where it is feasible to suppress the emission of RF energy at its source, this is usually the easiest and least-expensive strategy [13]. In taking such an approach, a potentially large number of devices and systems that may be affected by the RFI are thereby protected. Furthermore, additional attenuation of the RF can be carried out at the point of reception by items that are particularly sensitive to interference. However, as indicated previously, the suppression of RF at its source is often either impractical or impossible. If the source of the RF energy is close by, it may also be possible to greatly reduce interference by increasing the separation between it and the vulnerable items. Again, this may not always be practical. Furthermore, the RF energy can travel via very circuitous routes in some cases (through building power wiring, for example), and hence the strength of the RF interference may not be a simple function of distance.
Shields

An important method of attenuating RF is to place a conducting shield (made of, e.g., steel or copper) around an affected electronic device. Just because an enclosure is made of metal does not necessarily mean that it will act as an electromagnetic shield. For instance, in order to be effective as a shield, an enclosure must be grounded to the common of the electronic system [10]. In practice this means, for example, that the shields of cables that enter such an enclosure must be connected directly to it. Furthermore, this must be done in a way that does not compromise the effectiveness of the cable shield at this junction (i.e. without creating "pigtails" – see page 459).

The performance of RF shielding depends on the maximum linear dimensions of any holes in the shield, and not (perhaps surprisingly) on the areas of the holes. Hence, even narrow slot openings in a shield can cause problems if they are long enough. A long slot that is too narrow to permit the entry of a sheet of paper can allow almost as much RF energy to pass as would occur if no shield were present [18]. The reason for this is that a long slot in a shield disrupts magnetically induced shielding currents (see Fig. 11.13), and thereby acts as an efficient antenna (a so-called "slot antenna") [10]. Maximum transmission takes place if the length of the slot is half the wavelength of the RF energy.⁶

Leakage of RF energy commonly occurs through seams and joints in the shielding (which can act as good slot antennas), and generally not by penetration of the shielding material itself. Unless the materials at these places are bonded together (e.g. by soldering or welding), the edges should overlap, and be in good electrical contact. Holes required for controls, meters, and other items that must penetrate a shielded enclosure can cause difficulties unless appropriate
⁶ The relationship between the attenuation provided by the shield (S, measured in dB), the RF wavelength (λ), and the length of the slot (l) is S = 20 log₁₀(λ/2l), so long as l ≤ λ/2.
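The footnote's formula can be evaluated directly. In this sketch, the frequencies and slot lengths are chosen purely for illustration:

```python
import math

C = 3.0e8   # speed of light (m/s)

def slot_attenuation_db(freq_hz, slot_len_m):
    """Shielding attenuation S = 20*log10(lambda/2l) of a slot, for l <= lambda/2."""
    wavelength = C / freq_hz
    if slot_len_m >= wavelength / 2:
        return 0.0   # a half-wave (or longer) slot radiates essentially freely
    return 20 * math.log10(wavelength / (2 * slot_len_m))

# A 10 cm seam at 100 MHz (lambda = 3 m): about 23.5 dB of attenuation.
print(round(slot_attenuation_db(100e6, 0.10), 1))
# The same seam at 1.5 GHz (lambda = 0.2 m) is a half-wave slot: no shielding.
print(slot_attenuation_db(1.5e9, 0.10))
```

This makes the text's point concrete: the same physical seam that is a reasonable shield at one frequency can be completely transparent at another, regardless of how narrow it is.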
steps are taken during construction. Screws that hold together shielded enclosures can be major sources of trouble if they become loose (see also Section 12.2.4).

Fig. 11.13 Magnetically induced shield currents in a portion of an RF shield with and without a rectangular slot (see Ref. [10]). It can be seen that narrowing the slot in the direction of the arrows will not appreciably reduce the large effect it has on the current distribution. [The figure shows the shield surface (in plane), the induced shield currents, and the slot.]

If holes are needed for ventilation, it should be kept in mind that having many small holes will result in less RF leakage than using a smaller number of large holes with the same total area. The creation of ventilation holes in a shield must be done in a certain way in order to obtain the best shielding effectiveness [28]. A perforated metal shield can provide at most about 40 dB of attenuation against RF fields. If more attenuation is needed, air can be brought into an enclosure through a waveguide beyond cutoff (which is essentially a long narrow tube), or through a collection of such waveguides. Shielded-enclosure cooling issues are discussed in detail in Ref. [28].

A variety of RF shielded enclosures are available from commercial sources. Considerations for making such items can be found in Refs. [10], [18] and [28]. Methods for improving inadequate or deteriorated shielding are discussed in Refs. [18], [19], and [28].

Radio-frequency noise often enters or leaves devices through their leads (e.g. signal cables, power cords, etc.). Unless they are shielded, such conductors can act as antennas. Both the input and output leads of a device (such as an amplifier) can act as paths for the entry of noise [3]. The use of cable shielding is important whenever cables are used to carry low-level signals, or signals that are to be measured to high precision. It is also useful in preventing capacitive coupling of low-frequency noise into conductors (which is often a problem if circuit impedances are high).
Power cords are generally not shielded, and so it is necessary to use a.c. line filters with these (see below). Shielded enclosures and cables can be expected to provide attenuation levels of 40–60 dB or more [19]. RFI problems can arise because cable shields or the shield end-connections have deteriorated. Cable shielding issues are discussed further in Section 12.4.
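To see why waveguide-beyond-cutoff vent tubes are attractive, a commonly quoted rule of thumb can be applied (this 32(L/d) figure is a standard approximation, an assumption here rather than a value given in the text): a circular tube of diameter d, used well below its cutoff frequency, attenuates by roughly 32 dB per diameter of length.

```python
# Rule-of-thumb attenuation of a circular "waveguide beyond cutoff" vent tube:
# roughly 32*(L/d) dB, valid well below the tube's cutoff frequency.
# (A standard approximation, not a figure taken from the text.)

def tube_attenuation_db(length_m, diameter_m):
    """Approximate below-cutoff attenuation of a circular vent tube, in dB."""
    return 32.0 * length_m / diameter_m

# A 10 mm diameter tube, 40 mm long, gives on the order of 128 dB, far beyond
# the ~40 dB ceiling quoted above for a simple perforated shield.
print(tube_attenuation_db(0.040, 0.010))
```

A honeycomb of many such small tubes provides the same attenuation while passing a useful volume of air, which is why this construction is common in commercial shielded enclosures.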
Filters

Radio-frequency filters are necessary in order to prevent noise carried by conductors from entering or leaving a shielded enclosure. Depending on the source and load impedances,
shunt capacitors, series inductors, or combinations of these elements may be used to block RF noise [19]. Feedthrough capacitors (possibly used in combination with inductors) can be very effective in this application (see Refs. [10,19]). Such capacitors are designed to be mounted directly in a sheet-metal panel. For example, high levels of RF attenuation can be provided by the arrangements shown in Fig. 11.14.

Fig. 11.14 Feedthrough filters for passing conductors into a shielded enclosure. The appropriate component combination depends on the impedances of the circuits outside and inside the enclosure (see Ref. [19]). "Medium impedance" corresponds to, e.g., 100–300 Ω. Radio-frequency attenuations of more than 40 dB are possible with the C-L-C or L-C-L configurations. [The figure shows a shielded-enclosure wall penetrated by combinations of a toroidal RF inductor and a feedthrough capacitor, for high- and low-impedance circuits on either side.]

The addition of RF filters to low-frequency balanced circuits can be problematic. Since the components in such filters are generally not matched accurately, they are likely to unbalance the circuit, at least to some extent, and thereby significantly reduce its ability to reject low-frequency (e.g. 50/60 Hz) common-mode interference. This may or may not be a problem, depending on the situation. A better solution in some such cases is to forgo the use of filters, and employ shielded cables to keep out troublesome RF energy.

Capacitors and inductors exhibit non-ideal behavior that limits their utility at high frequencies. Capacitors act like inductors at sufficiently high frequencies, because of the parasitic inductances of their leads. Inductors act like capacitors under such conditions, owing to the parasitic inter-turn capacitances of their windings. At a certain frequency, capacitors and inductors will self-resonate, because of these parasitic effects. As the frequency rises above the resonant point, such effects grow increasingly dominant, and the component in question becomes useless for its intended purpose.
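The self-resonant frequency follows from the familiar resonance formula f = 1/(2π√(LC)). A quick sketch, with illustrative component values:

```python
import math

def self_resonant_freq_hz(c_farads, l_henries):
    """Series self-resonant frequency of a capacitor with parasitic lead inductance."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henries * c_farads))

# A 10 nF capacitor with 10 nH of lead inductance (roughly 1 cm of wire leads):
f_srf = self_resonant_freq_hz(10e-9, 10e-9)
print(round(f_srf / 1e6, 1))   # ~15.9 MHz; above this the part looks inductive
```

This is why keeping leads short matters: halving the lead inductance raises the useful frequency range of the capacitor by a factor of √2, and feedthrough capacitors, with almost no lead inductance at all, remain capacitive to much higher frequencies.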
At low frequencies, a particular capacitor or inductor type may not be available with sufficiently large values to allow it to work effectively as a filtering element. Hence, capacitors and inductors should be selected with knowledge of the frequency ranges over which they are useful. For instance, Mylar capacitors can be employed from about 100 Hz to 8 MHz, whereas low-loss ceramic
capacitors are effective from approximately 2 kHz to 10 GHz (as long as their leads are kept short) [10]. Aluminum electrolytic capacitors are useful from 1 Hz only up to about 25 kHz. In the case of capacitors, frequency limitations can often be overcome by bypassing the low-frequency component with one that works at higher frequencies. Feedthrough capacitors have very low parasitic inductances, compared with ordinary capacitors with wire leads, and are therefore well suited for blocking RF noise at the highest frequencies. These issues are discussed in Refs. [10] and [19].

A variety of self-contained RFI filter modules, of various degrees of sophistication, can be obtained from commercial sources. These include shield feedthrough filters, and devices that are intended for mounting on an instrument chassis or a printed circuit board. Radio-frequency attenuation levels of 30–80 dB can be provided by such modules, depending on the frequency and the number of filter stages.

As mentioned earlier, power cables can act as transmission lines for RF noise. For example, RF energy created in one part of a laboratory by some noisy device may travel through the a.c. mains wiring, and interfere with the operation of a sensitive instrument located elsewhere. In a large proportion of RFI cases in general, RF interfering signals travel at least part of the way from their source to the victim device or circuit via a.c. mains wiring. It is possible for RF energy to travel for several kilometers along power lines. Generally, all mains-powered electronic equipment should have a.c. line filters (or "power-line filters") [19]. This is particularly important for equipment that is sensitive to RF (such as low-signal-level analog devices), or which produces it (especially switching power supplies). (The use of such filters is also discussed on page 86.) It is possible to supply power to small electronic devices optically – see page 362.
In the case of cables that form part of a low-impedance circuit, a very simple and often effective method of suppressing RF noise is to wrap them around a ferrite toroid (see Fig. 11.9). The resulting configuration is a type of common-mode choke (see the discussion on page 364). This method works both for balanced (e.g. twisted pair) and unbalanced (e.g. coaxial) cables. The use of these devices is discussed in Refs. [18] and [19].
Radio-frequency grounding

The grounding of apparatus and cables in the presence of high-frequency RF energy is done differently from low-frequency grounding. At high frequencies (much above about 1 MHz), multi-point grounding of shields is frequently required in order to ensure that these are at system ground (or common) potential [10]. Hence, cable shields used to screen out RF fields (as opposed to low-frequency or electrostatic ones) must be grounded at both ends of the cable. Furthermore, at these frequencies multi-point grounding is hard to avoid anyway, because of the stray capacitive coupling that occurs between conductors that are galvanically isolated. Consequently, low-frequency ground-loop notions are generally (and justifiably) ignored when the noise or signals of interest are in the high-frequency RF range, and as many ground contacts are implemented as are needed in order to achieve adequate shielding. Generally, ground connections should be made to a cable shield at intervals of one-twentieth to one-tenth of the wavelength of the RF noise or signal on the shield [18].
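The λ/20 to λ/10 guideline translates into concrete spacings as follows. The 100 MHz example is illustrative; the free-space wavelength is used, and the velocity factor of a real cable would shorten the wavelength, and hence the required spacing, somewhat.

```python
# Shield grounding interval of lambda/20 to lambda/10, per the guideline above.
C = 3.0e8   # speed of light (m/s); free-space value, ignoring the cable's
            # velocity factor, which would shorten the wavelength somewhat

def ground_spacing_m(freq_hz):
    """(min, max) recommended shield-grounding interval in meters."""
    wavelength = C / freq_hz
    return wavelength / 20, wavelength / 10

lo, hi = ground_spacing_m(100e6)   # at 100 MHz, lambda = 3 m
print(lo, hi)                      # ground the shield every 0.15-0.3 m
```

At mains frequencies the same rule would give spacings of hundreds of kilometers, which is one way of seeing why single-point grounding is practical at 50/60 Hz but meaningless at VHF.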
If RFI in low-frequency apparatus is a concern, then multi-point grounding can be achieved at high frequencies by making ground connections through small-value capacitors, which have low impedances at such frequencies. This permits single-point grounding to be implemented at low frequencies. For some purposes, suitable capacitors are 2.2–10 nF / 1000 V ceramic types, with very short leads (to minimize lead inductance) [19]. Such an arrangement is referred to as hybrid grounding. (See also the discussion on pages 359–360.)

The presence of ground loops (see Section 11.2.1) can allow RF noise to enter devices via cables. Hence, a strategy for reducing RF interference (especially involving RF noise at frequencies less than about 1 MHz) should include looking for, and breaking, ground loops. The latter is done by implementing single-point grounds, using signal transformers or differential amplifiers, installing opto-isolators, and so on – as discussed earlier in the chapter. As indicated above, the usual low-frequency methods for breaking ground loops tend to be ineffective when high-frequency RF noise is an issue. In such situations, the use of well-shielded coaxial cables or balanced signal-transmission arrangements (see Section 11.2.4), and/or common-mode chokes, is much more effective at reducing ground-loop problems [19]. At the highest frequencies, the use of well-shielded cables is by far the most effective approach.
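The reason a small capacitor implements hybrid grounding is its impedance, |Z| = 1/(2πfC): large at mains frequency, small at RF. A quick check with the 10 nF value from the range suggested above (the two test frequencies are illustrative):

```python
import math

def cap_impedance_ohms(freq_hz, c_farads):
    """Magnitude of a capacitor's impedance, 1/(2*pi*f*C)."""
    return 1.0 / (2 * math.pi * freq_hz * c_farads)

C_BYPASS = 10e-9   # 10 nF, at the top of the 2.2-10 nF range suggested above

print(round(cap_impedance_ohms(50, C_BYPASS)))       # ~318 kOhm at 50 Hz
print(round(cap_impedance_ohms(10e6, C_BYPASS), 2))  # ~1.59 Ohm at 10 MHz
```

At 50 Hz the connection is effectively open (preserving the single-point ground), while at 10 MHz it is effectively a short to the shield, provided the lead inductance is kept small enough not to spoil the low impedance.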
Switching vs. linear power supplies

A frequently problematic local source of RFI in laboratories is the switching power supply (or "switcher"). These devices are popular because they provide a large power density (both in terms of mass and volume), with very little heat dissipation, compared to linear power supplies [3]. Switchers are commonly used in many different types of equipment, including computers. If RFI is a problem, linear power supplies should generally be used wherever possible. (This is usually not practical with computers.) As mentioned earlier, any switching power supply must be provided with an a.c. line filter, unless the power supply already contains such a device. Hybrid linear/switching power supplies are available (e.g. for running superconducting magnets), which are claimed to produce very little RF noise, while retaining most of the beneficial characteristics of ordinary non-hybrid switchers.

The electronic ballasts used in certain types of fluorescent lamp are essentially switching power supplies, and can also produce RFI. This may be reduced by using fluorescent lamps with ordinary ballasts, or (even better) incandescent lights. (Even fluorescent lamps that contain ordinary non-electronic ballasts can be sources of RFI, because they make use of a gaseous electrical discharge.)
Shielded rooms

Sometimes, the environmental RF noise level, or the sensitivity of experimental equipment, is so high that the above measures are insufficient to provide adequate protection against interference. In such cases, shielded rooms (often called shielded enclosures) are sometimes employed. These are large metal (often steel or copper) boxes that completely surround the experimental system and its occupants. They make use of special doors, windows, and other
fixtures that allow the room to be used in a more-or-less normal way, while providing up to 120 dB of attenuation of RF fields, at frequencies ranging from about 100 kHz to several gigahertz. Shielded rooms are also used in some cases to confine electromagnetic radiation produced by very noisy devices. For instance, the strategy in the author's laboratory is to place all very noisy equipment (e.g. induction heating devices, sputtering systems, etc.), belonging to the various research groups, in a large communal shielded room.

While effective, shielded rooms tend to be unpleasant places to work in. They are often cramped, and (because of their metal walls) frequently have very disagreeable acoustic properties. Shielded rooms are also very expensive, and are generally used only as a last resort. The design, construction, care, testing and repair of these structures is discussed in Ref. [28]. At the cost of reduced shielding performance, it is possible to turn ordinary rooms into shielded ones by applying special conductive fabric to the interior surfaces [19]. This results in attenuation levels of 30–50 dB over a broad frequency range.

In experimental work at low temperatures, it is often possible to use metal cryostats as shielded enclosures for the low-temperature apparatus. This is discussed on page 302.
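For comparison with the figures quoted above, shielding attenuation in dB converts to a field ratio via the standard definition S = 20 log₁₀(E_outside/E_inside):

```python
# Convert a shielding attenuation in dB to a field-reduction factor, using the
# standard 20*log10 definition for field quantities.

def field_ratio(attenuation_db):
    """Field reduction factor for a given shielding attenuation (dB)."""
    return 10 ** (attenuation_db / 20)

print(field_ratio(120))   # 120 dB shielded room: a million-fold field reduction
print(field_ratio(40))    # 40 dB (conductive fabric, lower end): a factor of 100
```

The logarithmic scale compresses an enormous range: the difference between a fabric-lined room and a purpose-built shielded room is a factor of ten thousand in field strength.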
11.2.2.4 Detecting and locating RF noise in the environment

The presence of an RFI problem is often established by correlating the unwanted behavior of the electronic equipment with the operation of a known source of radiation (such as a radio transmitter, or an arc welder) [19]. An important early step in understanding RFI is to see how it manifests itself within the electronic equipment (whether it is broadband or narrowband, the frequency, the repetition rate for pulsed interference, etc.). This is done with the aid of an oscilloscope or (preferably) a spectrum analyzer. Ordinary oscilloscope probes may not be useful for this, because their high capacitances can load the circuit under investigation, and suppress any RF noise. An H-field probe (which does not require electrical contact with a circuit) is superior from this standpoint. Clamp-on current probes are convenient tools for sensing RF noise on power cords, ground wires, and signal cables. Various H-field probes can be purchased from commercial sources, or made in-house [19].

The locating of RF sources can sometimes be beneficial. However (as mentioned earlier), the transmission path between the source of RF energy and the affected device may be highly convoluted. This can sometimes make the task of locating very difficult in practice. Also, especially in the case of certain narrowband sources (such as broadcast transmitters and mobile radios), knowing the position of the source will probably not be helpful, since these generally cannot be modified to reduce the emissions. In such cases, it is usually better to take the necessary steps to eliminate the interference at the point of reception, rather than worry about where it is coming from.

As discussed above, many important sources of RFI have a broad frequency spectrum. A simple way of detecting and locating these involves the use of an ordinary handheld AM radio [29,30].
With the radio tuned between stations at the low end of the AM band, one listens for the buzzing, rasping, or frying sounds that are often characteristic of such sources. (Holding the radio near a computer keyboard will provide an example of what to expect.) The presence or absence of these sounds can be correlated with the equipment misbehavior to establish whether broadband RF energy is the culprit. Since the ferrite stick antenna in the radio is directional, the heading of the source can often be estimated. A direction perpendicular to the orientation of the ferrite stick (which is normally aligned with the longest side of the radio enclosure) will usually have the greatest sensitivity. By moving around a building with the radio in hand, local broad-spectrum sources may be found. Direction finding with an AM broadcast radio may not be possible if the RF noise is too intense. If this is the case, an AM aircraft-band pocket scanner (which operates at a higher frequency, where typical broadband noise intensity will be smaller) can be used instead. If a source is found, it might be possible to take corrective action. For example, a faulty thermostat can be replaced, an RF filter may be added to a fluorescent lamp, or bypass capacitors can be attached to a brush-type electric motor. A variety of measures for dealing with noisy electrical and electronic devices, including computers, are discussed in Ref. [27]. A different approach is generally needed to detect and locate sources of radiation with a narrow spectrum. The frequency of the interference may make it possible to characterize the source without much additional effort. For example, it may lie within the ranges allocated by the regulatory authorities for use by broadcast transmitters, mobile radios, or the emissions of industrial, scientific, and medical equipment (the ISM bands). (The latter would include devices such as RF-excited CO2 lasers and RF sputtering systems.) The presence of an RF signal at this frequency in the environment (i.e. not created internally by some malfunction of the affected equipment), and its character, can sometimes be established by using a commercial multi-band (AM, short-wave, FM, etc.) radio receiver.
Direction finding is sometimes possible with such receivers, at least in the lower frequency ranges. If the RF fields are sufficiently strong, they can be detected using a standard telescopic antenna (or a metal rod about 0.75 m long), oriented vertically, loaded by a 50 Ω coaxial load, and connected to an oscilloscope or a spectrum analyzer [19]. In order to improve sensitivity and accuracy, it is best to place a 0.75 m × 0.75 m ground plane (see page 389) under the antenna, and attach this to the coaxial ground. This approach allows signals of any frequency (not just those in certain standard communications bands) to be detected. Radio-direction-finding techniques can be used to locate narrowband sources (see Ref. [29]). This can be done by using a simple homemade loop antenna connected to an oscilloscope or a spectrum analyzer.
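The probe-loading caveat mentioned earlier in this subsection is easy to quantify: the magnitude of a capacitor's impedance is 1/(2πfC), so a capacitance that is invisible at mains frequency can be nearly a short circuit at RF. A sketch (the 100 pF probe capacitance is an illustrative value, not a figure from the text):

```python
import math

def capacitive_reactance(freq_hz: float, cap_farads: float) -> float:
    """Magnitude of the impedance of a capacitance: |Z| = 1/(2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * freq_hz * cap_farads)

# A passive scope probe might present roughly 100 pF (assumed value).
# At 60 Hz this is a negligible load; at 100 MHz it heavily loads the circuit:
print(capacitive_reactance(60, 100e-12))     # ~26.5 megohms
print(capacitive_reactance(100e6, 100e-12))  # ~16 ohms
```

This is why a non-contacting H-field probe is often the better tool for examining RF noise.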
11.2.3 Interference from low-frequency magnetic fields

11.2.3.1 Affected items

Some laboratory items may be unusually sensitive to inductively coupled interference from stray magnetic fields. These include conductors that form ground loops (see Section 11.2.1), wires and cables used for low-level signals, inductors, signal transformers, and wire-wound resistors.
11.2.3.2 Sources of magnetic fields

Some sources of large magnetic fields are: non-toroidal (i.e. EI core) linear and switch-mode power-supply transformers, chokes used with gas-discharge lamps and fluorescent lights, and a.c. mains wiring [3,8,21]. The transformer category includes those used in the small stand-alone power supplies that are often employed to operate low-power devices (i.e. "a.c. power adapters", or "wall warts"). These are inconspicuous, but produce surprisingly large stray fields. Cheap, low-quality power transformers in general often generate high stray magnetic fields. Large power transformers located in laboratory substations can also be troublesome [21]. CRT-based computer monitors produce significant stray fields, because of their electron-beam steering coils. Electrical equipment mounted in the spaces between drop ceilings and true ceilings in a room can be an insidious source of electromagnetic interference [31]. These regions can contain all kinds of high-power devices, including transformers, elevator motors, and complete heating, ventilation, and air-conditioning (HVAC) systems. Large superconducting magnets can be sources of high stray magnetic fields, but the latter are generally either very slowly varying or of constant intensity.
11.2.3.3 Prevention of interference

The strength of low-frequency magnetic fields diminishes very rapidly with distance from the source. Hence, the most general solution to magnetic interference problems, and often the easiest, is to increase the distance between the source and the affected items. Power supplies that contain toroidal transformers, rather than the more common EI core ones, should be used in problematic locations if possible, since the stray fields produced by well-designed toroidal devices are comparatively small. In the case of pickup by conductors, matters can also be improved by decreasing the open loop area formed by the conductors (e.g. by twisting wires – see page 461). Toroidal signal transformers and inductors are much less susceptible to magnetic pickup than other types. (Unfortunately, toroidal low-frequency signal transformers are relatively uncommon – see Section 11.8.6.) It is possible to obtain wire-wound resistors that are made in such a way as to minimize inductance [30]. Changing the physical orientation of sensitive items is often a very useful way of reducing magnetic pickup [32]. In certain situations, it may be necessary to use magnetic shielding. The construction of shields for screening low-frequency magnetic fields is more difficult than making RF shields, and is usually done only when other approaches have been tried. Low-frequency magnetic shields are normally made of high-permeability ferromagnetic metals. For screening against fields at the lowest frequencies and d.c., special alloys such as mumetal are used, whereas at higher frequencies (above a few kilohertz, and depending on the thickness of the shield) low-resistance materials such as steel are better. (Inordinate thicknesses of steel are needed in order to provide substantial shielding at the lowest frequencies.) At radio frequencies, even nonmagnetic conductors, such as copper, can be very effective (see page 373).
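The payoff from the distance strategy recommended above can be estimated by treating a compact source (such as a transformer) as a magnetic dipole, whose near field falls off roughly as 1/r³. This is a simplifying assumption (real sources depart from pure dipole behavior), but it gives a useful rule of thumb:

```python
import math

def dipole_field_relative(r: float, r_ref: float = 1.0) -> float:
    """Relative near-field strength of a magnetic dipole at distance r,
    normalized to the field at r_ref (B ~ 1/r^3 in the near field)."""
    return (r_ref / r) ** 3

def attenuation_db(ratio: float) -> float:
    """Express a field reduction ratio in dB."""
    return 20.0 * math.log10(1.0 / ratio)

# Doubling the distance from a transformer-like source:
print(attenuation_db(dipole_field_relative(2.0)))        # ~18 dB
# Moving a sensitive preamplifier from 0.3 m to 3 m away:
print(attenuation_db(dipole_field_relative(3.0, 0.3)))   # ~60 dB
```

A 60 dB improvement from simply relocating equipment compares favorably with the effort of building nested mumetal shields.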
At low frequencies, large amounts of interference can be reduced with several well-spaced layers of shields. For example, at 60 Hz, attenuation levels of 30, 60, and 90 dB may be achieved using 1, 2, and 3 nested mumetal cans, respectively [2]. In the presence of particularly strong fields, saturation of high-permeability shielding alloys can be a problem. The use of several shields made from materials with different permeabilities may be helpful in these situations. For example, a low-permeability material, such as steel, could form the outer shield, while the inner one could be made of a high-permeability alloy, such as mumetal. Deforming a high-permeability shield or exposing it to mechanical shocks can significantly reduce its effectiveness [2,30]. The shield must then be annealed in order to restore its permeability. If a shield becomes magnetized, vibration-induced electrical noise (microphonics) may be produced in the items that it is designed to protect (see Section 11.8.6). Magnetic shielding is discussed in Refs. [2], [10], and [33]. Often the easiest way of solving low-frequency magnetic-shielding problems is to discuss them with an engineer at a company that makes high-permeability shielding materials [28].
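The frequency dependence of shield-material choice described above follows from the skin depth, δ = √(2ρ/(2πf·μ₀μᵣ)): effective absorption shielding requires a wall several skin depths thick. A sketch, using approximate, illustrative material constants (real resistivities and permeabilities vary with alloy, field level, and frequency):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(freq_hz: float, resistivity_ohm_m: float, mu_r: float) -> float:
    """Skin depth in meters: delta = sqrt(2*rho / (2*pi*f * mu0 * mu_r))."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 * resistivity_ohm_m / (omega * MU0 * mu_r))

# Approximate constants: (resistivity in ohm-m, relative permeability)
materials = {
    "copper":  (1.7e-8, 1),
    "steel":   (2.0e-7, 1000),
    "mumetal": (6.0e-7, 50000),
}
for name, (rho, mu_r) in materials.items():
    print(f"{name}: {skin_depth(60, rho, mu_r) * 1000:.2f} mm at 60 Hz, "
          f"{skin_depth(1e6, rho, mu_r) * 1e6:.1f} um at 1 MHz")
```

Copper's skin depth at 60 Hz is of order 8.5 mm, so many centimeters of it would be needed for useful absorption there, whereas the high permeability of mumetal shrinks the skin depth (and provides flux-diverting shielding) even at the lowest frequencies. At radio frequencies all three become thin enough for ordinary sheet to work.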
11.2.3.4 Detection of a.c. magnetic fields

Magnetic fields can be detected with a loop antenna. This generally consists of a number of turns of wire wound on a mandrel, which are surrounded by a Faraday shield to prevent the pickup of electric fields [19].
11.2.4 Some EMI issues involving cables, including crosstalk between cables

The effect of interference picked up by analog (or possibly digital) cables passing through an electrically noisy environment can be minimized by several means. Cable shielding, shield grounding, the use of twisted wire pairs, and preferred methods of cable installation are discussed in Section 12.4. Cable shields have their limitations, and the use of shielding alone may not be sufficient to prevent interference. It is generally helpful to keep cables short, unless extra length is required to avoid noise sources, or to reduce ground-loop areas. Interference in cables (whether arising from capacitive, magnetic, or radiative coupling) usually takes the form of common-mode, rather than differential-mode, noise. Hence, the types of common-mode rejection techniques used to reduce ground-loop noise are effective in dealing with the general problem of cable interference. Matters can be improved, especially at frequencies less than 100 kHz, by balancing the entire signal transmission arrangement (not just at the receiving end), using balanced signal sources (drivers), transmission lines, and differential amplifiers or transformers [10]. If balanced circuits are employed, it may not even be necessary to use shielded cables in some cases. However, many of the electronic instruments that are commercially available tend to be unbalanced (having coaxial connectors, for example). In such cases, transformers can be used to balance the transmission line part of the system. These may be placed at the transmitting as well as the receiving ends. Another measure for reducing interference is to preamplify the signals as close as possible to their sources before sending them through the cables. If this is insufficient, the signals can be digitized near their source and sent as digital logic levels to their destination, where they can be converted back into analog form.
Digital signals are inherently resistant to many kinds of interference, and the use of digital methods makes it possible to employ error-correction techniques to ensure that the analog signal is accurately recreated at the receiving end. Balanced transmission techniques can be used with digital signals [3].
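The error-correction idea mentioned above can be illustrated with the simplest possible scheme: a triple-repetition code with majority voting. (This is purely a sketch; practical links use far more efficient codes, such as Hamming or CRC-based schemes.)

```python
def encode_repetition(bits):
    """Encode a bit sequence by transmitting each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode_repetition(coded):
    """Majority-vote decode; corrects any single flipped bit per triple."""
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)
    return out

data = [1, 0, 1, 1]
tx = encode_repetition(data)
tx[4] ^= 1                             # one bit corrupted by interference
print(decode_repetition(tx) == data)   # True: the error is corrected
```

The point is qualitative: because each received symbol is only ever a 0 or a 1, small amounts of induced noise do no harm at all, and redundancy lets larger corruptions be detected and repaired, something that is impossible with a raw analog signal.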
Finally, if even the digital signals are in danger of being corrupted during their passage, they can be sent through optical fibers, which are completely unaffected by electrical noise. (This method can also be used to prevent digital signals from contaminating an electrically quiet environment.) As mentioned earlier, analog fiber-optic links can be obtained. Such systems convert analog electrical signals to digital optical ones for transmission along optical fibers. Analog signals at frequencies extending from d.c. to the gigahertz range can be accommodated by these systems. The hole through which an optical fiber is brought into a shielded enclosure can allow the passage of RF energy. To prevent this, the opening should lead into a long and narrow metal tube, which is electrically connected to the enclosure, and acts as a waveguide below its cutoff frequency. It is necessary to be on the lookout for optical fibers that are covered with metal braid or armor, since these can lead to ground loops. Crosstalk between cables is a common problem. Along with the aforementioned EMI reduction methods, one of the most effective ways of preventing this (and often the most straightforward) is physically to separate the cables producing the interference from the ones receiving it. Quiet and noisy cables should be kept apart. For example, analog signal cables should not be routed directly alongside a.c. mains power ones. Instead, they should be placed with other analog cables, and preferably with those using similar signal levels. Generally, cables should be separated into clusters of different functional types, such as low-level analog, digital, high-level RF, relay control, d.c. power, and a.c. power groups. The use of neat cable layouts is generally beneficial. If quiet and noisy cables must cross, they should do so at right angles, in order to reduce the coupling between them. Crosstalk between cables can be reduced by placing them close to a ground plane (e.g. 
a large expanse of sheet metal grounded to the electronic system common), or (better) routing them through suitably grounded cable-raceways (see page 464) [19]. The use of optical fibers can be very helpful in situations involving crosstalk. Methods of diagnosing and dealing with crosstalk are discussed in Ref. [19].
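The waveguide-below-cutoff tube mentioned above is easy to size. For a circular tube, the lowest (TE11) mode cuts off at f_c ≈ 1.841·c/(2πa), and far below cutoff the attenuation is about 32 dB per diameter of tube length. A sketch with hypothetical feedthrough dimensions (not values from the text):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def te11_cutoff_hz(diameter_m: float) -> float:
    """Cutoff frequency of the lowest (TE11) mode of a circular tube."""
    radius = diameter_m / 2.0
    return 1.8412 * C / (2.0 * math.pi * radius)

def below_cutoff_attenuation_db(diameter_m: float, length_m: float) -> float:
    """Attenuation of a round tube used far below cutoff:
    alpha ~ 2*pi/lambda_c nepers per meter (i.e. ~32 dB per diameter)."""
    lambda_c = 2.0 * math.pi * (diameter_m / 2.0) / 1.8412
    return 8.686 * (2.0 * math.pi / lambda_c) * length_m

# A 5 mm diameter tube, 50 mm long, for an optical-fiber feedthrough:
print(te11_cutoff_hz(0.005) / 1e9)               # cutoff ~35 GHz
print(below_cutoff_attenuation_db(0.005, 0.05))  # ~320 dB
```

A tube only ten diameters long therefore blocks RF leakage essentially completely at any frequency of practical concern, while still passing the fiber.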
11.2.5 Professional assistance with EMI problems

If electromagnetic interference problems are proving particularly difficult to solve, it might be worth seeking the help of an EMC (electromagnetic compatibility) consulting company. Some of these firms offer services that are relevant to laboratory work. They may be able to track down causes of electromagnetic interference, give advice, and fix problems of this type.
11.3 High-voltage problems: corona, arcing, and tracking

11.3.1 The phenomena and their effects

High-voltage equipment is vulnerable to failures caused by electric discharges [34,35]. (Here, "high voltage" means greater than 600 V, although other definitions also exist.) Arcing is a continuous high-current discharge that involves the complete breakdown of the insulating gap between conductors at different potentials. It is the most well-known example of an electric discharge, and is often very noticeable, owing to the highly audible noise and the light that are produced. The term "spark" usually refers to an arc discharge of very short duration. A more subtle and less well-known effect is the phenomenon of corona, or partial discharge. This is a low-current effect, which occurs when the electric field is non-uniform, at the places where it is highest (such as near pointed conductors). Corona is visible in darkness as a faint red-blue glow, and is accompanied by low-level hissing or sizzling sounds. As the voltage is increased, the corona will eventually be replaced by a full spark or arc discharge. If the electric field is uniform, a spark or arc discharge will take place, when the voltage is increased sufficiently, without the preliminary corona. Electric discharges are frequently associated with the pungent smell of ozone, which often provides the first indication of their presence. Tracking is the breakdown of the surfaces of solid insulating materials as a result of long exposure to high electric fields. Organic insulating materials, such as plastics, are gradually carbonized at the location of a discharge. Such carbonized materials are themselves conducting, and their presence results in increased electrical stresses over the un-carbonized remainder of the insulator. On the surfaces of insulators, degraded regions are visible as dark tracks. Electric discharges can result in very serious damage to high-voltage apparatus and cables. Arcing produces the most immediate damage, as a result of the large amount of heat released by these discharges. This takes the form of melting, burning, pitting, or drilling of insulators and conductors. The damage produced by corona usually becomes serious only in the long term, but is generally more insidious than that caused by arcing.
The ozone produced by the corona is highly reactive, and will often attack organic insulation materials. It also damages vacuum seals and other objects made out of common rubbers. (This can be avoided by using special types of rubber – see pages 237 and 266.) Furthermore, in the presence of moisture, nitric acid is created in the discharge, which will corrode copper and other substances. If the high voltage is being used in vacuum equipment (e.g. an ion pump), such corrosion may result in a leak. Corona also results in tracking, which can eventually result in arcing. Even if no immediate damage results from discharges, they can produce malfunctions due to current leakage. For instance, this can result in errors in the measurement of current in a high-voltage circuit, or may cause circuit breakers in a power supply to trip. Electric discharges also generate RF interference (see page 371). Another potential problem is the production of light that can interfere with the operation of optical sensors [34]. Materials vaporized by discharges may also be deposited on optical surfaces. High voltages at high frequencies can be especially troublesome. Radio-frequency corona discharges release considerable energy in concentrated form, and can drill holes in insulators (see page 446). Insulation that could easily withstand a given voltage at d.c. may be breached by the same voltage at radio frequencies [36]. Data on the breakdown of insulation at frequencies up to 100 MHz are provided in Ref. [37].
11.3.2 Conditions likely to result in discharges

Corona and arcing are particularly likely to occur near conductors with sharp points, edges, or other discontinuities. These include: wire and cable terminations, the edges of rectangular conductors (often used at high frequencies because of their low inductances), nicks on conductors, and screws, bolts, and other hardware [34,38]. Even (non-conducting) plastic screws with sharp edges can be sources of corona. Small-diameter and stranded wires can also present corona problems. The windings of dry (as opposed to oil-filled) high-voltage transformers are frequent trouble spots. Small air gaps between solid insulators (especially those with high dielectric constants) and conductors or other insulators may be subjected to particularly high electric fields, and are also prone to corona. Insulated, but unshielded, high-voltage cables that are allowed to touch a grounded surface may generate corona, because of small air gaps between the insulation and the surface [38]. Loose cable ties and fabric are other possible sources [34]. Corona can take place at pores or inclusions in high-voltage insulators [39]. Tiny cracks in ceramic insulators are possible locations for this phenomenon, and are easy to overlook [40]. High humidity and wet conditions greatly increase the chances of electric breakdown (see footnote 8). Dampness works in combination with dust or other contaminants on insulators to increase electric leakage across them (see page 66). This problem is compounded by the tendency of high-voltage (d.c.) equipment to attract dust because of the large electric fields present. Locations near the sea are particularly at risk from moisture, because of the salt in the air. Surface contamination in general tends to increase corona and arcing problems. If corona takes place in a poorly ventilated space, ions can build up that reduce the breakdown voltage [38]. This may lead to more corona, or possibly arcing. Breakdown voltages generally drop as the air pressure is reduced, at least initially.
This is an important consideration when high-voltage devices are used in vacuum systems, or if equipment that is intended to operate at sea level is taken to a high altitude (e.g. a mountaintop observatory). The subject of voltage breakdown under vacuum conditions is discussed in Ref. [41]. Helium gas has a breakdown voltage which is lower by a factor of ten than that of air [42]. An insulating gap that could easily withstand a given voltage in air can arc if the same voltage is applied in helium. For this reason, helium leak testing must not be done around live high-voltage devices. High-voltage connectors and cables are discussed on pages 445–446 and in Section 12.4.
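The pressure dependence noted above ("at least initially") can be sketched with the textbook Paschen relation for a uniform air gap, V = B·pd / ln[A·pd / ln(1 + 1/γ)]. The constants below are commonly quoted approximate values for air and are for illustration only; real gaps, especially non-uniform ones, deviate substantially:

```python
import math

# Commonly quoted approximate Paschen constants for air (illustrative):
A = 15.0      # ionization parameter, 1/(Torr*cm)
B = 365.0     # related to ionization energy, V/(Torr*cm)
GAMMA = 0.01  # secondary-emission coefficient

def paschen_breakdown_v(pd_torr_cm: float) -> float:
    """Approximate breakdown voltage of a uniform air gap as a
    function of pressure * gap distance (Paschen's law)."""
    denom = math.log(A * pd_torr_cm / math.log(1.0 + 1.0 / GAMMA))
    if denom <= 0:
        return float("inf")  # no breakdown possible at this pd
    return B * pd_torr_cm / denom

# Breakdown voltage falls as pd is reduced from atmospheric values...
print(paschen_breakdown_v(760 * 0.1))  # 760 Torr, 1 mm gap: several kV
print(paschen_breakdown_v(5.0))
# ...passes through a minimum of a few hundred volts near pd ~ 1 Torr*cm...
print(paschen_breakdown_v(1.0))
# ...and rises again at very low pd (the "at least initially" above).
print(paschen_breakdown_v(0.4))
```

The minimum of a few hundred volts is why partially evacuated equipment is so vulnerable: a gap that holds off kilovolts at atmospheric pressure can break down at a small fraction of that voltage during pump-down.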
11.3.3 Measures for preventing discharges

The reliability of high-voltage equipment is very dependent on the quality of its design and construction [34]. For example, certain details of construction, such as preventing the formation of cracks or voids in insulating materials, and ensuring the integrity of adhesive bonds, are very important. Low-quality equipment is likely to have electric-discharge problems eventually. Corona and arcing can be prevented by [34,38,43]:
(a) keeping high-voltage equipment and cables clean and dry,
(b) moderating ambient humidity and dust levels,
(c) providing high-voltage equipment with good ventilation,
(d) smoothing sharp edges on conductors,
(e) increasing air-gap spacings,
(f) filling in air gaps with solid insulating substances (remember that small air gaps between solid insulators can be troublesome),
(g) using insulating materials with low dielectric constants (<6), to minimize electric fields across any small air gaps,
(h) for high-frequency circuits, using insulating materials with dissipation factors of less than 0.1, in order to reduce insulation degradation due to dielectric heating,
(i) shorting out small air gaps involving insulators, by coating the sides of the insulators facing the gap with conductive coatings and joining these electrically,
(j) using shielded (coaxial) high-voltage cables (see page 446), and keeping unshielded ones well away from grounded surfaces or cables,
(k) replacing or repairing insulation that has been damaged by corona or arcing (e.g. carbonization, or conductive deposits), or mechanically (e.g. chipped or cracked ceramics), and
(l) repairing or replacing conductors that have been damaged so as to produce sharp edges or points (e.g. broken strands of wire).
Footnote 8: This is not because of a reduction of the breakdown voltage of air – humid air actually has a slightly higher breakdown voltage than dry air.
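A rough way to see why sharp points are so troublesome (and why the edge-smoothing measures listed above help) is the isolated-sphere estimate: a conducting sphere of radius r at potential V has a surface field E = V/r. This is an idealized model with illustrative numbers, not a design calculation:

```python
def surface_field_v_per_m(voltage_v: float, radius_m: float) -> float:
    """Surface field of an isolated conducting sphere at potential V:
    E = V / r. A crude model of field enhancement at a rounded tip."""
    return voltage_v / radius_m

V = 10_000.0  # 10 kV, an illustrative operating voltage

# A sharp point (0.1 mm radius) versus a corona ring (10 mm radius):
print(surface_field_v_per_m(V, 0.1e-3) / 1e6)  # 100 MV/m, far above air's ~3 MV/m strength
print(surface_field_v_per_m(V, 10e-3) / 1e6)   # 1 MV/m, comfortably below it
```

A hundredfold change in tip radius changes the surface field a hundredfold, which is the entire rationale for corona rings, corona shields, and rounded solder joints.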
Corona problems frequently arise because of the gradual accumulation of contamination on exposed components. Hence, high-voltage apparatus should be cleaned on a regular basis. How often this must be done will depend, of course, on the environmental conditions and the equipment involved. For example, in the case of an ion pump power supply used in a benign laboratory environment (away from the sea, major sources of pollution, or airborne dust), a cleaning interval of two years might be appropriate. The use of air conditioners, dehumidifiers, and air-cleaning devices to moderate humidity and dust levels is worth considering. The glaze on a porcelain high-voltage insulator is important in preventing breakdown across its surface. Therefore, such insulators should be cleaned using techniques that do not chip or abrade this coating. Sand blasting is one method that is best avoided. If insulators are exposed to moisture, dust, or pollution, or are naturally hygroscopic, it may be useful to apply a moisture-repellent coating to their surfaces. A number of coatings that are specifically intended for high-voltage applications are available commercially. Perhaps the most useful types are based on RTV silicone rubbers. These substances (referred to as "high-voltage insulator coatings") are extremely resistant to corona, arcs, and ozone, and are very effective at repelling moisture [44]. Unlike many organic insulating materials, silicones do not form conductive byproducts when they are degraded by electric discharges. Other coatings, in the form of acrylic sprays or brush-on lacquers, are also available. Several means are available to construct or modify conductors for the purpose of reducing potential gradients. For example:
(a) sharp points and edges can be smoothed,
(b) conductors with larger diameters can be used,
(c) objects can be wrapped with commercially available semiconducting tape or metal mesh to form rounded surfaces,
(d) corona shields (i.e. rounded metal covers) can be placed over screws, nuts, and other sharp-edged items,
(e) corona rings (i.e. toroidal metal rings) can be attached at locations of discontinuities, and
(f) rounded solder joints can be created (see page 421).
RTV silicone rubbers are good materials for filling in small gaps. Transparent types are particularly useful, because they make it possible to see any electrical discharges taking place in filled regions. The most common single-component silicone rubbers generate acetic acid while they cure. These should be shunned in favor of electronic grades, which release only alcohol during this process. Some silicone rubbers come in the form of liquids, which can be poured into place before they set. Two-component silicones (involving a base compound and a curing agent) are particularly useful when thick layers are required. (Liquid fillers such as silicone and epoxy should be vacuum deaerated before being poured, in order to reduce the possibility that voids will be present in the solidified material.) Silicone tapes are available that can be used as gap-fillers. It is also possible to obtain films and adhesive tapes that are made from a corona-resistant polyimide, which is referred to as "Kapton CR" [45]. Another useful gap-filling material is "high-voltage putty". This is a non-hardening silicone-rubber-based compound that can be molded to fit irregular spaces. In some cases, greases (and particularly silicone types [46]) are effective gap-fillers, which have the advantage of being easily renewable. A special silicone grease, called "high-voltage insulator coating compound", can be used for this purpose [44]. Silicone greases in general are very difficult to remove completely from most surfaces, and can cause contamination problems. The application of silicone grease to a surface can make it very hard to attach an adhesive material, such as silicone rubber, at a later time.
These issues are discussed in Section 3.8.3. Air gaps that are too large to be conveniently filled with the aforementioned materials can be fitted with porcelain insulators.
11.3.4 Detection of corona and tracking

The odor of ozone (which can be smelt near photocopiers) provides a very noticeable indication of the presence of electric discharges [38]. There may be little visible evidence of damage to high-voltage components if the level of corona activity is small and of short duration. In more advanced stages, however, the deterioration becomes much more apparent. On organic insulation materials, carbon tracks may be observed. Cable insulation may be discolored (faded) and pitted, or have very fine cracks, perhaps displaying a dull surface finish. White powdery deposits can appear on rubber objects. If humidity levels are high, copper conductors may show signs of corrosion by the nitric acid created in the corona. This takes the form of deposits of blue-green powder. An AM radio can be used to detect corona and other discharges (see page 378). However, a corona discharge may not necessarily produce a detectable radio signal.
Even small corona discharges can be heard and located with the aid of a standard medical stethoscope [47]. The pickup is removed from the stethoscope, and a Plexiglas tube with 0.5 cm inner diameter, of some convenient length, is attached in its place. This technique is inexpensive, and may be adequate for many purposes. (NB: It goes without saying that safety is paramount when investigating high-voltage problems. Any worker who proposes to troubleshoot high-voltage equipment should find out the recommended and legally mandated procedures for doing this.) A potentially more sensitive method of sensing and locating corona involves the detection of ultrasound. Corona discharges usually produce substantial acoustic noise at ultrasonic frequencies. This noise can be readily sensed using commercial ultrasonic detectors, which operate in the range from 20 kHz to 100 kHz. These devices are not particularly expensive, and can also be used for other laboratory diagnostic tasks, such as leak detection (see pages 178–179). If the ultrasonic microphone is inserted in a plastic tube of several centimeters inside diameter and 1–2 m length, close probing of the high-voltage equipment may be carried out [48]. This method provides very good directional sensitivity, and allows the corona to be located to within a few centimeters. Ultrasound readily passes through small openings, so it is generally not essential to take the cover off an equipment housing in order to determine whether corona is occurring inside. For instance, the microphone can be held near some ventilation holes. In some situations, it may be desirable to create a sealable listening port in a housing that does not already have openings. Corona is visible to the eye under conditions of complete darkness. If it is not possible to arrange this (perhaps because of safety reasons), an alternative is to use a solar-blind ultraviolet camera. These devices can detect corona even in daylight. 
Although ultraviolet cameras are generally very costly to buy, they can be rented. One very sensitive method of detecting corona involves connecting an oscilloscope, through a suitable test circuit, to the high-voltage item in question [48]. Spurious voltage pulses appear if corona is present. Extensive discussions on the detection and locating of corona can be found in Refs. [47] and [48].
11.4 High-impedance systems

11.4.1 The difficulties

Analog systems involving high impedance levels (roughly, more than a few tens of kilohms) can suffer from several types of reliability problem. For example, low-level or high-precision high-impedance circuits are unusually susceptible to capacitively coupled (i.e. electric-field coupled) EMI. The appearance of a 50/60 Hz distorted sinusoidal signal (see Fig. 11.15) is often a manifestation of such interference. Radio-frequency fields sometimes cause problems as well [26]. A signal from a high-impedance source that is sent through a long cable is more likely to be degraded by interference than one originating from a low-impedance source. Capacitive coupling can also lead to crosstalk within a circuit or between cables.
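The impedance dependence described above can be modeled as a voltage divider formed by the stray coupling capacitance C and the circuit resistance R, giving |Vₙ| = Vₛ·ωRC/√(1+(ωRC)²). A sketch with illustrative values (the 1 pF stray capacitance and the mains voltage are assumptions, not figures from the text):

```python
import math

def coupled_noise_v(v_source: float, freq_hz: float,
                    c_stray_f: float, r_circuit_ohm: float) -> float:
    """Noise across a circuit of resistance R due to an interfering source
    coupled through stray capacitance C (single-pole divider):
    |Vn| = Vs * wRC / sqrt(1 + (wRC)^2)."""
    wrc = 2.0 * math.pi * freq_hz * c_stray_f * r_circuit_ohm
    return v_source * wrc / math.sqrt(1.0 + wrc ** 2)

# 230 V, 50 Hz mains coupled through 1 pF of stray capacitance:
print(coupled_noise_v(230.0, 50.0, 1e-12, 100e3))  # ~7 mV into 100 kilohms
print(coupled_noise_v(230.0, 50.0, 1e-12, 1e3))    # ~72 uV into 1 kilohm
```

With everything else held fixed, the induced noise scales directly with the circuit impedance in this regime, which is why the same stray field that is harmless in a 50 Ω system can swamp a megohm-level one.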
Fig. 11.15 Example of 50 Hz capacitively coupled noise from stray laboratory electric fields.

Signals from high-impedance sources can also be corrupted by voltages generated by triboelectric effects. These are caused by friction between dielectrics and conductors in a cable that is exposed to changing mechanical stresses. If a d.c. voltage is present between the conductors, changes in their mutual capacitance caused by shifting stresses can also result in noise. These noises can take the form of microphonics in the presence of vibrations. They may also arise as a result of thermal expansion and contraction of cables, or through handling. Very-high-impedance circuits are vulnerable to unwanted leakage currents, which pass through layers of moisture, dust, or other contaminants on insulator surfaces. This often becomes a problem when impedances are above about 10^7–10^8 Ω. Such currents can be very erratic. Leakage can be a major issue with circuit boards that have been insufficiently cleaned of soldering flux. Touching insulators and the bodies of components (such as high-value resistors) with bare hands can leave deposits that can also result in this behavior. These difficulties are often aggravated by high humidity levels. Dust on insulator surfaces tends to absorb atmospheric moisture and form a conducting path (see page 66). It is also possible for ionic contaminants to interact with metals in the circuit (e.g. copper printed-circuit-board tracks) to form electrochemical cells. These cells can cause currents in the nanoamp range to flow for considerable periods [26].
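The scale of the leakage problem follows directly from Ohm's law. A quick estimate (the bias voltage and surface resistance are illustrative values):

```python
def leakage_current_a(voltage_v: float, surface_resistance_ohm: float) -> float:
    """Leakage current through a contaminated insulator surface (Ohm's law)."""
    return voltage_v / surface_resistance_ohm

# A 10 V bias across a flux-contaminated board with 10^9 ohm surface leakage:
i_leak = leakage_current_a(10.0, 1e9)
print(i_leak * 1e9, "nA")  # 10 nA: enormous if the signal of interest is picoamps
```

A surface path of 10⁹ Ω, easily produced by flux residue or a fingerprint in humid conditions, thus injects currents thousands of times larger than the picoamp-level signals handled by electrometer circuits.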
11.4.2 Some solutions If it is necessary to send a signal from a high-impedance device a long distance through a cable, it should be buffered and/or amplified as close as possible to the device, so that a low impedance is presented to the cable. The use of Faraday-shielded enclosures and cables with high-impedance circuits is very important when low-level or high-precision signals are involved, in order to reduce capacitive coupling of EMI. Balanced circuits, as discussed on page 381, are highly effective at minimizing interference. It may also be helpful to increase the separation between high-impedance items and sources of interfering
11.4 High-impedance systems
electric fields, or to eliminate the sources. For instance, fluorescent lights often produce electric-field interference [49]. Incandescent lights are significantly better in this regard. CRTs (cathode ray tubes) are another possible source of such interference.

Ground planes (e.g. an expanse of sheet metal that is grounded to the common of a circuit) are very useful when it is impractical to use Faraday shields, or when such shields are insufficient [3]. The thickness of a ground plane is unimportant, except to provide robustness. If apparatus is being set up on a workbench, a temporary ground plane can be provided by covering the bench with a double layer of heavy-duty aluminum foil [19]. This foil should extend well beyond the area occupied by the apparatus and its associated wires and cables. The latter should be positioned near the ground plane, and preferably fixed directly to it. If a high-impedance circuit is to be constructed on a printed circuit board, a useful technique is to use double-sided board. The copper on one side is devoted to the circuit traces, and the other acts as a ground plane.

In order to avoid microphonics, and other effects produced by mechanical stresses, cables, wires, and electronic components that form part of a high-impedance circuit should be stabilized [26]. Conductors should be short, and tied or taped down to a rigid, non-vibrating structure, such as a workbench or a wall. Steps should be taken to minimize vibrations (see Section 3.5.3). Cables should not be exposed to temperature changes, which could introduce thermal stresses. Special twisted-pair cables are made that contain a semiconductive tape layer between the shield and the inner conductors; this reduces microphonic noise due to triboelectric effects by about 20 dB. Another low-noise cable is a coaxial type, which contains a layer of graphite between the shield and the inner dielectric. Very-high-impedance devices and circuits should be avoided, if possible.
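The benefit of buffering a high-impedance source before driving a long cable can be seen from a simple capacitive-divider model of interference pickup. The stray capacitance and input impedances below are assumed for illustration only.

```python
import math

# Mains interference coupled through a stray capacitance into an input
# resistance, modeled as a simple capacitive divider (values assumed).
def coupled_noise(v_source, freq, c_stray, r_input):
    """Magnitude of the voltage coupled into r_input through c_stray."""
    x = 2.0 * math.pi * freq * c_stray * r_input
    return v_source * x / math.sqrt(1.0 + x * x)

# 230 V, 50 Hz mains wiring coupling through ~1 pF of stray capacitance:
v_high_z = coupled_noise(230.0, 50.0, 1e-12, 10e6)  # into 10 Mohm: ~0.7 V
v_low_z = coupled_noise(230.0, 50.0, 1e-12, 1e3)    # into 1 kohm: ~72 uV
```

Dropping the impedance seen by the cable from 10 MΩ to 1 kΩ reduces the coupled 50 Hz pickup by four orders of magnitude, which is why buffering at the source is so effective.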
Systems that are vulnerable to problems caused by leakage currents must be kept scrupulously clean and dry. Special attention should be given to removing flux from circuit boards and connectors after soldering (see page 419). Care must be taken to ensure that skin secretions from the fingers are not deposited on insulating surfaces [26]. It is also necessary to ensure that insulators do not touch materials that can cause contamination (some common packaging materials can do this). Light contaminants can be removed from an insulator using a cleanroom swab soaked in spectroscopic-grade methyl alcohol. The insulator should then be dried thoroughly (leaving it for a few hours under low-humidity conditions, or blowing it with dry nitrogen) before use. The comments regarding the cleaning of optics in Section 10.6.5 are relevant. A procedure for removing contaminants from circuit boards, in cases where even the smallest leakage currents cannot be tolerated, is described in Ref. [30].

The selection of insulating materials for use in very-high-impedance systems is important. For example, polyethylene, Teflon®, and sapphire (the latter two in particular) are useful types. This is because of their high volume resistivity, resistance to water absorption, minimal piezoelectric and triboelectric effects (at least in the case of polyethylene and sapphire), and low dielectric absorption. On the other hand, nylon, Lucite®, and glass-epoxy (the latter is a circuit board material) are generally not so good in these regards. The term "dielectric absorption" refers to the ability of an insulator to store and release charge over long times. These topics are discussed in Ref. [26].

Humidity levels in the environment should be moderated [26]. If possible, the relative humidity should be 50% or less. If it is not possible to do this, insulator materials, such as
Electronic systems
Teflon®, should be chosen that repel moisture. Another possibility is to cover the insulator with a moisture-repellent conformal coating. Silicone-resin aerosol sprays are available for this purpose. These are particularly helpful in the case of printed circuit boards, which tend to absorb moisture. Methods for protecting very-high-impedance circuits from moisture are discussed in Ref. [30]. Humidity issues in general are discussed in Section 3.4.2.

The technique of guarding is an important one for reducing leakage currents and the problems caused by shunt capacitances in high-impedance circuits. It involves surrounding a high-impedance point in a circuit by another conductor that is made to be at nearly the same potential by a driver (i.e. an active buffer device) with a low-impedance output. Since there is essentially no difference in voltage between the high-impedance point and its surroundings, no leakage current flows from it. The method is analogous to the guard vacuum technique used to prevent air leaks in vacuum systems (see Section 6.8). For example, on printed circuit boards, guard foils can be positioned between traces that act as sources and receivers of leakage currents [30]. Guarding can also be applied to cables, and other parts of an apparatus. The technique is described in detail in Refs. [3] and [26].
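A minimal numerical sketch shows why guarding works. The leakage resistance and guard offset voltage below are assumed values, chosen only to illustrate the principle.

```python
# Guarding sketch: leakage is driven by the potential difference between
# a high-impedance node and its surroundings (illustrative values assumed).
def leakage(v_node, v_surround, r_leak):
    """Leakage current from the node to its surroundings."""
    return (v_node - v_surround) / r_leak

R_LEAK = 1e11  # ohm, leakage path across an imperfect insulator

# Surroundings grounded (no guard): full 10 V drives the leakage.
i_unguarded = leakage(10.0, 0.0, R_LEAK)     # 100 pA
# Guard driven by a buffer to within 1 mV of the node potential:
i_guarded = leakage(10.0, 9.999, R_LEAK)     # ~10 fA
```

Driving the guard from a low-impedance buffer also neutralizes most of the cable's shunt capacitance, since almost no a.c. current can flow across a near-zero voltage difference.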
11.5 Damage and electromagnetic interference caused by electrostatic discharge (ESD)

11.5.1 Origins, character, and effects of ESD

Static electricity frequently arises because of the buildup of electric charges due to friction between different materials. This is referred to as triboelectric charging. Although one normally thinks of friction between insulators in this context, friction between metals and insulators, or even between different metals, can also be responsible. A common example of electrostatic discharge, or ESD, is the spark that sometimes occurs when one touches a metal object after shuffling across a rug.

The voltages created by triboelectric charging can be very high – possibly several tens of kilovolts. This makes ESD a serious hazard to electronic devices [3]. The main dangers are posed by those who work with such devices and develop static charges during their activities. Pronounced levels of triboelectric charging are frequently correlated with low humidities. Hence, ESD problems are dependent on the seasons (often climbing dramatically in winter), the weather, and the local climate. Although anyone can generate damaging static, ESD phenomena may be accentuated by certain personal traits, such as having dry skin, wearing certain types of clothing, or having a tendency to fidget or shuffle the feet while walking or sitting. Perhaps somewhat surprisingly, just moving one's arm while wearing a wool sweater or a shirt (especially synthetic types) can produce several thousand volts [3]. The electrostatic voltages that can arise during a number of activities are listed in Table 11.1.

Although ESD is usually associated with air sparks, it is not necessary for an electric breakdown to take place in air for damage to occur [50]. For example, electric fields created
Table 11.1 Electrostatic voltages created during various activities, and at different relative humidity (RH) levels [50]

Activity                               20% RH    80% RH
Walk across a vinyl floor              12 kV     250 V
Walk across a synthetic carpet         35 kV     1.5 kV
Arise from a foam cushion              18 kV     1.5 kV
Pick up a polyethylene bag             20 kV     600 V
Slide a styrene box on a carpet        18 kV     1.5 kV
Use an aerosol circuit freeze spray    15 kV     5 kV
by charged surfaces near ESD-sensitive devices can easily generate voltages sufficient to damage them. A charged object does not have to come particularly close to a sensitive device to cause failure, and a spark need not take place during the event. In this sense, ESD is best characterized as a voltage surge, not an air spark.

Most semiconductor devices can be destroyed by ESD. However, some types are much more vulnerable than others. For example, small-geometry devices used in high-frequency RF (and especially microwave) electronics, and devices based on MOS technology, such as power MOSFETs, are particularly damage prone. Devices that contain a thin insulating barrier, such as MOSFETs and SIS junctions, are destroyed because of dielectric breakdown of the barrier. In some cases, such barriers may only be about 10 nm thick, or even less. Numerous devices are susceptible to voltages of less than 2000 V [51]. Many components can be damaged by ESD at levels of less than 100 V. Some vulnerable semiconductor (and other) electronic devices are [30,50,51]:
(a) microwave devices (operating frequency >1 GHz), such as Schottky barrier diodes, point contact diodes, SIS junctions, etc.,
(b) monolithic microwave integrated circuits (MMICs),
(c) discrete MOSFET devices,
(d) surface acoustic wave (SAW) devices,
(e) junction field-effect transistors,
(f) MOSFET input ICs,
(g) diode lasers,
(h) charge-coupled devices (CCDs),
(i) precision voltage regulator diodes,
(j) CMOS circuits,
(k) very-high-speed (digital) integrated circuits, such as computer processors,
(l) some operational amplifiers (op-amps),
(m) silicon-controlled rectifiers (SCRs) having Io < 0.175 A at an ambient temperature of 10 °C,
(n) precision (<0.1% variation) thin-film resistors.
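The scale of the hazard can be illustrated with the widely used human-body capacitor model. The ~100 pF body capacitance assumed below is a conventional rule-of-thumb figure, not a value given in the text.

```python
# Energy stored on a statically charged body, E = (1/2) C V^2.
# The ~100 pF human-body capacitance is a conventional assumption.
def stored_energy(capacitance, voltage):
    """Energy (J) stored on a capacitance charged to a given voltage."""
    return 0.5 * capacitance * voltage ** 2

C_BODY = 100e-12  # F, assumed human-body capacitance

# After walking across a synthetic carpet at 20% RH (35 kV, Table 11.1):
e_carpet = stored_energy(C_BODY, 35e3)   # ~61 mJ
# A body charged to only 100 V -- imperceptible to the person:
e_quiet = stored_energy(C_BODY, 100.0)   # ~0.5 uJ
```

Even the 0.5 µJ case, far below the threshold of human perception, is more than enough to rupture a 10 nm gate oxide, which is why unnoticed ESD events are so insidious.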
Discrete components are frequently more vulnerable to ESD than integrated circuits, since the latter often contain special circuits (usually involving diodes) that provide at least some protection. However, the maximum voltages that can be handled by such protective circuits typically do not exceed 2000 V [50]. Static discharges from the human body involving voltages below 3500 V cannot be felt [10]. Since such discharges also can neither be heard nor seen, this means that a damaging ESD event may go completely unnoticed. Complete and permanent destruction of a component is not the only possible outcome of ESD. Intermittent faults are sometimes regarded as being a very frequent occurrence [50]. More subtle degradation, such as changes in device parameters (as in the case of precision thin-film resistors), is another possibility. Yet another potential outcome of ESD is damage that does not result in immediate and complete failure of the device, but leads to it at some later time. This phenomenon, which is thought to be real but very rare, is called latent failure. Electronic circuits that are part of a completely assembled and enclosed electronic device (e.g. an instrument) are generally much less susceptible to ESD damage than individual components. This is because the completed devices are (or should be) provided with protective circuits in order to reduce the voltages experienced by their constituent parts. Also, enclosures are often designed to act as Faraday cages, which will intercept electric fields and discharges. Nevertheless, penetration of enclosures is possible. Possible paths for the entry of surges into equipment include, for example: through unshielded or inadequately shielded enclosures and I/O cables, into pins on unused connectors, along control shafts, or between the keys on keyboards [10]. Once a device enclosure is opened, susceptible components within are much more vulnerable to damage. 
The handling of circuit boards (such as computer cards) and hard-disc drives should generally involve suitable precautionary measures, as discussed below. Very-high-frequency (and especially microwave) devices and instruments are often unusually vulnerable to ESD, because protective devices degrade their performance. Hence, protection against high surge voltages (e.g. more than a few hundred volts) may not be feasible.

Although permanent ESD-induced failure of assembled and enclosed devices is generally unusual, temporary malfunction, caused by electromagnetic interference, is relatively common. This is because the energy required to produce interference is much less than that needed to cause damage. Such problems occur mainly in digital circuits, which (unlike analog ones) are prone to disruption caused by isolated electromagnetic impulses. Typical symptoms include unexplained lockups, resets, loss of data, and other strange and erratic behavior. Such events are called soft errors, in contrast to instances of hardware damage, which are referred to as hard errors [10].

ESD damage of components often results from [50]:
(a) touching or handling them with fingers or tools,
(b) allowing component leads to be touched by conductors,
(c) exposure to electric fields produced by spray-coating, and
(d) exposure to electric fields produced by charged insulators (e.g. plastics, including, in particular, packaging materials that are not ESD-safe).
11.5.2 Preventing ESD problems

The danger of ESD damage to MOSFETs, and other sensitive devices, is hard to overstate [3]. Despite this, there is often a marked reluctance to take the measures that are needed in order to prevent it. Some elementary, easy-to-implement, and inexpensive precautions for avoiding such problems are as follows [50].
(a) Keep the leads of ESD-sensitive devices shorted (e.g. using conductive black foam, or shorting clips) until they are ready for use.
(b) Store such devices in ESD-protective containers.
(c) Do not touch an ESD-sensitive device unless it is really necessary.
(d) If it is necessary to touch such devices, hold them by their cases, and avoid touching the leads, if possible.
(e) While working with such devices, wear a grounded conductive wrist strap (see footnote 9). (In an emergency, in the absence of a wrist strap, touch a grounded object before touching such a device.)
(f) Do not allow sensitive devices to touch troublesome materials, such as clothing.
(g) Allow such devices to touch only grounded conductive (or preferably antistatic) materials.
(h) Ground the tips of soldering irons used on such devices (see page 418).
(i) Keep common plastics well away (preferably >1 m) from such devices.
(j) Do not work on such devices in carpeted areas.
(k) Use a static-dissipative tablemat where ESD-sensitive work is being done. If one is not available, keep in mind that entirely wooden benches are preferable to other common types. Special static-dissipative tops for workbenches are available.
(l) Use an entirely wooden chair or stool in preference to those with plastic coverings or foam cushions. Alternatively, special conductive chairs are available.
(m) Wear close-fitting shirts, with sleeves that are short or rolled up above the elbow. Sweaters should not be worn. Shoes with leather soles are preferable to synthetic types.
(n) Avoid shuffling the feet or fidgeting while working on sensitive devices.
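The wrist strap of item (e) always incorporates a series current-limiting resistor. The 1 MΩ value assumed below is a common industry convention (not a figure from the text), and the sketch shows why it makes the strap safe without impeding static dissipation.

```python
# Why wrist straps contain a series resistor: it lets static charge bleed
# away while limiting body current on accidental contact with a live
# conductor. The 1 Mohm value is a common convention, assumed here.
def body_current_ma(v_contact, r_series):
    """Body current in mA for a given contact voltage and series resistance."""
    return 1e3 * v_contact / r_series

R_STRAP = 1e6  # ohm, assumed wrist-strap series resistance

i_mains = body_current_ma(230.0, R_STRAP)   # 230 V mains fault: 0.23 mA
```

A fault current of 0.23 mA is well below hazardous levels, yet with a body capacitance of order 100 pF the discharge time constant through 1 MΩ is only ~0.1 ms, so static charge still bleeds away essentially instantly.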
Conductive wrist straps can be unreliable, for a variety of reasons, and should be tested regularly. It is preferable to use a special wrist strap in conjunction with a continuous wrist-strap monitor, which sounds an alarm if the wearer is not adequately connected to ground [50]. In some cases, the wrist strap may not make good contact with the wearer because their skin is dry, and hence a poor conductor. In such cases, the application of antistatic lotion to the wrist can be helpful.

Footnote 9: Such wrist straps must always include a suitable current-limiting resistor, to prevent electrocution if the body should come into contact with, e.g., a conductor connected to the a.c. mains.

A very useful method of minimizing static buildup from walking on a tiled floor, or shuffling feet on it while sitting, is to coat it with a special type of floor wax, called a static-limiting floor finish [50]. It is important to do some investigation prior to purchasing such a finish, because some types are actually worse than ordinary floor wax [52]. If it is necessary to have a carpet in a given area (such as a room with computers that are experiencing ESD-related upsets), special conductive-fiber carpets can be installed [50]. While these are not as effective in reducing ESD as tiled floors coated with static-limiting floor finish, their use can result in a marked reduction in such problems. Objects with questionable static-dissipative properties can be coated with a topical antistat, which increases surface conductivity by attracting moisture. Although these substances are very useful in ESD control, they can cause contamination and (unless "chloride-free" types are used) corrosion [50]. A useful, but optional, ESD control measure is to keep the relative humidity above 30% at 21 °C, and preferably above 40% (see page 68).

The use of static-limiting floor finishes and humidity control are helpful backups to the primary methods of ESD control outlined in the above list. For example, people often forget or decline to wear wrist straps. Hence, such backup measures can act as a safety net to limit the severity of any resulting problems. Other precautions that reduce reliance on vigilance, such as removing problematic furniture from a room, can also be helpful. In situations when ESD is a serious concern (as when devices that are sensitive to voltages of 100 V are involved), a field meter is a very useful device to have [50]. This instrument makes it possible to determine whether objects pose an ESD hazard due to electric fields generated by static charges.

The use of the above ESD control measures is desirable while working with microwave equipment. This includes activities such as attaching signal cables to these devices, and cleaning their connectors. For example, a grounded static-dissipative tablemat should be installed in front of a microwave instrument. Conductive wrist straps should be worn while working on exposed connectors, and it is also a good idea to touch the grounded outer shell of the connector before proceeding with the work.
Damage to microwave equipment sometimes occurs when coaxial cables are connected to it, because of static charges that have built up on the cables' center conductors. Such charges should be dissipated by connecting the cable to a load before attaching it to the equipment.

Useful information about minimizing ESD problems at low cost can be found in Ref. [50]. The design of electronic equipment hardware and software to make such equipment less susceptible to ESD damage and interference is discussed in Ref. [10]. Hardware that is designed to withstand ESD is usually also resistant to radio-frequency interference.
11.6 Protecting electronics from excessive voltages

Electronic devices and circuits are often damaged by exposure to voltages for which they were not designed. For example, the input transistors of amplifiers and preamplifiers sometimes fail because of this [36]. Such problems can happen as a result of:
(a) static electricity,
(b) power-line surges (as discussed on pages 84–85),
(c) interruption of current passing through an inductive load,
(d) flashover (sparks or arcs) from nearby high-voltage conductors (this sometimes occurs in electron microscopes, for example),
(e) short circuits,
(f) component failure (e.g. a shunt resistor forming part of a potential divider goes open circuit), and
(g) human error, as when an instrument is mistakenly connected to some high-voltage source.

A variety of methods are available to protect devices and circuits from overvoltages. One approach is to place a crowbar across the voltage-carrying conductors. These devices switch from an insulating state to a highly conducting one if a given voltage level is exceeded, in order to cause surge currents to bypass the sensitive electronics. Such protective devices include, for example, spark gaps, SCRs, and triacs. Another method is to place a clamp across the voltage-carrying conductors. Such components keep the voltage across themselves at an approximately constant level, regardless of the size of the surge currents passing through them. These devices include metal oxide varistors (MOVs), avalanche diodes, and switching and rectifier diodes. Another class of clamps are the low-leakage diodes (such as GaAsP LEDs, shielded from light), which are useful if only extremely tiny leakage currents can be tolerated when the diode is not clamping [30,54]. A third approach is to separate possible sources of common-mode overvoltages from sensitive circuitry using devices that have a high series impedance for common-mode voltages. Such devices include optoisolators, isolation transformers, and common-mode filters.

There is no panacea for overvoltage problems, and each method and protective device has its strengths and weaknesses. Tradeoffs usually have to be made between response time, energy absorption ability, current capacity, minimum operating voltage, leakage current, parasitic capacitance, and other parameters. Sometimes different methods can be used in combination, in order to combine the abilities of different devices.

MOVs, which are widely used for protecting apparatus against overvoltages on the a.c. mains (see page 86), can be unreliable devices [53].
They may degrade if subjected to repeated surges, so that their clamping voltage diminishes, and their leakage current rises. Such degradation can eventually result in the explosion of an MOV. These devices can also explode if they are subjected to a single, exceptionally high, surge voltage. Some incidents have occurred in which such explosions have caused injuries or significant equipment damage. Another possible failure mode is that an MOV goes open circuit because of excessive temperatures. Open circuits, resulting from explosion or excessive temperatures, may go unnoticed, since a high series impedance is the normal state of an MOV. Short circuits are another possible result of exposure to very high overvoltages. The use of generous derating (see Section 3.2) is essential to avoid such problems. The selection of MOVs with these issues in mind is described in Refs. [19] and [54]. The subject of protecting circuits from overvoltages is discussed in detail in Refs. [19] and [54].
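As a sketch of the tradeoffs mentioned above, consider a simple series-resistor-plus-clamp input protection scheme. All component values below are assumed for illustration; they are not from the text.

```python
# Series resistor + voltage clamp: the resistor converts a surge voltage
# into a bounded current that the clamp must absorb (values assumed).
def clamp_current(v_surge, v_clamp, r_series):
    """Current (A) forced through the clamp during a surge."""
    return (v_surge - v_clamp) / r_series

def clamp_power(v_surge, v_clamp, r_series):
    """Power (W) dissipated in the clamp during the surge."""
    return v_clamp * clamp_current(v_surge, v_clamp, r_series)

# A 1 kV surge arriving at an input clamped to 5 V through 10 kohm:
i_surge = clamp_current(1000.0, 5.0, 10e3)   # ~0.1 A through the clamp
p_surge = clamp_power(1000.0, 5.0, 10e3)     # ~0.5 W in the clamp
```

A larger series resistor eases the clamp's burden, but at the price of added Johnson noise and reduced bandwidth in high-impedance circuits, an example of the parameter tradeoffs noted above.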
11.7 Power electronics

High-power electronic devices may be defined as those devices that are intended to supply or control electrical power, and which also require the implementation of certain measures
during design and construction in order to prevent overheating. Mains-operated power supplies and power amplifiers often fall into this category. Failure of the output transistors, as well as other components, is a common problem with these items.

High-power electronics are difficult to make reliable without expertise [30]. Hence, the design and construction of such devices should usually be left to professionals who specialize in this activity. Mains-operated power supplies of any type should almost always be purchased, rather than made in the laboratory. Historically, switching power supplies in particular (even commercial types) have had a reputation for poor reliability, although the situation has improved significantly over the years [3]. Nevertheless, such power supplies are still very difficult to design so that they work reliably. Linear power supplies are better in this regard. In any case, it usually pays to buy the best power equipment available.

Well-designed high-power equipment will generally be provided with devices that protect it against adverse loads and conditions. These include an output current-limiting circuit, a device that turns off the equipment if it gets too hot (over-temperature shutdown), soft-start protection (which turns on the equipment gradually in order to minimize stresses on internal components), and perhaps also an output overvoltage protection circuit. The latter is often some type of crowbar arrangement [3]. In the case of power supplies, considerations of this type are discussed in Ref. [55]. If inductive loads are being driven, it is important to make sure that the equipment is designed to cope with them, or to provide additional protection if necessary.

Cooling of high-power electronic equipment is a very important issue. Water-cooled devices should be avoided, if possible, in favor of air-cooled types (see page 263).
Since cooling fans can be a weak point in electronics (see Section 11.8.3), cooling by natural convection is preferable. However, its use is normally confined to equipment that operates at relatively low power levels (less than a few tens of watts). During the installation and use of high-power electronics, it is important to ensure that no conditions can arise that may prevent adequate cooling (see page 64). The cooling of electronic equipment is discussed in Ref. [3]. Aluminium electrolytic capacitors (or electrolytics – see Section 11.8.4) are often used in d.c. power supplies in order to reduce voltage ripple. These components tend to be unreliable, and have a limited lifespan. Regular replacement of electrolytics is recommended [34]. This should be done about every five years, except in the case of inexpensive power supplies, in which case the power supplies themselves should be replaced. The use of electrolytic capacitors is minimized in well-designed equipment. Power supplies used in some applications have interlock arrangements that are intended to shut off the supply if undesirable conditions arise. However, the interlock circuits themselves can be a source of problems, by shutting down the supply as a result of harmless noise events. Heavy filtering of such interlock circuits may be necessary in order to prevent this [56]. Direct-current power supplies that have developed a fault, due to a failed capacitor or rectifier, or bad connections, can produce unusual amounts of 50/60 Hz (or 100/120 Hz) voltage ripple [30]. This can create excess noise in analog circuits. A good way of testing for such a problem is to substitute a battery for the power supply, and see if the noise disappears. Small instruments are sometimes run from the small stand-alone power supplies that are commonly used to operate low-power devices (i.e. “a.c. power adapters” or “wall
warts”). Many different types of power supplies of this kind, with numerous different output voltages, are available from commercial sources. While convenient, they have the considerable disadvantage that their output connectors are not standardized with regard to voltage ratings or polarity [3]. That is, there is no agreed one-to-one correspondence between a particular connector style and a specific d.c. voltage and polarity. As a result, it is very easy to destroy an instrument by attaching it to the wrong power supply. Proper labeling of the connectors is essential.
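The excess ripple from a degraded filter capacitor, mentioned earlier in this section, can be estimated with the standard full-wave-rectifier approximation V_ripple ≈ I/(2fC). The load current and capacitor values below are assumed for illustration.

```python
# Full-wave rectifier ripple estimate, V_ripple ~ I_load / (2 * f_line * C)
# (textbook approximation; values assumed for illustration).
def ripple_voltage(i_load, f_line, capacitance):
    """Peak-to-peak ripple (V) across the reservoir capacitor."""
    return i_load / (2.0 * f_line * capacitance)

# A 1 A load on an unregulated 50 Hz supply:
v_healthy = ripple_voltage(1.0, 50.0, 4700e-6)  # healthy 4700 uF: ~2.1 V
v_aged = ripple_voltage(1.0, 50.0, 1000e-6)     # aged to 1000 uF: 10 V
```

Since the ripple scales as 1/C, an electrolytic that has quietly lost most of its capacitance announces itself directly as a rise in 100/120 Hz ripple, which is what the battery-substitution test detects.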
11.8 Some trouble-prone components

11.8.1 Switches and related devices

11.8.1.1 General points

After connectors and cables, mechanical switches are the most unreliable electronic components [3]. High-quality types should normally be used. The same is also true of electromechanical relays and thermostats, and the comments made in the following section about switches also generally apply to such devices. Most of the faults that occur in these components result not from failures of their mechanisms, but from problems with their contacts. Intermittent or permanent high contact resistances or open circuits, caused by contamination or corrosion of the contacts (frequently resulting from moisture and elevated temperatures), are particularly common. In the case of switches that control analog signals, poor contacts can also result in electrical noise. With some devices, such as toggle switches, mechanical failure is also a relatively frequent problem [57]. Hence, a sturdy construction is desirable. Well-designed switches employ self-wiping contact configurations, so that the contact surfaces are cleaned whenever the devices are activated [3].

In electromechanical relays, deterioration of the actuating coil (brought about by humidity, high temperature, and excessive voltages) is also a common mode of failure. Because of the very slow opening of their contacts, thermostats that make use of thermal expansion in a bimetallic strip are particularly prone to arcing and pitting of the contact surfaces. Defective thermostats often produce electromagnetic interference (see page 371).

In general, malfunctioning switches, relays, and thermostats should be replaced rather than cleaned or otherwise serviced. Commercial "contact enhancer" sprays (containing water-soluble polyalkylene glycols) produce only a temporary decrease in the resistance of poor contacts, and can cause corrosion [58]. These should generally be avoided.
11.8.1.2 Switch selection for low and high current and voltage levels

Different contact materials are used in low-level (voltage and current) and high-level switches [10]. In low-level types, electroplated gold alloys, which do not tarnish in air, are normally employed. The lack of tarnish allows dry switching to take place, which means
that signals involving currents of less than 100 mA and open-circuit voltages of less than 20 mV [26] can pass without hindrance across the contacts. Other contact materials, such as silver and its alloys, will generally tarnish, and can develop dry-circuit contact resistances of many ohms. Additional problems, such as signal distortion and noise, may also result – see Section 12.1. For example, it has been found that silver contacts in relays will develop tarnish films that can cause the contacts to act like diodes in some low-signal-level applications [59]. Such materials are suitable only for higher currents and voltages, which are able to break down the tarnish layers.

Switches designed for use at low levels are not suitable for high voltage and current applications (involving perhaps more than a few tenths of a V·A), because the gold layer will evaporate under such conditions. (A possible outcome of using a low-level relay in a high-level application is that the contacts may weld together [30].) The manufacturer's data sheet should always be consulted about what currents and voltages can be used with any switch. Some general-purpose relays are available that can be used with either high or low currents [10]. These contain contacts in which gold is electroplated over silver. However, if such a relay is used with high currents, the gold plating quickly evaporates (leaving the silver behind), and the device is then no longer suitable for dry switching applications.

Fig. 11.16 Protection arrangements for switches controlling inductive loads. In the case of d.c. currents, a diode placed across the load is one form of protection (a). For a.c. currents, an RC snubber (R = 100 Ω, C = 0.05 µF) can be used (b). The R and C values shown are suitable for small inductive loads operated from the a.c. mains – see Ref. [3].
11.8.1.3 Switching large inductive loads

Switches can severely degrade when they are used to interrupt the flow of current through a large inductive load, such as a relay, transformer, or electric motor. The resulting voltage surge across the contacts (which may reach 1000 V in some cases [3]) causes arcing, with consequent pitting of the contact surfaces. Such discharges will soon cause the switch to fail. The voltage surge and subsequent arcing can also cause electromagnetic interference (this is a very common problem).

In the case of d.c. currents, this behavior can be prevented by placing a diode across the inductor (Fig. 11.16a). When the switch is opened, the diode starts conducting, and allows
11.8 Some trouble-prone components
399
the energy in the inductor to dissipate gradually. If a.c. currents are being passed through the inductor, the diode scheme cannot be used. Instead, an RC snubber network is often employed to reduce the magnitude of the voltage surge (Fig. 11.16b). Such devices are available in modular form from commercial sources. An alternative is to use a metal-oxide varistor (see Section 11.6). Power-supply transformers should always be provided with a snubber or a varistor, in order to protect the on/off switch. These issues are discussed in Ref. [3]. A more extensive treatment is provided in Ref. [10].
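The size of the surge that a snubber must tame can be estimated from the lossless LC ring-up formula V_peak ≈ I₀√(L/C): the interrupted inductor current rings into whatever capacitance sits across the opening contacts. The component values below are assumed for illustration, and the snubber resistor (which damps the ring) is ignored.

```python
import math

# Peak contact voltage when interrupting current in an inductive load:
# the current rings into the capacitance across the opening contacts,
# V_peak ~ I0 * sqrt(L / C). Lossless estimate; values assumed, and the
# damping effect of the snubber resistor is neglected.
def peak_voltage(i0, inductance, capacitance):
    """Peak ring-up voltage (V) across the opening contacts."""
    return i0 * math.sqrt(inductance / capacitance)

L_COIL, I0 = 0.1, 0.5  # 100 mH coil carrying 0.5 A (assumed)

v_stray = peak_voltage(I0, L_COIL, 50e-12)      # ~50 pF stray: ~22 kV (arcs)
v_snubbed = peak_voltage(I0, L_COIL, 0.05e-6)   # 0.05 uF snubber: ~700 V
```

The snubber capacitor lowers the peak voltage by a factor of √(C_snub/C_stray), here roughly thirtyfold, which is what keeps the contacts below arcing voltages.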
11.8.1.4 Derating factors

The derating (see Section 3.2) of switches used to carry high currents and voltages is important. If a switch is used to operate a resistive or capacitive load, a current derating factor of at most 0.8 should be employed, and should preferably be 0.5 for the highest reliability [60]. If a switch is used to operate an inductive load, a derating factor of at most 0.5 should be employed, and should preferably be 0.3 for the highest reliability.
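These rules can be expressed as a quick check. The table structure and function below are illustrative only; the factors themselves are those quoted above.

```python
# Hedged sketch of the switch derating rules quoted in the text:
# maximum and preferred current derating factors by load type.

DERATING = {
    # load type: (maximum factor, preferred factor for highest reliability)
    "resistive":  (0.8, 0.5),
    "capacitive": (0.8, 0.5),
    "inductive":  (0.5, 0.3),
}

def max_switch_current(rated_current_a, load_type, conservative=True):
    """Largest load current to place on a switch with the given rating."""
    max_factor, preferred = DERATING[load_type]
    factor = preferred if conservative else max_factor
    return rated_current_a * factor

print(round(max_switch_current(10.0, "inductive"), 2))         # 3.0 A preferred
print(round(max_switch_current(10.0, "resistive", False), 2))  # 8.0 A absolute maximum
```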
11.8.1.5 Alternatives to mechanical switches, relays and thermostats for improved reliability

Switches are often used to sense the position of a mechanism (as in the case of limit switches – see page 224). Although this function is often carried out by mechanical microswitches, improved reliability can be achieved by using a magnetic reed switch arrangement (i.e. a reed proximity switch). In such a scheme, a permanent magnet on the moving mechanism actuates a stationary reed switch. The latter contains a contact that forms part of a ferromagnetic flexure. This moveable flexure-contact (along with its fixed counterpart) is hermetically sealed in a glass capsule. Because their contacts are completely protected from contamination, reed switches are highly reliable, in comparison with ordinary types, and last for a very long time. An even better setup, which contains no moving parts, involves using an electronic Hall sensor in place of the reed switch. Such an arrangement is called a Hall-effect proximity switch. A reed relay is a form of electromechanical relay that makes use of a reed switch, as described above. A coil on the outside of the capsule couples to the ferromagnetic flexure, and thereby actuates the relay. Reed relays are often used to switch low-level signals. Other alternatives to ordinary electromechanical relays are MOSFETs and solid-state relays [3]. These components, which use semiconductor devices to carry out the switching, have no moving parts. They can also be highly reliable and are extremely long-lived (in terms of the possible number of switching operations). Solid-state relays that are intended for use with a.c., and which employ a zero-crossing switching design,10 have the advantage of not generating spikes and noise on the power line (which may cause interference – see page 84).
10. These devices switch on when the voltage is zero, and switch off when the current is zero.
Electronic systems
Electronic thermostats (or temperature controllers) are very common. With these, temperatures are sensed by a solid-state device, and the resulting control signal is used to operate either a solid-state relay or an electromechanical one. Electronic thermostats are considerably more reliable than mechanical types. They also produce much less electromagnetic interference [40]. Although manually operated switches with no mechanical contacts are manufactured, they are not as widely used as solid-state relays, electronic thermostats, or position detectors based on Hall sensors. Generally, semiconductor-based switches (including solid-state relays and the like) are considerably less tolerant of current and voltage overloads than devices with mechanical contacts [61]. They can be protected from these with fast-blow fuses (not standard slow-blow ones), snubbers, and metal-oxide varistors (see Section 11.6).
11.8.2 Potentiometers

After connectors, cables, and switches, potentiometers and trimmers are the most unreliable electronic components [3]. As with connectors and switches, the contacts are the main source of trouble. Potentiometers (or pots) tend to get electrically noisy as they wear out. Such noise is particularly noticeable while they are being adjusted. Dust or debris that enters a pot can cause erratic behavior. In the case of old wire-wound potentiometers, contact oxidation can also be a problem. It is possible for a potentiometer’s wiper contact to go open circuit temporarily while it is being moved. This may have consequences (possibly in the form of damage) for other components in an electronic circuit, unless care is taken with the circuit design [24]. It is also important to ensure that the maximum current limit of the wiper contact is not exceeded, and to pay attention to derating (see Section 3.2). Vibrations can reduce the stability and lifetime of potentiometer contacts. Malfunctioning potentiometers should generally be replaced. Cleaning the contacts may have a beneficial effect, but this is likely to be only temporary [61]. The use of commercial “contact enhancer” sprays should be avoided (see page 397).
11.8.3 Fans

Failures of cooling fans are a frequent problem in electronic equipment. Lifetimes are usually short: small fans often fail after three to four years [62]. The use of fans causes other difficulties as well, which is why natural convection should be used for cooling apparatus whenever possible. For example, unless a filter has been provided, fans will bring dust into electronic enclosures. If a filter is present, then this must be cleaned periodically. Vibrations and acoustic noise from fans can also be a problem, especially towards the ends of their lives. Large, slow-moving fans can be obtained that will minimize noise and vibrations. It is also possible to obtain special fan mounts that decouple the fan from the electronic enclosure. High-quality fans with long-rated lifetimes should be selected. Normally (and especially if air temperatures are high) fans with ball bearings last longer than ones with plain bearings. It is possible to obtain fans with service lives of more than 20 years [63,64].
High humidity levels can be very hard on bearings (see page 66) and other parts of a fan. Some fans are specially designed to operate under such conditions. Guidance on the use of fans – including selecting them to satisfy cooling requirements, and to reduce vibrations, acoustic noise, and electromagnetic interference from stray magnetic fields – is provided in Ref. [65].
11.8.4 Aluminium electrolytic capacitors

Although electrolytic capacitors (or electrolytics) can provide huge capacitances in a given volume, they are, in just about every respect, the most unreliable capacitors. They have short lifetimes, are noisy (exhibiting “popcorn noise,” even when they are operating normally) and prone to drift (with extremely poor temperature stability) [3,24]. When they fail, they can leak electrolyte over other parts of an electronic circuit and cause corrosion [61]. Electrolytics are usually polarized, and if they are abused (e.g. by the application of an a.c. voltage, or a d.c. voltage with the wrong polarity), they can explode with considerable force [30].11 Because of their noise and instabilities, electrolytics are not suitable for use in signal paths or timing circuits. For such applications, other capacitors, such as polypropylene film types, are much better [30]. Electrolytics are really only appropriate for use in power supplies (where they are widely employed), and even there they should be used sparingly. The lifetimes of these capacitors are reduced considerably by elevated storage and operating temperatures. If the core operating temperature of an electrolytic is increased by 10 °C, its lifespan will be reduced by about a factor of two [55]. Good-quality types, which are sealed and rated for use at 105 °C or above, should be selected. High voltages will also greatly reduce the lifespan of an electrolytic. The operating voltage and ripple currents should be derated by 50% (see Section 3.2) [55]. Storage temperatures of between 5 °C and 25 °C are recommended. Electrolytics can be damaged if they are allowed to freeze. In power-supply applications, solid tantalum capacitors are a better alternative to aluminium electrolytic types. Tantalum capacitors, which are also polarized, can explode if subjected to electrical abuse.
In this regard they are more dangerous than electrolytics, since their bodies (which are made of sintered metal) fragment to form shrapnel [24]. Extensive information about the problems that may be encountered while using capacitors in general is provided in Refs. [30] and [53].
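The 10 °C halving rule quoted above can be turned into a rough life estimator. The rated life and the temperatures in the example are hypothetical; manufacturers publish the actual figures on their data sheets.

```python
# The "factor of two per 10 degC" rule from the text, as a quick estimator.
# Example numbers (rated life, temperatures) are invented for illustration.

def electrolytic_life_hours(rated_life_h, rated_temp_c, core_temp_c):
    """Expected life, halving for every 10 degC the core runs above its
    rated temperature (and roughly doubling for every 10 degC below it)."""
    return rated_life_h * 2 ** ((rated_temp_c - core_temp_c) / 10.0)

# A hypothetical 2000-hour, 105 degC part run with its core at 65 degC:
print(electrolytic_life_hours(2000, 105, 65))  # 32000.0 hours, roughly 3.7 years
```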
11.8.5 Batteries

Batteries that are designed to be used once (i.e. are non-rechargeable) are referred to as primary batteries. These include zinc–carbon, alkaline manganese, and various lithium types. Those that are designed to be recharged are referred to as secondary batteries. This category includes nickel–cadmium, nickel metal hydride, lithium-ion and lead–acid kinds. Primary batteries have a limited shelf life, and are affected by operating and storage temperatures. Generally, these (and indeed all batteries) should be kept cool.
11. Hence, eye protection should be used when working with electrolytic capacitors.
It is common to find that some indispensable, but infrequently used, primary-battery-operated device is non-functional due to depletion of the battery. The dead battery may also have leaked corrosive electrolyte over the interior of the device. Furthermore, one often discovers that backup batteries that were being kept in a drawer have also discharged. Such problems are especially serious with the older zinc–carbon batteries, which have a three-year shelf life. They can also occur with the newer (and more common) alkaline manganese types, which normally have a shelf life of five years. A good way of minimizing surprises of this type is to use lithium primary batteries. These offer a shelf life of up to ten years at room temperature, have a very high energy density, and can withstand a much larger range of operating and storage temperatures than other batteries. For lithium–manganese 9 V batteries, operating temperatures of −40 °C to +70 °C are typical. In the case of lithium iron-disulphide 1.5 V cells, the range is −20 °C to +70 °C. (Storage and operation at the higher temperatures will reduce the lifespan.) Dead-battery problems can also be reduced by selecting electronic devices that automatically shut themselves down if they have not been used for a certain time (this feature is sometimes called “auto power off”). For example, many of the better multimeters have this ability. Short-lived batteries should be removed from seldom-used devices and stored separately, to prevent damage from leaking electrolyte. Secondary batteries also have limited lifespans, which are determined both by operating and storage temperatures, and the details of how they are used. For nickel–cadmium and sealed lead–acid batteries, the maximum number of complete charge/discharge cycles ranges from 250 to 1000, depending on how they are charged and discharged [3].
The longevity of secondary batteries can be significantly diminished if they are very deeply discharged, or charged or discharged very quickly. Under optimum conditions, lifespans of roughly two to four years can be expected for nickel–cadmium batteries, and five to ten years for sealed lead–acid ones. Upper operating temperature limits of about 50 °C are typical. Gel-type lead–acid batteries (often called “sealed lead–acid batteries”) are considered to be highly reliable secondary batteries, and are used in situations where reliability is essential [66]. They are also maintenance-free, not too expensive, and can last a relatively long time (up to about 12 years at 23 °C). In contrast with wet lead–acid batteries (the kind generally used in cars), gel-types are clean, and will not leak acid [3]. Such batteries can be very useful in applications where an isolated low-noise power source is needed, including experiments involving low-level signals – see page 361. Since the ability of secondary batteries to hold a charge depends on details of their history, circumstances can arise in which they become completely discharged in an unexpectedly short time [67]. (Gel-type lead–acid batteries can be very helpful here, but they are not a complete solution.) It is possible to buy automatic battery-charger/maintainers that will periodically test a battery when it is not being used, and thereby reduce surprises of this kind. However, if the ability of a battery to provide power for a predictable length of time is essential, it is probably best to use a brand-new primary battery, and replace it frequently. For instance, this might be desirable if the battery was being used in an experiment that could be done only once. Unusual cases have occurred in which primary and secondary lithium batteries have either caught fire or exploded. Such events have generally been the result of abuse of these devices,
such as causing mechanical deformation or other damage, overheating, improper electrical connection, short circuiting, and recharging them when they have not been designed for this. Counterfeit and substandard batteries have been implicated in some of these incidents, and hence lithium batteries should be purchased only from reputable suppliers (see also Section 4.6.5). Lead–acid batteries can explode if excessive current is drawn from them [30].
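As a back-of-envelope aid, the cycle counts quoted above (250–1000 full charge/discharge cycles for nickel–cadmium or sealed lead–acid types) can be combined with a calendar lifespan to estimate when a secondary battery will need replacement. The usage figures in the example are hypothetical.

```python
# Rough service-life estimate for a secondary battery: the battery fails at
# whichever comes first, its cycle budget or its calendar lifespan.
# Cycle limits are from the text; cycles/day and calendar life are invented.

def service_life_years(cycle_limit, cycles_per_day, calendar_limit_years):
    """Years until either the cycle budget or the calendar lifespan runs out."""
    cycle_years = cycle_limit / cycles_per_day / 365.0
    return min(cycle_years, calendar_limit_years)

# A sealed lead-acid pack cycled fully once a day, ~10-year calendar life:
print(round(service_life_years(1000, 1.0, 10.0), 2))  # 2.74 years (cycle-limited)
```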
11.8.6 Low-frequency signal transformers

Signal transformers operating at frequencies below a few tens of kilohertz tend to be susceptible to interference from ambient magnetic fields. Transformers with toroidal cores are exceptions to this. However, owing to the difficulty of constructing such devices, especially if Faraday shields between the windings are needed, they are uncommon in comparison with non-toroidal EI core types [2]. Usually, interference can be prevented by surrounding the transformer with magnetic shields, and high-quality signal transformers should generally be provided with these. Magnetic shielding issues are discussed on pages 380–381. In some cases, vibrations may be a cause of noise (i.e. microphonics) in transformers. This can be especially serious if the core of a transformer has become magnetized, which is likely to occur if a sizeable d.c. current has been passed through its windings [2]. Magnetized transformer cores are usually noisy even in the absence of vibrations. The existence of magnetization can be established by looking for second-harmonic signal distortion. Magnetization of a transformer’s core often occurs when the continuity of its windings, or their resistance, is tested with an ohmmeter. A d.c. current of more than a few hundred microamps may be sufficient to do this in some cases. Hence, winding parameters should generally not be measured using these devices. Alternating-current methods should be employed if such testing is necessary. A method of demagnetizing transformer cores is described in Ref. [2]. Microphonic behavior can also occur if the transformer’s magnetic shield has become magnetized. This can happen if the shield is touched with a strongly magnetized tool, or if the transformer has been mounted on a magnetized steel chassis. These problems can be cured (after the source of the magnetizing field has been removed) by using suitable demagnetizing (or degaussing) procedures – see Ref. [2].
Although degaussing should be carried out in any event if a transformer or its shield is magnetized, microphonics can also be reduced by mounting the transformer on a vibration-isolating material, such as foam rubber [32].
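The second-harmonic test mentioned above can be sketched numerically: a magnetized core responds asymmetrically to the excitation, generating even harmonics, so the ratio of the component at twice the drive frequency to the fundamental is a useful indicator. The synthetic waveform and the 5% harmonic level below are invented for the example; a single-bin DFT stands in for whatever spectrum analyzer or lock-in would be used in practice.

```python
import math

# Illustrative second-harmonic distortion check. A synthetic waveform
# (fundamental plus 5 % second harmonic, as a magnetized core might
# produce) is probed with single-bin DFTs at f0 and 2*f0.

def dft_magnitude(samples, cycles):
    """Amplitude of the component completing 'cycles' periods in the record."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * cycles * i / n) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * cycles * i / n) for i, s in enumerate(samples))
    return 2.0 * math.hypot(re, im) / n

n = 1000
wave = [math.sin(2 * math.pi * 10 * i / n) + 0.05 * math.sin(2 * math.pi * 20 * i / n)
        for i in range(n)]

ratio = dft_magnitude(wave, 20) / dft_magnitude(wave, 10)
print(round(ratio, 3))  # 0.05 -> significant second harmonic: suspect a magnetized core
```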
Further reading

One of the most useful sources of information on electronics as a whole is Ref. [3]. The prevention of electromagnetic interference in general is discussed in Ref. [10]. This has helpful summaries at the end of each chapter, and an overall summary appears at the end of the book. Methods of identifying and fixing interference problems in general are covered in Ref. [19]. (This book is also very useful.) Grounding and ground loops
are discussed in Refs. [2], [3], [6], [10] and [19]. (References [2] and [3] are especially helpful on these subjects.) Radio-frequency interference is well covered in Refs. [10], [19] and [27]. Electromagnetic interference problems arising in experiments involving very powerful electrical pulses of short duration are dealt with in Ref. [13]. The troubleshooting of analog electronics (at the basic circuit and component level) is discussed in Refs. [24] and [30]. Both analog and digital electronic issues are covered in Ref. [3].
Summary of some important points

11.2 Electromagnetic interference

11.2.1 Grounding and ground loops

(a) Correct grounding is highly important for the reliability of electronic systems – in its absence, electrical noise and equipment malfunctions are a frequent result.
(b) Ground loops are a very common and problematic outcome of uncontrolled ground topologies.
(c) Usually, the most troublesome ground loops in an electronic system involve multiple connections to the building a.c. mains safety-ground via equipment power cords.
(d) They can also be formed by cables that interconnect various instruments in an electronic setup.
(e) Ground loops can be a particularly troublesome source of noise in sensitive electronics working at low frequencies (much less than 1 MHz).
(f) It is very important to plan ground topologies whenever an electronic system is configured.
(g) An indispensable tool in designing and maintaining ground networks is the ground map.
(h) The most basic method of avoiding ground loops is to ensure that contacts are not made between an electronic system and ground at more than one point.
(i) Do not have unnecessary ground connections in a low-frequency, low-level electronic system.
(j) For reasons of safety and legality, ground loops should never be broken by disconnecting power-cord safety grounds from equipment run off the a.c. mains.
(k) Two useful ways of facilitating the safe and legal removal of safety-ground connections are to power electronic devices either from batteries, or by optical means.
(l) Ground-loop noise can also be reduced by using differential amplifiers, semi-floating input grounds, transformers, and isolation amplifiers placed in signal paths.
(m) The use of audio transformers is a particularly effective and convenient method for breaking low-frequency ground loops.
(n) Common-mode chokes are useful for reducing radio-frequency ground-loop noise.
(o) When interconnected items of equipment must be powered from the a.c. mains, ground-loop noise can be reduced by plugging their power cords into the same outlet strip or branch circuit.
(p) Ground-loop noise appearing across signal cables can be reduced by keeping these cables as short as possible.
(q) A ground loop tester is a very useful device for detecting and locating ground loops in an electronic setup.

11.2.2 Radio-frequency interference

(a) Although it is generally not as common a problem as ground-loop-related noise at 50/60 Hz and their harmonics, interference from radio-frequency signals can be a cause of noise and malfunctions in sensitive electronic equipment.
(b) Radio-frequency interference (RFI) does not just affect devices that work at radio frequencies. Because of nonlinear effects in electronic circuits, RFI can affect apparatus that operates at low frequencies – even d.c.
(c) If it seems likely that apparatus is going to be affected by RFI, it is much better to take this into account during the design and construction stages, rather than applying fixes when interference problems actually arise.
(d) Shields should always be provided for electronic circuits that handle low-level signals, especially if these involve high impedances.
(e) Circuits can be shielded from RF energy by conducting metal (e.g. steel or copper) enclosures.
(f) In order to be effective as a shield, an enclosure must be grounded to the common of the electronic system.
(g) The most leak-prone parts of RF shields are seams and joints.
(h) Even an extremely narrow slot in an RF shield can prevent it from functioning.
(i) Radio-frequency energy often enters electronic devices through their leads (e.g. signal cables, power cords, etc.). Both input and output signal cables can act as pathways.
(j) Filters should be provided for all conductors passing into a shielded enclosure.
(k) In selecting capacitors or inductors for use in filters, account must be taken of the frequency ranges over which they are able to function. Capacitors become inductors, and inductors become capacitors, at sufficiently high frequencies.
(l) Power-line filters are available, and should be used routinely in all electronic instruments, as well as in any equipment that contains switching power supplies.
(m) Ground loops can cause or worsen RFI problems.
(n) The presence of RFI is often established by noting a correlation between electronic device problems and the operation of known RF emitters.
(o) Broadband RF energy (e.g. from electric discharges and brush-type electric motors) can be detected and located using a hand-held AM radio.
(p) Narrowband RF energy (e.g. from TV stations and induction heating equipment) can be detected using a simple telescopic antenna connected to an oscilloscope or a spectrum analyzer.
11.2.3 Interference from low-frequency magnetic fields

(a) Some items (e.g. cables carrying low-level, low-frequency signals, and signal transformers) are sensitive to low-frequency magnetic fields.
(b) The most general solution to magnetic interference problems, and often the easiest to implement, is to increase the distance between the source and the affected items.
(c) In the case of conductors, magnetic interference is normally reduced by reducing open-loop areas (e.g. by twisting wires).
(d) In order to reduce problems involving signal transformers and inductors, magnetic shields, made of high-permeability materials, can be employed.

11.2.4 Some EMI issues involving cables, including crosstalk between cables

The pickup of electromagnetic interference by cables, or coupling between them (crosstalk), can be minimized by: cable shielding and conductor twisting measures, keeping cables short, using balanced interfaces, separating quiet and noisy cables, and using optical fibers to convey signals, data and power.

11.3 High-voltage problems: corona, arcing, and tracking

(a) High-voltage equipment is vulnerable to damage and malfunction as a result of electrical discharges, including: corona (partial discharges), arcing (high-current discharges), and tracking (breakdown of solid insulating materials).
(b) Electrical discharges are frequently associated with the pungent smell of ozone, which often provides the first indication of their presence.
(c) Arcing is a high-current phenomenon that can produce immediate damage because of heating.
(d) Corona is more subtle than arcing, yet can produce long-term damage to materials as a result of the release of ozone and (in the presence of moisture) nitric acid.
(e) Both arcing and corona can produce radio-frequency interference.
(f) Because of the low breakdown voltage of helium gas, helium leak testing must not be done around live high-voltage devices.
(g) Some important measures for preventing electric discharges are: (i) keeping high-voltage equipment and cables clean and dry, (ii) moderating ambient humidity and dust levels, (iii) smoothing sharp edges on conductors, and (iv) filling in air gaps with solid insulating substances (including small air gaps between solid insulators).
(h) One very effective method of sensing and locating corona is to use an ultrasonic detector.

11.4 High-impedance systems

(a) High-impedance electronics are prone to a number of reliability problems.
(b) Capacitively coupled electromagnetic interference can be an especially severe problem in low-level or high-precision circuits with impedances of (very roughly) more than a few tens of kilo-ohms.
(c) Cables exposed to changing mechanical stresses, due to vibrations, temperature changes, or handling, can also produce noise voltages.
(d) Very-high-impedance circuits (more than about 10⁷–10⁸ Ω) can suffer from erratic current leakages due to moisture and contaminants on insulators.
(e) Cables and electronic devices operating with low-level or high-precision signals should always be shielded, especially if impedance levels are high.
(f) High-impedance devices and cables should not be exposed to vibrations, or stresses due to other causes.
(g) The use of very-high-impedance circuits should be avoided, if possible.
(h) Circuits that are vulnerable to leakage problems should be kept extremely clean and dry.

11.5 Damage and electromagnetic interference caused by electrostatic discharge (ESD)

(a) Static electricity can be a major threat to electronic devices.
(b) For example, walking across a carpet under dry conditions can result in static voltages of 35 000 V.
(c) Numerous components can be damaged by voltages of less than 2000 V, and many are vulnerable to voltages of less than 100 V.
(d) Semiconductor devices used in high-frequency RF (and especially microwave) applications, and ones based on MOS technology, are particularly vulnerable.
(e) Static discharges from the body involving voltages below 3500 V cannot be felt, so that damaging ESD events may go unnoticed.
(f) Electrostatic discharges to operating digital equipment can also create electromagnetic interference, which may cause such equipment to temporarily malfunction.
(g) Some basic methods of preventing ESD damage are: (i) short the leads of ESD-sensitive devices until they are ready for use, (ii) do not touch such devices unless necessary, (iii) wear grounded conductive wrist straps while working with such devices.
(h) Damage to microwave equipment caused by static charges on cables can be prevented by connecting the cables to a load before attaching them to the equipment.

11.6 Protecting electronics from excessive voltages

(a) Electronic devices (e.g. input transistors of preamplifiers) are often damaged by overvoltages, owing to, e.g., static electricity, power-line surges, and high-voltage flashover.
(b) A number of methods have been developed to protect electronics from such events, involving the use of crowbars (e.g. spark gaps), clamps (e.g. metal-oxide varistors), and devices with high series impedances for common-mode voltages (e.g. optoisolators).
11.7 Power electronics

(a) High-power electronic devices and power supplies often suffer from reliability problems.
(b) The design and construction of such equipment should generally be left to professionals who specialize in this activity.
(c) It usually pays to buy the best equipment available.
(d) Aluminium electrolytic capacitors in power supplies should be replaced regularly.

11.8 Some trouble-prone components

11.8.1 Switches and related devices

(a) After connectors and cables, mechanical switches (and relays and thermostats) are the most unreliable electronic components.
(b) Switches used with low currents and voltages should have gold contacts, whereas those used with high currents and voltages should have silver ones.
(c) If inductive loads are being switched, arcing at the switch contacts, and consequent degradation, can be a major problem. This can be prevented by placing a diode (for d.c. currents) or an RC snubber (for a.c. currents) across the load.
(d) Some more reliable alternatives to ordinary mechanical switches, relays, and thermostats are magnetic reed switches and relays, solid-state relays, Hall-effect proximity switches, and electronic temperature controllers.

11.8.2 Potentiometers

(a) After connectors, cables, and mechanical switches, potentiometers are the most unreliable electronic components. Contact noise is a common problem.
(b) Noisy potentiometers should be replaced. Cleaning is usually not an effective long-term solution.

11.8.3 Fans

(a) Cooling-fan failure is a common problem in electronic equipment.
(b) Small fans typically last three to four years, but it is possible to obtain ones that last more than 20 years.

11.8.4 Aluminium electrolytic capacitors

(a) Aluminium electrolytic capacitors are, in most respects, the most unreliable capacitors – being short-lived, noisy, and prone to drift.
(b) Elevated storage and operating temperatures dramatically reduce the lifespan of electrolytics.
(c) Solid tantalum capacitors are a more reliable alternative to aluminium electrolytic types.
(d) Both aluminium electrolytic and solid tantalum capacitors can explode if subjected to electrical abuse.

11.8.5 Batteries

(a) Batteries should be kept cool.
(b) Very useful primary (non-rechargeable) batteries for powering seldom-used devices are the lithium–manganese and lithium iron-disulphide types, which have exceptional (ten-year) shelf lives, and a wide range of operating and storage temperatures.
(c) Secondary (rechargeable) batteries have limited lifetimes, which depend on the number of charge/discharge cycles, depth of discharge, operating temperature, and other factors.
(d) Gel-type lead–acid batteries are considered to be highly reliable secondary batteries.
(e) If the ability of a battery to provide power for a given length of time is essential, it is best to use a brand-new primary battery, rather than a secondary one.
(f) Lithium batteries have been known to explode or catch fire if abused, and lead–acid types can explode if subjected to excessive current draw.

11.8.6 Low-frequency signal transformers

(a) Low-frequency signal transformers are susceptible to interference from ambient magnetic fields.
(b) Vibrations can cause microphonic noise in signal transformers, especially if their cores or magnetic shields have become magnetized.
(c) It is possible to inadvertently magnetize the core of a transformer by measuring its winding resistance with an ohmmeter.
References

1. B. Whitlock, Understanding, Finding and Eliminating Ground Loops in Audio and Video Systems, Jensen Transformers, Inc. www.jensen-transformers.com
2. B. Whitlock, in Handbook for Sound Engineers, 3rd edn, G. M. Ballou (ed.), Butterworth-Heinemann, 2002.
3. P. Horowitz and W. Hill, The Art of Electronics, 2nd edn, Cambridge University Press, 1989.
4. Application Note G-2, Grounding and Shielding in Electrochemical Instrumentation – Some Basic Considerations, Princeton Applied Research, 801 S. Illinois Avenue, Oak Ridge, Tennessee, USA. www.princetonappliedresearch.com
5. R. H. Lee, Industrial and Commercial Power Systems Technical Conference, IEEE, 1985, pp. 7–9.
6. H. W. Denny, Grounding for the Control of EMI, Don White Consultants, Inc., 1983.
7. R. S. Germain, in Experimental Techniques in Condensed Matter Physics at Low Temperatures, R. C. Richardson and E. N. Smith (eds.), Addison-Wesley, 1988.
8. T. Williams, EMC for Product Designers, 2nd edn, Butterworth-Heinemann, 1996.
9. T. Waldron and K. Armstrong, Bonding Cable Shields at Both Ends to Reduce Noise, EMC Information Centre. www.compliance-club.com/pdf/Bondingcableshields.pdf
10. H. W. Ott, Noise Reduction Techniques in Electronic Systems, 2nd edn, John Wiley and Sons, 1988.
11. R. Morrison, Instrumentation Fundamentals and Applications, John Wiley and Sons, 1984.
12. R. Morrison, Solving Interference Problems in Electronics, John Wiley and Sons, 1995.
13. E. Thornton, Electrical Interference and Protection, Ellis Horwood, 1991.
14. A. Basanskaya, IEEE Spectrum 42, No. 10, 18 (2005).
15. JDSU, 430 N. McCarthy Blvd., Milpitas, CA, USA. www.jdsu.com
16. M. Viola and D. Long, 12th Symposium on Fusion Engineering, IEEE, 1987, Vol. 1, pp. 456–459.
17. P. M. Bellan, Rev. Sci. Instrum. 78, 065104 (2007).
18. B. Bergeron, in The ARRL RFI Book, E. Hare (ed.), The American Radio Relay League, 1999.
19. M. Mardiguian, EMI Troubleshooting Techniques, McGraw-Hill, 2000.
20. J. J. Carr, The Technician’s EMI Handbook: Clues and Solutions, Butterworth-Heinemann, 2000.
21. R. H. Alderson, Design of the Electron Microscope Laboratory, North-Holland, 1975.
22. D. A. Weston, Electromagnetic Compatibility: Principles and Applications, 2nd edn, Marcel Dekker, 2001.
23. J. Hecht, The Laser Guidebook, 2nd edn, TAB Books, 1992.
24. P. C. D. Hobbs, Building Electro-Optical Systems: Making it all Work, John Wiley and Sons, 2000.
25. J. Lee, in The ARRL RFI Book, E. Hare (ed.), The American Radio Relay League, 1999.
26. Low Level Measurements Handbook, 6th edn, Keithley Instruments, Inc., 28775 Aurora Road, Cleveland, Ohio, USA, 2004. www.keithley.com
27. The ARRL RFI Book, E. Hare (ed.), The American Radio Relay League, 1999.
28. L. T. Gnecco, The Design of Shielded Enclosures: Cost-Effective Methods to Prevent EMI, Butterworth-Heinemann, 2000.
29. J. Moell and B. Schetgen, in The ARRL RFI Book, E. Hare (ed.), The American Radio Relay League, 1999.
30. R. A. Pease, Troubleshooting Analog Circuits, Elsevier, 1991.
31. D. J. Cougias, E. L. Heiberger, K. Koop, and L. O’Connell (eds.), The Backup Book: Disaster Recovery From Desktop to Data Center, Schaser-Vartan Books, 2003.
32. S. Letzter and N. Webster, IEEE Spectrum, p. 67, August 1970.
33. A. J. Mager, IEEE Trans. Magnetics MAG-6, 67 (1970).
34. P. L. Martin and W. G. Dunbar, in Electronic Failure Analysis Handbook, P. L. Martin (ed.), McGraw-Hill, 1999.
35. M. S. Naidu and V. Kamaraju, High Voltage Engineering, 2nd edn, McGraw-Hill, 1996.
36. J. H. Moore, C. C. Davis, M. A. Coplan, and S. Greer, Building Scientific Apparatus, 3rd edn, Westview Press, 2002.
37. L. J. Frisco, Electro Tech. 68, 110, August 1961.
411
References
38. M. Lautenschlager, NETA World, Fall 1998. 39. D. Brandon and W. D. Kaplan, Joining Processes: an Introduction, John Wiley and Sons, 1997. 40. J. Boucher, in The ARRL RFI Book, E. Hare (ed.), The American Radio Relay League, 1999. 41. W. H. Kohl, Handbook of Materials and Techniques for Vacuum Devices, AIP Press, 1995. 42. J. F. O’Hanlon, in Encyclopedia of Applied Physics, Vol. 23, G. L. Trigg (ed.), WileyVCH Verlag, 1998. 43. C. E. Mercier and W. E. Elliott, Electrical Mfg. 60, 140, November 1957. 44. Dow Corning Corporation, Corporate Center, PO Box 994, MIDLAND MI, USA. www.dowcorning.com 45. DuPont (E. I. du Pont de Nemours and Company). www2.dupont.com 46. K. M. Heeger, S. R. Elliott, R. G. H. Robertson et al., IEEE Trans. Nucl. Sci. 47, No. 6, 1829 (2000). 47. R. T. Harrold, in Engineering Dielectrics, Vol. 1, Corona Measurement and Interpretation, R. Bartnikas and E. J. McMahon (eds.), American Society for Testing and Materials, 1979. 48. F. H. Kreuger, Partial Discharge Detection in High-Voltage Equipment, Butterworths, 1989. 49. B. H. Brown, R. H. Smallwood, D. C. Barber, P. V. Lawford, and D. R. Hose, Medical Physics and Biomedical Engineering, Taylor and Francis, 1999. 50. J. M. Kolyer and D. E. Watson, ESD from A to Z: Electrostatic Discharge Control for Electronics, 2nd edn, Kluwer Academic Publishers, 1999. 51. Electrostatic Discharge Association, 7900 Turin Road, Building 3, Rome, NY, USA. www.esda.org 52. J. E. Vinson, J. C. Bernier, G. D. Croft, and J. J. Liou, ESD Design and Analysis Handbook, Kluwer Academic Publishers, 2003. 53. D. Galler, R. A. Blanchard, D. Glover et al. in Electronic Failure Analysis Handbook, P. L. Martin (ed.), McGraw-Hill, 1999. 54. R. B. Standler, Protection of Electronic Circuits from Overvoltages, Dover, 2002. 55. D. Gerstle, Reliability, Maintainability, & Supportability 8, No. 2, 2 (2004) 56. D. Wolff and H. Pfeffer, EPAC96 – Fifth European particle accelerator conference, Vol. 3, S. Myers, A. Pacheco, R. 
Pascual, Ch. Petit-Jean-Genaz, and J. Poole (eds.), Institute of Physics Publishing, 1997, pp. 2337–2339. 57. P. L. Martin, R. Blanchard, W. Denson et al. in Electronic Failure Analysis Handbook, P. L. Martin (ed.), McGraw-Hill, 1999. 58. M. Antler, in Electrical Contacts – Principles and Applications, P. G. Slade (ed.), Marcel Dekker, Inc., 1999. 59. R. S. Mroczkowski, in Electronic Connector Handbook, R. S. Mroczkowski (ed.), McGraw-Hill, 1998. 60. P. D. T. O’Connor, Practical Reliability Engineering, 4th edn, Wiley 2002. 61. G. Ballou, in Handbook for Sound Engineers, 3rd edn, G. M. Ballou (ed.), ButterworthHeinemann, 2002.
412
Electronic systems
62. S. Suhring, Proceedings of the 2003 Particle Accelerator Conference (IEEE Cat. No. 03CH37423), Part Vol. 1, pp. 625–629, IEEE, 2003. 63. H. Watanabe, J. Electron. Eng. 31, 48 (July 1994). 64. Long Life Fan; SanyoDenki, 1–15-1, Kita-otsuka, Toshima-ku, Tokyo, Japan. www. sanyodenki.co.jp 65. M. Turner, Electronic Packaging Production 35, 44 (1995). 66. R. Wagner, in Valve-regulated lead-acid batteries, D. A. J. Rand, P. T. Moseley, J. Garche, and C. D. Parker (eds.), Elsevier, 2004. 67. D. Davis and E. Patronis, Jr., Sound System Engineering, 3rd edn, Elsevier, 2006.
12 Interconnecting, wiring, and cabling for electronics
12.1 Introduction

Among the most common problems encountered when dealing with electronic items are those caused by faulty contacts, connectors, wires, and cables. For example, bad solder joints are among the most frequent causes of failures in electronic devices in general [1]. In any given electronic system, the most unreliable components will be its connectors and cables [2]. (Perhaps somewhat surprisingly, the number of connector problems that occurred during the Apollo space program was greater than that of all other problems combined [3].) More specifically, faulty connectors and cables are an important source of trouble in experimental work, in the form of unreliable measurements and wasted time [4,5]. Information on this subject is widely dispersed. The purpose of this chapter is to bring together knowledge about these mundane but troublesome items from some of the various scattered sources.

Some contact, connector, wire, and cable problems are:
(a) excessively high or low resistances, or complete open or short circuits,
(b) noisy contacts (perhaps in the form of 1/f noise),
(c) production and reception of electromagnetic interference (including crosstalk between nearby wires and cables),
(d) leakage currents over insulators in high-impedance circuits,
(e) noise produced as a result of cable vibrations in high-impedance circuits ("microphonics"),
(f) nonlinear I–V characteristics of poor-quality contacts (i.e. they can become non-ohmic¹ – this may lead to interference with sensitive measurements due to rectification of electromagnetic noise in the environment, and also signal distortion),
(g) reflections on cables at radio (and particularly microwave) frequencies, owing to impedance mismatches,
(h) arcing and corona discharge at contacts, and in cables and connectors, at high voltages,
(i) overheating and failure in high-current and power circuits resulting from poor contacts, and
(j) thermoelectric EMFs in low-level d.c. voltage measurements.

¹ For instance, the oxide film that forms on copper is a semiconductor. Hence, the metal–semiconductor junctions that can exist between poorly bonded copper wires can act as rectifiers. In effect, they function as Schottky diodes.
The first and second items in the list are generally the most troublesome. Many of the above problems can be intermittent – sometimes requiring a huge amount of time to track down and repair. These include: high resistances and complete open circuits (especially at contacts), electromagnetic interference, leakage currents, arcing and corona, microphonics, and short circuits. Problems due to poor contacts in general can be insidious, and it may take several years after the contacts are made for them to become apparent. For those circuits carrying analog signals, a typical open- or short-circuit failure sequence might be (a) an increase in the level of noise produced by the item, (b) the appearance of intermittent faults, and eventually (c) the formation of a permanent fault.

Difficulties with contacts, connectors, wires, and cables stem not just from their relatively low reliability, but also from the very large numbers of them frequently present in electronic systems. Coaxial patch cords² in particular tend to be deployed by physicists in great (and perhaps sometimes excessive) abundance in their experimental setups. The most helpful general advice that can be given is to minimize the use of such items as much as possible, as far as is reasonably consistent with other requirements such as the need for modularity. Furthermore, when these items are being used under extreme electrical conditions – particularly those involving low-level wide-bandwidth signals, high voltages, high currents, high impedances, and/or high frequencies – it becomes very important to maintain good standards of design, workmanship, and care.
12.2 Permanent or semi-permanent electrical contacts

12.2.1 Soldering

12.2.1.1 Modes of failure

Common electrical manifestations of poorly made or degraded solder joints are permanent or intermittent high resistances or open circuits, and noise (possibly with a 1/f power spectral density). Cracked solder joints are often responsible for intermittent problems. Sometimes the characteristics of solder joints will change with variations in mechanical stress (i.e. they will act as pressure-sensitive switches or transducers). Bad solder joints can also be non-ohmic (i.e. have nonlinear I–V characteristics) [6]. In the presence of radio-frequency energy, this can result in electromagnetic interference, even in low-frequency apparatus (see page 372). Non-ohmic contacts in general can be a significant source of errors in low-level electrical measurements [7]. During soldering, an electrical connection may inadvertently receive no solder (a distinct possibility in circuits containing many connections). The contact may nevertheless function well at first, only to fail later on (perhaps after months or years) as oxidation and/or corrosion forms on the metal surfaces [2].
² A patch cord is a length of cable with a male connector at each end.
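The rectifying behavior of a non-ohmic joint can be shown numerically. The following sketch (not from the original text; all parameter values are arbitrary, chosen only to make the effect visible) models a poor contact as a Schottky-like diode and drives it with a small a.c. interference voltage:

```python
import numpy as np

# Illustrative sketch: why a non-ohmic joint rectifies interference.
# Model the poor joint as a diode-like element, I = I_s*(exp(V/V_T) - 1),
# and compare it with a purely ohmic contact. Parameter values are
# arbitrary, chosen only to demonstrate the effect.
I_s = 1e-6      # saturation current (A), assumed
V_T = 0.026     # thermal voltage at room temperature (V)

t = np.linspace(0.0, 1.0, 10000, endpoint=False)
v_ac = 0.01 * np.sin(2.0 * np.pi * 50.0 * t)   # 10 mV interference signal

i_nonlinear = I_s * np.expm1(v_ac / V_T)       # diode-like (non-ohmic) contact
i_ohmic = v_ac / 100.0                         # 100-ohm ohmic contact

# The ohmic contact averages to ~zero; the nonlinear one leaves a d.c. offset.
print(f"mean ohmic current:     {np.mean(i_ohmic):.3e} A")
print(f"mean nonlinear current: {np.mean(i_nonlinear):.3e} A")
```

With an ohmic contact the interference averages to zero, but the exponential I–V curve leaves a net d.c. current – the mechanism by which environmental RF can shift low-level d.c. measurements.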
12.2.1.2 Training requirements

Environmental conditions (e.g. temperature) and mechanical stresses are important in determining the reliability of solder joints. However, the most troublesome and common forms of fault are often caused by solder joints that were not made properly in the first place. Such contacts are likely to become worse with time, possibly eventually resulting in complete failure (i.e. open circuits). Problems with poorly made solder joints may not show up right away – it may take several years for them to become evident.

The ability to solder well is an important skill in a laboratory, and one that requires some effort to acquire. In scientific research, soldering is often treated as a task that requires little or no proficiency – an attitude that probably leads to many reliability problems. In industry, soldering is a subject that is taken very seriously – books and journal articles are written about it, and one can take courses on it. For workers in research, who solder from time to time as part of their everyday activities, it is worth spending some time learning the technique, which is not a difficult one to master. High standards of workmanship for related operations, such as the removal of insulation from wires (without causing damage to the underlying conductors), are also important.

Soldering belongs to a class of skills that are best acquired by observing the relevant operations being carried out. This can be done by watching a skilled solderer (preferably an electronics technician or other professional) or by looking at a soldering instruction video. A number of the latter can be found online at Refs. [8] and [9]. It is worth watching several such recordings, as they often contain complementary information. A particularly useful site, which contains a video, comprehensive written information, and many useful links, can be found at Ref. [10]. Other helpful written material is provided in Refs. [11], [12], [13], and [14].
12.2.1.3 Some important points regarding the creation of reliable joints

Importance of cleaning prior to soldering
Cleanliness is a critical factor in soldering. It is essential to ensure that the items to be soldered, the soldering iron tip, tools used to form wires or component leads, and the solder are free of contaminants [12]. These include, for instance, lubricants and the like (especially silicones – see Section 3.8.3), thick oxide layers, corrosion, tape residues, and burnt flux.
Weakness of solder and the need for mechanical support
One subject that is often not given sufficient attention in discussions of soldering is the mechanical weakness of solder. Solder joints in general are very weak, and have poor fatigue properties. (Static stresses can be a problem as well, especially if the operating temperature of the joint is high, but dynamic stresses are more important, because of their ability to cause fatigue failure.) Next to improper soldering, faults caused by fatigue failure are probably the most problematic. The boundary between a wire and the solder that is being used to connect it to something is a particularly vulnerable location, because of the large stress concentrations that exist in this region.
For these reasons, solder must not be required to support any significant mechanical loads. Mechanical support (i.e. some form of stress relief) must come from elsewhere [14]. For example, wires that are to be connected (i.e. spliced) using solder should be twisted together before the application of the solder. Connectors that are being attached to cables must be provided, without exception, with a cable clamp, so that stresses on the cable are taken by the clamp, and not by the solder joints in the connector. Connections between wires and printed circuit boards should be made using swage-solder terminals, and not via large printed circuit pads on the board [2]. Heavy electronic components (larger than about 0.1 kg), such as transformers or large electrolytic capacitors, should not be mounted on printed circuit boards. If it is not possible to provide a stress relief for a solder joint, then the joint must be reinforced, possibly with epoxy.

Vibrations of the conductors leading up to the solder joint can also be an important cause of fatigue failure. A possible source is a nearby vacuum pump that is somehow mechanically coupled to the items connected by the solder joint (see Section 3.5.2 for a discussion of vibration problems). Another cause of fatigue failure is cyclic stresses produced by thermal expansion and contraction of items that are connected to the solder joint. This can happen particularly in apparatus that gets very hot, and which is turned on and off frequently. Another possible location for this phenomenon is in cryogenic apparatus, which is also cooled and warmed over a large temperature range on a regular basis.

Solder joints generally degrade with time, even if they are kept in a benign environment and are not performing any electrical function. This degradation, which proceeds slowly at room temperature, involves coarsening of the grain structure of the solder, and the growth of intermetallic compounds at the solder/substrate interface [15].
However, elevated temperatures will greatly increase the rate of both effects. At high temperatures, a solder joint that is under stress can also undergo creep (slow, ongoing deformation). Hence, in places where temperatures are high (e.g. near furnaces, or on the leads of components that can get hot, such as power resistors) one may find that solder connections weaken and crack after a relatively short time. This can also happen if such joints are employed in circuits that are required to carry large currents [16]. (See the discussion on page 421.) Cyclic mechanical stresses will also accelerate degradation by grain coarsening.
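The strong temperature dependence of such aging processes can be illustrated with a generic Arrhenius model. The sketch below is illustrative only – the activation energy is an assumed round value of the right order for solid-state diffusion, not a figure taken from this chapter:

```python
import math

# Generic Arrhenius sketch of thermally activated aging (grain coarsening,
# intermetallic growth). The activation energy E_A is an assumed
# illustrative value, not one given in the text.
K_B = 8.617e-5   # Boltzmann constant, eV/K
E_A = 0.8        # assumed activation energy, eV

def acceleration_factor(t_use_c, t_hot_c, e_a=E_A):
    """Aging rate at t_hot_c relative to t_use_c, using exp(-Ea/kT)."""
    t_use, t_hot = t_use_c + 273.15, t_hot_c + 273.15
    return math.exp((e_a / K_B) * (1.0 / t_use - 1.0 / t_hot))

# Aging near a hot component (100 °C) versus room temperature (25 °C):
print(f"~{acceleration_factor(25.0, 100.0):.0f}x faster at 100 °C than at 25 °C")
```

Even a modest temperature rise multiplies the aging rate by orders of magnitude, which is why joints near hot components fail so much sooner.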
Selection of solder
The solders used on electrical and electronic devices must be electronic-grade products. One should never use solders or fluxes intended for plumbing, general workshop activities, or for soldering aluminum. (The fluxes used in these non-electronic applications are usually corrosive.) The most useful alloys are the 63–37 Sn–Pb or 62–36–2 Sn–Pb–Ag eutectic solders. These wet copper readily and flow easily when molten. Furthermore, since they go directly from the molten state to the solid state during cooling (and do not have a pasty intermediate phase), they are more resistant to the formation of undesirable "disturbed solder joints" than other Sn/Pb-based alloys [12]. Their relatively low melting points are also useful in reducing the heating of sensitive electronic components. The phase diagram of the tin–lead system is shown in Fig. 12.1 [17].
Fig. 12.1 Phase diagram of the tin–lead system, showing properties for different compositions and temperatures. [Figure: temperature (°C) versus weight per cent lead, with regions for liquid Pb+Sn, solid+liquid, solid Pb, solid Sn, and solid Pb+Sn, and the eutectic composition marked.] The eutectic alloy is the only one that does not go through a "pasty" phase (a mixture of solid and liquid components) during solidification. Data are taken from Ref. [17].

The Sn–Pb–Ag eutectic has the advantage over the Sn–Pb one of having a slightly lower melting point (178 °C vs. 183 °C), and also has greater strength and fatigue resistance. The 60–40 Sn–Pb "near-eutectic" alloy is an acceptable alternative to the eutectic types, but is more likely to form disturbed joints [12]. In general, no matter what the elemental composition of the solder happens to be, eutectic alloys are usually preferred over non-eutectic ones.

There is a general move in industry (encouraged by legislation) towards the use of "lead-free solders," in order to avoid the perceived environmental impact of using lead. It seems likely that solders that contain lead will eventually be completely eliminated from most commercial electronics. The most favored lead-free solder alloys appear to be those containing tin, silver, and copper (one example is 96.5–3–0.5 Sn–Ag–Cu). However, there are a number of problems associated with the most common lead-free solders. For example, they do not wet copper as well as the usual Sn–Pb alloys [15], and require higher soldering temperatures. Another problem with lead-free solders is that even a good-quality joint has a dull or frosty appearance, rather than the shiny luster that is normally associated with high-quality Sn–Pb eutectic solder joints. This means that it is more difficult to use the appearance of a lead-free solder joint as an indicator of quality. Solders with a high proportion of tin (which includes the most common lead-free types) are susceptible to cracking if they are used in low-temperature apparatus [5].
This is because such solders become very brittle under cryogenic conditions, and fail when they are thermally cycled between low temperatures and room temperature (see page 161).
It is recommended that only the standard eutectic or near-eutectic Sn–Pb or Sn–Pb–Ag solders be used in general laboratory applications. Laboratory electronics are, of course, not intended for sale, and are therefore likely to be exempted from lead-free soldering requirements by the regulatory authorities for the foreseeable future. Useful information about lead-free solders can be found in Ref. [15].

The fluxes that are acceptable for electronic soldering include rosin (sometimes designated "R"), mildly activated rosin ("RMA"), and fully activated rosin ("RA"). Even slightly tarnished copper is generally not easily wetted by solder containing ordinary rosin flux, and with this one often has to use at least a mildly activated type. In the case of heavily tarnished copper, it may be necessary to use fully activated rosin flux. Mildly activated and fully activated fluxes are widely available as the cores of the wire solders commonly used for hand soldering. Water-soluble fluxes, which are different from the rosin-based types, tend to leave corrosive products on circuitry, and should generally be avoided [2]. If enameled single-strand wires (such as "magnet wire") are being soldered, even RA and RMA fluxes are too harsh, as they can attack the insulation and (in the case of copper) the conductor. For such applications, zero-activation "no-clean" fluxes should be used (see pages 470–471).

The inadvertent mixing of different solder alloys can cause reliability problems arising from fatigue failure. Such mixing may, for example, be caused by using the same soldering iron on joints soldered with different alloys, or possibly by reworking a defective solder joint using a different solder. The soldering of components whose leads have been coated with another type of solder is yet another way in which this can happen. A good example of where problems can occur involves the combination of lead- and indium-containing alloys, such as 50–50 In–Sn and 60–40 Sn–Pb solders [18].
Combining lead-containing solders with lead-free ones can also lead to difficulties.
12.2.1.4 Electrostatic discharge (ESD) issues

If soldering is to be carried out on electronic devices that are very sensitive to ESD, one should ground the tip of the soldering iron [19]. Although "ESD-safe" soldering irons can be obtained commercially, their tip-grounding arrangements may degrade over time (perhaps because of the oxidation that occurs on the various components that make up the iron). It is best if a grounded clip is attached directly to the tip. The continuity provided by tip-grounding arrangements should be tested periodically. The resistance between the soldering iron tip and ground should be less than 5.0 Ω [20]. Also, the appearance of large power-supply voltages on the tip of the iron (due to current leakage, and possibly temperature-regulation switching transients) is a potential hazard to sensitive electronics, and grounding the tip will reduce this. Irons that are heated with low-voltage power are to be preferred over ones that run directly from the mains. Potential differences between the tip of a hot iron and a grounded electronic device being soldered should be no larger than 2 mV [20]. ESD issues are discussed further in Section 11.5.
12.2.1.5 Removal of flux

In the presence of moisture, electrical problems such as current leakage can be caused by residual flux on a soldered assembly (e.g. a circuit board), if the flux contains activators [15]. Corrosion caused by residual active flux is also a possibility. The eventual result of this may be permanent or intermittent open circuits. When temperatures are below about 50 °C, pure (un-activated) rosin flux residues tend to protect circuit boards from moisture and dirt. However, when temperatures exceed 50 °C (which is quite possible inside a hot electronics enclosure), rosin becomes sticky and tends to attract particulate matter. As mentioned on page 66, dust can become slightly conductive in the presence of moisture.

Hence, the removal of flux residues from soldered assemblies after soldering is an important issue. This should be done in those cases where high-performance circuits are involved (i.e. operating in regimes of very high impedance, high voltage, low noise, high frequency, etc.), where small leakage currents might affect the performance. It is also important if active fluxes, such as activated rosin (RA), are being used [2]. Flux removal may not be necessary if general-purpose (non-high-performance) soldered assemblies are involved, and if rosin (R) or mildly activated rosin (RMA) fluxes are employed in the soldering. If there is any evidence that connector contacts have been contaminated with flux, then this must be removed under all circumstances. Flux removal should be done as soon as possible after soldering (e.g. within 24 h), since flux becomes much more difficult to remove as it ages.
12.2.1.6 Some special soldering problems

Dissolution of thin conductors (especially gold) by solder
When fine wires and thin films of common conductors are soldered with solder alloys containing tin, the rate at which these are dissolved (or "leached") can be so large that the conductor can easily disappear before a connection is made. For wires, a more insidious problem is that they can be severely weakened during soldering. Among the more commonly used conductors, leaching effects are worst for gold, somewhat less troublesome for silver, and cause the fewest problems for copper – see Table 12.1. Despite the relatively low dissolution rate of copper compared with that of gold, the soldering of ultrafine (<50 µm diameter) copper wires can be problematic if one tries to solder these at normal soldering-iron temperatures (400 °C).

The most generally useful way of dealing with this problem is to carry out the soldering as quickly as possible, and to use the lowest practical soldering temperature. If a gold wire is to be bonded to a thick copper wire (a common procedure in certain types of measurement), another useful approach is as follows. The copper wire is pretinned with ordinary solder, and a length of the gold wire is wrapped around it. Heat is then applied to the copper wire slightly above the pretinned region. When the solder starts to melt, the soldering iron is removed.

Another, very general, method for dealing with gold or silver is to use an indium-based solder, such as pure indium or (preferably) a stronger alloy, such as 97–3 In–Ag [5] or
Table 12.1 Rates at which the thicknesses of conductor materials diminish as a result of being dissolved by molten solder. Data are provided for solder at two temperatures (from Ref. [15]).

Dissolution rates in 60–40 Sn–Pb solder (µm/s)

Material    at 215 °C    at 250 °C
Au          1.7          5.25
Ag          0.75         1.6
Cu          0.075        0.15
Pd          0.025        0.075
Ni, Pt      0.01         0.01
70–30 In–Pb. Flux is neither desirable nor necessary when soldering gold or silver with indium. Indium-based solders do not result in good joints when used with copper. Brittle indium/copper intermetallic compounds form, which may lead to failure in some cases if the joint is subjected to thermal shocks or thermal cycling [21]. Whether this is a problem in practice depends on the application. If it is, the difficulty can be avoided by electroplating (or electroless plating) the copper with 1 µm or more of nickel, and soldering to the nickel. (Nickel-plated copper wires are available commercially.)

For silver and copper conductors, one approach that may be helpful is to use commercially available solders that already have these elements in solution at saturation levels. Examples are Sn–Pb–Ag alloys (for soldering silver conductors) and Sn–Pb–Cu materials (for copper ones). For both silver- and copper-containing solders, alloying levels of about 2% are sufficient. In the latter case, such solders are typically offered for the purpose of extending the life of copper soldering-iron tips. Finally, small-diameter copper wires are available commercially that are designed to resist dissolution during soldering. Such wires have a layer (possibly nickel) embedded within the copper that undergoes little or no leaching in ordinary Sn–Pb solder [22]. The application by electroplating of platinum, palladium, or nickel onto conductors that are sensitive to leaching is a general solution to this problem [15].
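To get a feel for the leaching rates in Table 12.1, one can estimate how quickly a fine wire is consumed. This is an illustrative sketch (the wire diameters are arbitrary examples; the rates are those from the table):

```python
# Rough illustration, using the dissolution rates from Table 12.1, of how
# quickly fine wires are consumed by molten 60-40 Sn-Pb solder at 250 °C.
# The table gives the rate at which a surface recedes, so a wire of
# diameter d disappears in roughly (d/2) / rate.
RATES_250C = {"Au": 5.25, "Ag": 1.6, "Cu": 0.15}  # µm/s (Table 12.1)

def time_to_dissolve(diameter_um, rate_um_per_s):
    """Seconds for molten solder to consume a wire of the given diameter."""
    return (diameter_um / 2.0) / rate_um_per_s

# A 25 µm gold wire versus a 50 µm copper wire (example sizes):
for metal, d in [("Au", 25.0), ("Cu", 50.0)]:
    t = time_to_dissolve(d, RATES_250C[metal])
    print(f"{metal} wire, {d:.0f} µm diameter: consumed in ~{t:.0f} s at 250 °C")
```

A fine gold wire survives only seconds in molten solder, while copper lasts minutes – which is why speed and a low iron temperature matter so much for gold.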
Gold embrittlement
Another problem may arise as a result of using ordinary tin–lead solders on gold surfaces. Not only does gold dissolve very readily in such solders, but it tends to react with them to form brittle intermetallic compounds at the boundary. This can cause such joints to crack under even small stresses [15]. Intermittent open circuits are a possible consequence of soldering to gold [23]. Ordinarily, one is most likely to solder gold in the form of electroplated layers on electronic component leads and connector pins. If the gold layers are not too thick, they will dissolve completely into the solder, and no difficulty will arise. Gold thicknesses of between 0.13 µm and 0.38 µm, deposited onto solderable plated-nickel
base layers, are normally recommended [24]. However, for certain very high-reliability applications, direct soldering to gold surfaces of any thickness is expressly forbidden by the relevant standards, unless a preliminary step to remove the gold (called “de-golding”) is undertaken [14]. One must occasionally make contact to gold wires (e.g. on cryogenic thermometers), and the use of ordinary tin–lead solder may be unavoidable in such cases. However, this need not cause problems if stress on the joints is very small, and they can be inspected from time to time for cracks.
Creation of solder contacts in high-voltage circuits
In high-voltage circuits, the possibility of corona discharge exists in places where conductors have a small radius of curvature. For this reason, solder joints at a high potential in such circuits should comprise smooth balls of solder with large radii. Special attention must be given to preventing the formation of discontinuities (e.g. sharp points and edges) [14]. The 60–40 Sn–Pb near-eutectic solder (rather than the 63–37 Sn–Pb eutectic) is a good choice for this purpose, because the somewhat more pasty nature of the former makes it easier to create rounded surfaces. The presence of inclusions in the joint should also be avoided. Illustrations and commentaries on some of these considerations can be found in Ref. [25]. The complete removal of flux residues from soldered high-voltage circuits is also very important [26]. Arcing and corona problems in high-voltage devices are discussed further in Section 11.3.
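The sensitivity to radius of curvature can be seen from a back-of-envelope estimate: for an isolated spherical conductor at potential V, the surface field is roughly E = V/r. The sketch below is illustrative only (the ~3 MV/m breakdown field for air and the geometry are assumed round numbers, not values from the text):

```python
# Back-of-envelope sketch of why sharp solder points matter at high voltage:
# the surface field of a small conducting sphere at potential V scales as
# E = V/r, so halving the radius of curvature doubles the field. The
# breakdown threshold below is an assumed round number for air at
# atmospheric pressure.
E_BREAKDOWN = 3e6  # V/m, approximate corona/breakdown threshold in air

def surface_field(volts, radius_m):
    """Field at the surface of an isolated sphere, E = V/r (idealized)."""
    return volts / radius_m

# Example: a 10 kV electrode with various radii of curvature.
for r_mm in (0.1, 1.0, 5.0):
    e = surface_field(10e3, r_mm * 1e-3)
    status = "corona likely" if e > E_BREAKDOWN else "below threshold"
    print(f"r = {r_mm:4.1f} mm: E ≈ {e/1e6:6.1f} MV/m ({status})")
```

In this idealized picture, a sharp 0.1 mm point at 10 kV is far above the breakdown field, while a smooth 5 mm ball of solder stays below it.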
Use of solder in high-temperature environments or high-current circuits
As mentioned on page 416, solder joints that are exposed to high temperatures (i.e. approaching the melting point of the solder) for long times will undergo degradation, which can lead eventually to the cracking and failure of the joint. This is especially relevant if temperature cycling is present, which generally leads to fatigue failure. Because of the relatively high electrical resistivity and poor thermal conductivity of solder (compared with copper), high temperatures can easily arise if the joint is required to carry large currents. For example, one might encounter this situation in arranging for the provision of current to a large electromagnet. The severity of such problems is increased if the environmental temperature is already high. The mechanical strength of ordinary solder joints degrades rapidly if their temperature is above about 93 °C [27]. In order to achieve reliable service for long periods (e.g. five years), solder-joint temperatures should not be permitted to exceed 60 °C.

If a solder joint needs to withstand high temperatures or carry large currents, the use of "high-melting-point solders" can help. However, these are generally more difficult to apply than the usual tin–lead eutectic or near-eutectic solders, and undesirable "cold solder joints" can occur frequently. Two such alloys are 93.5–5–1.5 Pb–Sn–Ag and 96–4 Sn–Ag, with respective melting points of 296–301 °C and 221–229 °C. In either case, it may be worthwhile to consider other joining methods that can provide much better performance under such conditions, such as crimping, welding, or brazing (see below).
Generally, soft-soldered connections should not be required to carry currents of more than 50 A [28].
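A rough estimate shows why large currents are demanding on solder joints. The resistivities and joint geometry below are assumed typical values, not figures from this chapter:

```python
# Quick sketch of why large currents heat solder joints: dissipation is
# P = I^2 * R, and solder's resistivity is roughly ten times copper's.
# The resistivities are typical handbook values and the joint geometry is
# an arbitrary example; neither is taken from the text.
RHO_SN_PB = 1.5e-7   # ohm·m, ~Sn-Pb solder (assumed)
RHO_CU    = 1.7e-8   # ohm·m, copper (assumed)

def joint_power(current_a, rho, length_m, area_m2):
    """I^2*R dissipation in a conductor of the given geometry."""
    resistance = rho * length_m / area_m2
    return current_a**2 * resistance

# A 2 mm long, 4 mm^2 cross-section junction carrying 50 A:
L, A, I = 2e-3, 4e-6, 50.0
p_solder = joint_power(I, RHO_SN_PB, L, A)
p_copper = joint_power(I, RHO_CU, L, A)
print(f"solder joint: {p_solder*1e3:.0f} mW   equivalent copper path: {p_copper*1e3:.1f} mW")
```

Under these assumptions, the solder path dissipates nearly an order of magnitude more heat than the same geometry in copper, and the heating grows as the square of the current.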
12.2.2 Crimping, brazing, welding, and the use of fasteners

Alternative methods of making electrical contacts may be more appropriate than soldering in certain cases. Three in particular (crimping, brazing, and welding) generally result in more reliable connections than soldering, even under normal conditions. However, they are especially useful when high temperatures or high currents are present – conditions under which solder joints can degrade very rapidly. In the case of high currents in particular, connections must be of a very high level of integrity, and hence these methods are especially suitable.
12.2.2.1 Crimp connections

This technique is used when stranded wires must be joined to other stranded wires, lugs, or connector terminals. It is generally not used to join solid wires or stranded conductors that have been tinned, or to connect electronic components on a circuit board. The method essentially involves placing the wires in a special soft metal terminal (a "crimp terminal") and, with the aid of a special "crimp tool," mechanically deforming and compressing the terminal around the wires. If done properly, the resulting joint is, from an electrical point of view, a welded one, with a metallurgical bond forming between the crimp terminal and the wires.

The electrical qualities of crimp connections are very good, and their reliability is very high. If they are made with the correct tools and hardware, crimp joints are about ten times more reliable than soldered ones in general electrical and electronics applications [29]. Unlike solder joints, high-temperature aging processes are minimal under most conditions, and the strain relief provided to the wires (i.e. the prevention of large bending stresses on the wires at the junction) is superior to that provided by solder. Crimp joints are also relatively strong compared with soldered ones. Furthermore, in the case of RF connectors, tests have shown that those with crimped contacts have a smaller SWR (standing wave ratio – i.e. lower reflections) than those in which the center contacts have been soldered [30].

Because of the mechanical nature of the crimping operation, crimp connections are very easy to make, and (unlike soldering) require little skill. This means that crimping can be used to make good joints with greater consistency than is generally possible with soldering. Crimp connections are also relatively dependable in high-temperature and high-current applications.
In order to achieve several years of reliable service, ordinary crimp joints should not be exposed to temperatures of more than about 160 °C [27]. (Nickel crimp terminals for use with nickel wires at temperatures of up to 650 °C are available.) Where it can be done, crimping is the preferred method of making contacts in applications requiring the highest levels of reliability [29]. In the laboratory, crimping is particularly useful if large numbers of connections of the same type must be created (e.g. when making cable assemblies). The main disadvantages of crimping are (1) its relatively limited range of application (to joining stranded wires with other stranded wires, lugs, or connector
423
12.2 Permanent or semi-permanent electrical contacts
terminals), and (2) the need for a good crimp tool and associated parts, which can be expensive. Also, crimped joints (unlike soldered ones) are permanent. This can make the repair of items with such connections (e.g. the replacement of a damaged connector on a cable) more difficult. The main requirements for making reliable crimp connections are: clean conductors, crimps that are the correct size and kind for the wires being used, and the correct crimp tool. The most useful types of tool are “full cycle, ratchet-controlled” kinds that do not release the crimp until the crimping process has been completely carried out. Ordinary pliers are not adequate for this purpose, and their use is likely to result in faulty connections, broken connectors, or other problems. It is neither necessary nor desirable to complement the crimp joint by adding solder to it (as is sometimes advised). One reason for this is that the addition of solder is likely to negate the good strain relief properties of the crimp joint. Furthermore, unlike crimping, soldering involves the possibility of heating and damaging wire insulation. One also has to contend with the presence of flux, and other potential problems associated with the use of solder. A discussion of crimped connections is presented in Ref. [27].
12.2.2.2 Welding and brazing

In high-temperature or high-current applications, the formation of connections by welding and brazing can be very useful. These methods are also employed in situations where the presence of solder would be undesirable or unacceptable, such as:

(a) ultrahigh vacuum work, where low-vapor-pressure materials are needed,
(b) situations where low thermoelectric EMF connections are required (see Section 12.2.5), and
(c) low-temperature investigations, in cases where it is desirable to avoid normal–superconducting transitions that might otherwise disturb measurements.

The most commonly used method for creating welded electrical connections is probably “resistance welding,” which in the case of small and localized bonds is also referred to as “spot welding.” This involves compressing the conductors to be joined between two electrodes, and passing a large current between these for a short period. Resistance-welding devices that create the current by discharging a capacitor are particularly useful in laboratory work, and are often found in physics laboratories. Large conductors (in sheet-metal form) and small conductors (of any shape) can be spot welded. The method is frequently used to join electrodes in charged-particle optical systems. The quality of the joint produced by resistance welding depends on the materials. In the best cases (e.g. when nickel or platinum is being joined), the welds are of very good quality. It generally does not work well for copper, silver, or aluminum. See Ref. [27] for a discussion of the method, and Ref. [31] for information on weld quality for various combinations of materials. In many cases in which resistance-welded joints are weak or brittle (e.g.
when bonding tungsten to tungsten), improved results may be obtained using “resistance brazing.” This variation on the usual resistance welding procedure involves sandwiching a thin foil of a metal which welds very readily to other materials (Ni, Pt, and Au are often used) between the metals to be joined. The welding current is then passed through the assembly in the
424
Interconnecting, wiring, and cabling for electronics
normal way. Resistance-welded connections can often be improved by flowing argon over the joint region during welding, since this reduces oxidation of the materials. Another, fairly common, welding method, which can be used with copper, silver, aluminum, and many other materials, is tungsten inert gas (or TIG) welding. This involves heating the materials to be joined using an electric arc that is formed between a tungsten electrode and the materials, and preventing oxidation by surrounding these with a stream of argon gas. The method is especially suited for joining large, very-high-current conductors, such as copper bus bars. It is also appropriate for bonding the metal sheets comprising electromagnetically shielded enclosures. However, even small wires and other tiny items can be joined using variations on the usual TIG method, known as “micro-TIG” and “micro-plasma” welding. In particular, the micro-plasma method makes it possible to join wires with diameters of 100 µm or even less. Normal TIG welders are frequently used in research laboratory workshops. Reference [32] contains a discussion of the technique. Brazing, which is also known as hard soldering, is very similar to soft soldering in its use of a heat source (usually a torch), a filler metal (consisting of an alloy with a melting point generally between about 700 °C and 1000 °C), and a flux. The filler metals are generally much stronger than soft solders. Very large and small conductors can be brazed. Since the fluxes tend to be corrosive, connections that cannot be thoroughly cleaned afterwards should not be made. (This generally rules out stranded wires.) However, if copper is being brazed, copper–phosphorus or copper–phosphorus–silver “self-fluxing” filler materials (BCuP types) are available, which can be applied without the need for any flux, and which therefore require no subsequent cleaning.
Even relatively small copper wires can be brazed using this filler, with the aid of a miniature oxy-hydrogen torch (see the discussion in Ref. [32]). Joints made with BCuP materials may be weaker than those created using ordinary filler materials and fluxes. As with TIG welding, brazing is a familiar operation in laboratory workshops. A description of the method can be found in Ref. [32].
12.2.2.3 Use of mechanical fasteners in high-current connections

The use of bolted contacts in high-current applications is (in comparison with crimped, welded, or brazed ones) not ideal. This is because of the significant possibility of the loosening of the fasteners, perhaps as a result of vibration, or because of thermal expansion and contraction of the contact assembly. Pressed-conductor electrical contacts that must pass large currents tend to degrade if the contact is poor. Local heating takes place, and the resulting elevated temperature starts to cause oxidation and creep of the contacts. This leads to an increase in the contact resistance, which results in even more heating, and further degradation. It can be seen that this is an unstable positive-feedback process – a type of “thermal runaway.” Once a certain threshold of deterioration has been reached, complete destruction of the contacts is very rapid. (See also the discussion on pages 446–447.) If the use of fasteners is necessary, several conditions should be fulfilled. Copper must be used in all connection hardware (excluding fasteners and washers, but including lugs, terminals, connectors, etc.) [28]. As usual, cleanliness of the contacting surfaces is very important. On copper conductors, silver- or tin-plating over a plated nickel undercoating is often applied to the contact area to prevent problems due to oxidation of the copper. Silver is generally preferable to tin. Nickel is used to prevent inter-diffusion of copper with the
silver or tin top layers. (In the case of tin, operating temperatures should be kept below 100 °C, and the contact area should be provided with a contact lubricant to prevent the entry of atmospheric gases [33].) The contact assembly, including the fasteners, should be made of materials with similar thermal expansion coefficients, if possible. The use of copper or brass bolts should be avoided on connections that may carry more than 50 A [28]. Fasteners made of stainless steel or silicon bronze are preferred. These should be tightened with a torque wrench to ensure that sufficient force is applied to the contacts, and that this is done consistently. The use of at least two bolts on each lug provides some security against loosening of the connection. Disk-spring washers (Belleville washers), used in combination with thick flat washers in the fastener assembly, can be very useful in maintaining a constant force between the contacts, and in preventing loosening [34]. Fastener hardware (i.e. bolts, bolt-studs, nuts, and washers) should never be used as primary current-carrying conductors. The contact surface area of bolted connections should be at least 7 × 10⁻⁴ m² per 1000 A, and the contact pressure should be at least 14 MPa [28]. The reliable use of fasteners in high-current applications is confined to the joining of solid conductors (such as high-current bus bars); fasteners should not be used with stranded conductors unless these are fixed into a solid terminal (e.g. a “ring terminal”), preferably by some form of crimping process. Bolted high-current connections should be inspected regularly for overheating (using an infrared thermometer – see Section 12.5.4), contamination, and corrosion, and disassembled and cleaned if necessary.
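The numerical requirements just quoted can be turned into a quick check. The following sketch is illustrative only (the function names and the 2000 A example are not from Ref. [28]; only the area-per-current and pressure figures are taken from the text):

```python
# Quick check of the bolted-contact requirements quoted above:
# at least 7e-4 m^2 of contact surface area per 1000 A, and a
# contact pressure of at least 14 MPa [28].
MIN_AREA_PER_1000A = 7e-4   # m^2 of contact area per 1000 A
MIN_PRESSURE_PA = 14e6      # 14 MPa minimum contact pressure

def required_contact_area(current_a: float) -> float:
    """Minimum contact surface area (m^2) for a given current (A)."""
    return MIN_AREA_PER_1000A * current_a / 1000.0

def required_clamping_force(area_m2: float) -> float:
    """Minimum total normal force (N) to reach 14 MPa over the area."""
    return MIN_PRESSURE_PA * area_m2

# Example: a hypothetical 2000 A bus-bar joint needs 1.4e-3 m^2 of
# contact area and roughly 19.6 kN of total clamping force.
area = required_contact_area(2000.0)
force = required_clamping_force(area)
print(area, force)
```

Such figures make clear why a torque wrench and spring washers, rather than guesswork, are needed to maintain the contact force.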
12.2.3 Summary of methods for making contacts to difficult materials

For materials that are resistant to being soldered by ordinary means (e.g. aluminum, stainless steel, niobium, and tungsten), the following techniques can be helpful.
12.2.3.1 Ultrasonic soldering

In the case of metals that resist soldering because of the tenacious oxide layers that form in the presence of oxygen (probably the main problem in most – but not all – cases), ultrasonic soldering is a very useful technique. Some such metals, such as stainless steel and aluminum, can be soldered in the normal way with the aid of corrosive fluxes. In principle these fluxes can be removed from solder joints by a combination of washing and mechanical action. However, complete removal is difficult in practice (especially if stranded wires are involved), and it is generally recognized that such fluxes should not be used for the soldering of electrical and electronic items. Ultrasonic soldering does not involve the use of flux of any kind. With this technique, one employs a special soldering iron that contains an ultrasonic transducer and a heating element. The iron is connected to a separate electronics unit that supplies a sinusoidal signal at about 30–60 kHz, as well as heater power. In use, the hot iron and fluxless solder are applied to the joint in the usual way. When the ultrasound is switched on, cavitation takes place in the molten solder, involving the creation and violent collapse of small bubbles.
This activity causes the removal of oxide layers on the conductor surfaces, which permits the solder to wet them. Ultrasonic soldering is suitable for use on many metal surfaces that cannot ordinarily be wetted without using corrosive flux, including stainless steel and aluminum. In fact, the method makes it possible to join many materials that are normally virtually impossible to solder, even with highly corrosive fluxes. By using this method, good-quality, low-resistance connections can be created. Because the resulting joints are completely free of flux, they are capable of being more reliable than those made in the ordinary way. Solder joints to aluminum have been made with contact resistances of less than 1 nΩ (measured under cryogenic conditions) [35]. Ultrasonic soldering does not work very well with copper, because the material is relatively tough, and its oxide films are resistant to being broken up by the ultrasound [36]. The ultrasound does not contribute significant amounts of heat energy to the joint, and the heating of the solder and joint materials must be done in the normal way by the heating element in the iron. If the power of this element is insufficient, additional heat will have to be provided by a separate source, such as a hotplate. In practice with ultrasonic soldering, it is often helpful to remove the bulk of the oxide layers from the conductors beforehand, either by chemical etching or mechanical abrasion. This means that the cavitation activity needs to remove only the thin oxide that forms in the brief remaining time up to the point of soldering. This step is especially important in the case of the more exotic materials such as niobium. Other contaminants should be removed from the conductor surfaces as in the case of ordinary soldering. Very rough or porous surfaces are difficult to solder using this method [37].
During soldering, the iron should be applied directly to the joint material with a wiping action, and not suspended above it in the molten solder. Solders containing a large percentage of indium work particularly well with ultrasonic soldering, although other solders can also be employed. Alloys containing a high percentage of tin are also useful. Solders containing a large amount of lead are problematic, because lead absorbs the ultrasound. Ultrasonic soldering, like ultrasonic cleaning (see Section 3.12), can induce damaging vibrational resonances in delicate components, such as diodes, transistors, and integrated circuits, and therefore should not be used on these. Material samples that are the subjects of research are also at risk of damage from the ultrasound. This technique is really only suitable for relatively robust items. One disadvantage of the method is that even high-quality joints generally have a dull appearance (as opposed to those made using the tin–lead eutectic solder with flux). This means that ultrasonically soldered joints are more difficult to inspect than normal ones. An additional disadvantage is the relatively high cost of commercial ultrasonic soldering irons. Discussions of the technique can be found in Refs. [16], [37], and [38].
12.2.3.2 Friction-soldering methods

In the absence of ultrasonic soldering equipment, the friction-soldering method may be helpful. This involves scraping away the oxide layer underneath the pool of molten solder
using the tip of a soldering iron, without the use of flux. When both surfaces have been “friction-tinned” with solder in this manner, they can be brought together and heated, so that the solder layers merge. For example, aluminum can be soldered using this technique. For hard materials, the use of a soldering iron with a purpose-made tungsten carbide tip has proven effective in removing the oxide layers (see Ref. [39]). Friction soldering is generally not as effective as ultrasonic soldering.
12.2.3.3 Solders for joining difficult materials

In general, indium-based solders are particularly useful when difficult materials are joined. A good example is the 52/48 In/Sn eutectic. An unusual property of indium solders is their ability to wet oxide-type materials such as glass. This behavior can be misleading when the intention is to make electrical contacts to metals, since the appearance of wetting can indicate either bonding to the bulk metal, or to a surface oxide layer. In the latter case, the contact resistance may be so high as to make the solder joint useless. In order to ensure that a metallic bond has been formed, one can either measure the electrical resistance of the joint and compare it with calculated values, or test the mechanical strength of the joint at the interface. The latter will generally be relatively low unless a true metal-to-metal bond has been created. Indium solders are much more susceptible to corrosion than the more standard tin–lead types. Indium solders can corrode in, for example, conditions of high humidity, pollution, or high salt content in the air (perhaps owing to proximity to the sea). The use of active fluxes with indium solders (even those which would not be a problem for ordinary Sn–Pb solders) can cause difficulties. Halide ions, such as Cl⁻, Br⁻, F⁻, and I⁻, which are present in some flux compositions, can be particularly problematic. Furthermore, indium solders have relatively poor wetting and flow properties compared with tin–lead solders when flux is used. Because the ultrasonic and friction soldering methods involve no flux, they are good ways of applying indium solders. Indium solders are mechanically weak compared with many other types. Pure indium is the worst in this regard, and generally does not make a very satisfactory solder. Furthermore, as discussed on page 420, indium solders form brittle intermetallic compounds when used on copper.
Much useful information on soldering with indium and its alloys can be found in Ref. [40]. When aluminum is soldered (by whatever method), galvanic corrosion is a possible problem if moisture is present, owing to the galvanic potential difference between aluminum and many solders. A special 91/9 Sn/Zn solder can be used to reduce such problems, as long as other metals (e.g. copper) are not also involved in the joint. Solder corrosion problems caused by environmental conditions can often be prevented by applying an insulating conformal coating, such as a lacquer, onto the joint after soldering. The mechanical weakness of indium alloys can be countered by providing supplementary support for the joint. Some highly inert metals, such as tungsten, do not accept solder, irrespective of the presence or absence of any surface oxide layers.
12.2.3.4 Use of inert gases in soldering processes

Whichever method is used (ultrasonic soldering, friction soldering, or soldering with a normal soldering iron), enveloping the items to be soldered in an inert gas may be beneficial in improving the quality of the joint. This approach can also improve matters when a particular solder has poor wetting properties. For example, “inert-gas soldering irons” are available commercially for soldering with lead-free solders. These are ordinary soldering irons in most respects, except for being configured to allow an inert gas, such as nitrogen, to pass along the iron (inside a coaxial tube) and out over the tip.
12.2.3.5 Electroplating and sputter deposition as a way of enhancing solderability

It is possible to make difficult materials easy to solder by electroplating them with copper or nickel, or some other easily soldered metal. This operation is normally carried out by specialist companies. Another approach is to vacuum deposit (especially by sputtering) an easily soldered material onto the difficult one. In some cases, sputtering is likely to produce better results than electroplating. For example, titanium can be vacuum-coated with palladium, which can be soldered with little difficulty [5]. A possible drawback with this technique is that sputter-deposited layers are generally very thin. However, sputtering can be used to prime the surface of materials that are not easily electroplated, so that thicker layers can then be deposited by electroplating.
12.2.3.6 Resistance welding

Resistance welding (see page 423) nicely complements soldering, since it often results in very good quality joints in those situations in which ordinary soldering methods are unsatisfactory (and vice versa). Some metals that are readily resistance-welded (e.g. platinum and nickel) are also easy to solder. These can be used as intermediaries, so as to allow solder connections to be made to normally unsolderable materials.
12.2.3.7 Crimping

In those situations in which crimping (see pages 422–423) may be used, it can be an effective technique for joining difficult materials. For example, it has been found to be an acceptable method for connecting stainless steel wires in space applications [41]. Crimping has also been used to form superconducting joints between niobium wires in low-temperature experimental work [42].
12.2.3.8 Silver epoxy

Adhesives that have been filled with silver powder, in order to make them conductive, provide a versatile and reliable means of making electrical contacts. These substances (known as “conductive adhesives”) have the advantage, over soldering and welding, of bonding well to most materials, irrespective of their metallurgical properties. The strongest types of conductive adhesive are epoxies (“silver epoxies”). These are the familiar
two-component bonding agents that have a separate resin and hardener, which must be mixed together in order to facilitate curing. The chief disadvantage of conductive adhesives is that the joints made with them generally have relatively high contact resistances compared with those created using other techniques. This is because (unlike, e.g. soldering, which involves the use of flux) there is no mechanism for removing surface oxide layers during the formation of the epoxy contact. In order to minimize this problem, extra attention should be given to the removal of oxide layers, using mechanical or other methods, immediately prior to joining. A further (although usually less troublesome) disadvantage of conductive adhesives is that their bulk resistivities tend to be very large compared with those of solders. There is some evidence that conductive adhesives degrade over time [43]. A variety of silver epoxies are available from commercial sources.
12.2.4 Ground contacts

The quality of hardware-ground contacts (to safety-ground conductors, equipment enclosures, instrument racks, and electromagnetic shields) is very important, but often not easy to achieve. Problems arising from poor-quality or accidental grounds (e.g. intermittent ground loops) can be extremely difficult to troubleshoot. However, ground contacts are often made in ways that would never be considered acceptable for signal lines (such as the contacts to the center conductor of a coaxial cable). For example, connections to electromagnetic shields are often made with mechanical fasteners (screws, etc.). If these should loosen, small changes in the strain on the conductors will change the ground impedances, probably in an intermittent way. Unfortunately, in practice it is often difficult to avoid using joining methods of this type, and poor ground connections are a common problem. Furthermore, the metals that are often included in hardware-ground contacts (e.g. aluminum, steel, brass, etc.) are generally not ideal contact materials. Reliable press contacts are not easy to create, and (especially in the case of aluminum) soldering is difficult. Galvanic corrosion, and galvanic voltages, may be an issue if dissimilar metals are involved and moisture is present. For these reasons, and others, one should not use items such as safety-ground conductors, electromagnetic shields, etc., to complete an electrical circuit. (The only exception to this is if the shield is part of a coaxial cable – see pages 444–445 [44].) In general, when making contacts to hardware grounds, soldering, brazing, or welding is usually better than making press contacts using mechanical fasteners [45]. If a mechanical fastener must be used to connect a wire to a ground surface, this should not be done by merely compressing the bare wire underneath the fastener. Instead, the wire should be crimped to a ring terminal, which can then be secured under the fastener.
Star washers (or “toothed lockwashers”) should be employed to prevent loosening of fasteners [46]. Paint, anodizing, or other insulating protective finishes should be removed from surfaces where contacts are to be made. One should not rely on the teeth of a star washer, or other fastener hardware, to penetrate this and make the contact. (In fact, star washers, screws, or other fastener hardware should not be employed as primary conductors.) The presence of a “conductive” chromate finish (having a yellow or olive-green appearance, and often seen
[Fig. 12.2: Measurement of the thermoelectric voltage EAB generated between the junctions of two materials (A and B) in series at temperatures T1 and T2, using a nanovoltmeter (see Ref. [7]).]

on military-type connectors) will probably result in an unreliable contact. This should also be removed [47]. Whenever the choice is available, plated metal finishes are to be preferred for providing corrosion protection on hardware that must be grounded. One should be careful to avoid accidental or uncertain grounds in sensitive systems, such as (a) those formed by electronics enclosures that happen to be in contact with each other, (b) connectors or cable shields touching grounded metal surfaces, or (c) grounds made through sliding metal drawers or hinges [45]. The properties of such grounds can vary with time, temperature, or other conditions. Intermittent ground loops or electrical hazards (i.e. loss of a safety ground) are potential problems in these situations. A ground connection should either be made using a deliberate and reliable method, or it should not exist at all. It is important to visually inspect hardware ground contacts periodically for corrosion, loosened fasteners, disconnected conductors, etc. [43]. Contact resistances (measured by using a four-wire, or “Kelvin”, setup) should be less than 1 mΩ. Hardware-ground contact issues are covered in detail in Ref. [43], and are also discussed in Ref. [45].
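The four-wire (Kelvin) check mentioned above can be sketched as follows. This is an illustration only: the function names are invented, and the 1 mΩ limit is the criterion given in the text (a separate pair of leads forces the current while another pair senses the voltage, so lead and probe resistances drop out):

```python
# Sketch of a four-wire (Kelvin) contact-resistance check: a known
# test current is forced through the joint on one pair of leads,
# while the voltage across it is sensed on a separate pair, so the
# lead resistance does not enter the result.
def contact_resistance(v_sense: float, i_force: float) -> float:
    """Contact resistance (ohms) from sensed voltage (V) and forced current (A)."""
    return v_sense / i_force

def ground_contact_ok(v_sense: float, i_force: float,
                      limit_ohms: float = 1e-3) -> bool:
    """Apply the < 1 milliohm criterion given in the text."""
    return contact_resistance(v_sense, i_force) < limit_ohms

# Example: 0.4 mV sensed across the joint at a forced 1 A
# corresponds to 0.4 milliohm, which passes the criterion.
print(ground_contact_ok(0.4e-3, 1.0))  # True
```

Note that a two-wire ohmmeter measurement cannot resolve milliohm-level contact resistances, since its lead resistance is typically far larger.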
12.2.5 Minimization of thermoelectric EMFs in low-level d.c. circuits

The most common cause of disturbances during d.c. voltage measurements in the microvolt range is temperature differences across junctions between dissimilar materials. The size of the voltage that is developed between two points in a circuit involving materials A and B (see Fig. 12.2) amounts to: EAB = QAB (T1 − T2), where QAB is the Seebeck coefficient of A with respect to B [7]. The Seebeck coefficients for various material combinations are as follows.

(a) Cu–Cu: ≤0.2 µV/°C
(b) Cu–Au: 0.3 µV/°C
(c) Cu–Sn/Pb: 1–3 µV/°C
(d) Cu–Kovar: ≈40–75 µV/°C
(e) Cu–CuO: ≈1000 µV/°C
In order to minimize thermoelectric EMFs in low-level circuits, it is desirable to use the same material in regions where temperature gradients are present. If possible, junctions of any kind (even involving the same materials) should be avoided, or at least their number minimized.
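The formula and the tabulated coefficients above can be combined into a simple worked example. In the sketch below, the dictionary keys and structure are illustrative; where the text gives a range, the upper end is used:

```python
# Worked example of E_AB = Q_AB * (T1 - T2), using the approximate
# Seebeck coefficients (in uV per degree C) listed in the text.
# Upper ends of the quoted ranges are used for a worst-case estimate.
SEEBECK_UV_PER_C = {
    "Cu-Cu": 0.2,       # upper bound
    "Cu-Au": 0.3,
    "Cu-Sn/Pb": 3.0,    # upper end of the 1-3 range
    "Cu-Kovar": 75.0,   # upper end of the 40-75 range
    "Cu-CuO": 1000.0,
}

def thermo_emf_uv(pair: str, t1_c: float, t2_c: float) -> float:
    """Thermoelectric EMF in microvolts for a given junction pair."""
    return SEEBECK_UV_PER_C[pair] * (t1_c - t2_c)

# A mere 1 degree C difference across a Cu-Kovar junction can swamp
# a microvolt-level measurement, whereas a Cu-Cu joint barely matters:
print(thermo_emf_uv("Cu-Kovar", 21.0, 20.0))  # 75.0 uV
print(thermo_emf_uv("Cu-Cu", 21.0, 20.0))     # at most 0.2 uV
```

The example makes the practical point of the table concrete: a single Kovar lead (common in instrument feedthroughs) can contribute hundreds of times more spurious EMF than an all-copper path.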
If junctions must be present, it is generally best not to solder. In particular, the Seebeck coefficients of normal tin–lead solders with respect to copper are all relatively high. In those cases where the copper conductors must be soldered, the 97–3 In–Ag solder alloy is preferable [5]. This material has a Seebeck coefficient with respect to copper of 0.7 µV/°C. The cadmium-based solder that is sometimes advocated for such purposes should be avoided, partly owing to the toxicity of cadmium, and also because the solder has poor wetting properties. The best ways of joining copper conductors are by welding, crimping (using copper sleeves, with no plating), or by forming press contacts between them using mechanical fasteners. Because of the high Seebeck coefficient of the Cu–CuO combination, junctions between copper conductors must be clean and free of oxide. The usual method is to abrade the conductors just prior to making the contact. The surfaces of copper conductors involved in press contacts should be abraded regularly, so that they are always bright. Connectors in general should be avoided in low-level d.c. measurements. Although types are available with low thermoelectric EMF contacts (usually gold-plated tellurium copper), these are mainly used for front-panel connections to sensitive instruments. The connecting wires themselves should be as homogeneous as possible. Types that are intended to be used with thermocouples are particularly suitable [5]. Methods for minimizing thermoelectric EMFs are discussed extensively in Refs. [5] and [7].
12.3 Connectors

12.3.1 Introduction

In electronic systems generally, connectors are frequently a major cause, and often the leading cause, of faults [2,48]. The most frequent difficulties take the form of permanent or intermittent high contact resistances or open circuits. (The relevant contacts,³ in this case, are those that take place between the mating surfaces in two mating connectors.) According to a study of connector reliability, the large majority of connector problems are intermittent [49]. For these and other reasons, the number of these devices used in an experimental setup should be minimized (consistent with requirements for cabling modularity).
12.3.2 Failure modes

High contact resistances are a potential connector defect. Even extremely thin non-metallic films resulting from corrosion (less than about 100 Å thick) can be insulating [50]. In the presence of a current, “1/f noise” or “popcorn noise” is another possible difficulty [45,51]. High-current connectors are susceptible to overheating of the contacts, and subsequent runaway

[Footnote 3: In the following section on connectors, the word “contacts” also refers to the physical structures that make contact – such as the pins in a male connector.]
[Fig. 12.3: Two-pin connector with its outer housing (or shell) removed. Fatigue failure of the cable conductors often takes place at the positions indicated by the arrows, if the cable-clamp arrangement is inadequate.]

degradation and failure (see Section 12.3.5). In the case of high-voltage connectors, arcing or corona discharge across the surface of the insulator can be a frequent issue. If a connector is intended to be part of a shielded-cable assembly,⁴ problems with the connector shell or deficiencies in its attachment to the cable shield can lead to electromagnetic interference (EMI). In the case of connectors used in radio frequency applications, contact damage or improper termination of the cable to the connector can cause signal reflections due to impedance mismatches. (This is a problem that becomes worse at higher frequencies, and is a critical issue in the microwave range.) The electrical properties of defective connectors can sometimes be very strange, and similar to those of poor solder joints. Nonlinear effects, such as significant levels of intermodulation noise or frequency mixing, can be caused by loose connectors, and especially those with tarnished or corroded contact surfaces [52]. Electromagnetic interference resulting from audio rectification effects (see page 372) is another possible result of nonlinear (or non-ohmic) contacts. Connector faults are often attributable to the following conditions:

(a) problems caused by improper wiring of the connector (see below),
(b) damage to the contacts (e.g. bent pins), or the buildup of insulating material on them by oxidation, contamination or corrosion [48],
(c) fatigue failure of the connections (e.g. solder joints) between the cable wires and the connector pins within the connector – see Fig. 12.3 (this is often caused by the lack of, or the improper use of, a cable clamp on the connector – see pages 437–438),
(d) loosening or corrosion of the sections that make up shielded connector shells (often screwed together), which can lead to EMI or other problems relating to an increased

[Footnote 4: A cable assembly is a collection of one or more cables that are joined together, with attached connectors.]
12.3 Connectors
433
resistance of the ground [53] (this results in a higher transfer impedance for the connector – see pages 457–458), (e) loosening, damage, or corrosion of a cable shield where it enters the connector [53] (these can also lead to EMI difficulties, owing to the increased transfer impedance), and (f) buildup of contamination or moisture on insulating surfaces, which can lead to leakage currents in very-high-impedance applications, or corona or arcing in high-voltage ones.
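The frequency mixing caused by a nonlinear (non-ohmic) contact, mentioned above, is easy to demonstrate numerically. In the sketch below (Python), two clean tones are passed through a weakly nonlinear transfer characteristic; the tone frequencies, amplitudes, and the quadratic coefficient `eps` are illustrative assumptions rather than measured connector behavior. New energy appears at the sum and difference frequencies, exactly as in an intermodulating connector:

```python
import math

def amplitude_at(y, f, fs):
    """Single-bin DFT: amplitude of the frequency component f (Hz) in samples y."""
    n = len(y)
    re = sum(y[k] * math.cos(2 * math.pi * f * k / fs) for k in range(n))
    im = sum(y[k] * math.sin(2 * math.pi * f * k / fs) for k in range(n))
    return 2 * math.hypot(re, im) / n

fs, n = 1000, 1000                       # 1 kHz sampling, 1 s of data
f1, f2 = 60.0, 100.0                     # two input tones (Hz), bin-aligned
x = [math.sin(2 * math.pi * f1 * k / fs) + math.sin(2 * math.pi * f2 * k / fs)
     for k in range(n)]

# A slightly non-ohmic "contact": output = input plus a small quadratic term.
eps = 0.1                                # assumed nonlinearity (illustrative)
y = [v + eps * v * v for v in x]

for f in (f2 - f1, f1, f2, f1 + f2):     # 40, 60, 100, 160 Hz
    print(f"{f:5.0f} Hz : amplitude {amplitude_at(y, f, fs):.3f}")
```

With `eps = 0` the 40 Hz and 160 Hz products vanish: a tight, clean contact is linear and produces no mixing.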
12.3.3 Causes of connector failure

12.3.3.1 Human error

The primary causes of connector failures are usually negligence and abuse (particularly mishandling [23]), such as:
(a) improper wiring and attachment of the connector to a cable or an instrument enclosure (very common),
(b) joining connectors without looking at the mating parts first in order to see how they are intended to fit together, and whether in fact they are capable of fitting together,
(c) attempting to mate connectors without adequately aligning them, or de-mating them by pulling them apart at an angle,
(d) mating or de-mating threaded coaxial connectors by rotating the body of one of the connectors (rather than the nut that is used to secure them) relative to the body of the other, and
(e) exposing connector contacts or insulator surfaces to dust, moisture, solder flux, or corrosive agents, and/or failing to keep connectors clean.

Item (a) includes the following errors:
(A) inadequate provision of strain relief for the wires – e.g. by the incorrect selection of a cable clamp to fit a particular cable, or by poorly securing a cable clamp,
(B) improper attachment of the wires to the contacts (cold solder joints, non-wetting of the conductors by the solder, use of an improper crimp tool or incorrect wire size for a crimp terminal, catching wire insulation in a crimp terminal, etc.),
(C) failure to attach all the strands in a multi-stranded wire (or braided shields in coaxial connectors) to a contact, resulting in the presence of wayward strands in the connector housing, and the possibility of short circuits,
(D) inadequate termination of a cable shield on the connector shell, resulting in electromagnetic interference problems (see the discussion of pigtails on pages 459–460), and
(E) melting back of the wire insulation while soldering wires to the contacts, resulting in the possibility of short circuits, or impedance mismatches in the case of high-frequency (> 10⁸ Hz) connectors due to melting or deformation of the dielectric.

These are just a few of the possible mistakes. There are usually many ways in which a connector can be incorrectly joined to a cable. It is probable that a large number of problems are caused by not following the connector manufacturer's recommended joining procedures. If a need arises for a cable assembly (e.g. a coaxial patch cord⁵), it should be kept in mind that such items can often be obtained ready-made from commercial sources. These should be used whenever possible.

Regarding point (b), it is possible to damage or destroy connectors by attempting to mate two similar-looking, but slightly different, connector types. For example, if BNC and two-slot triaxial⁶ connectors (which look very similar) are joined, permanent damage will occur to both the plug and receptacle [7]. (Three-slot triaxial connectors have recently been developed in order to prevent this.) Another example is the mating of a male 50 Ω type-N coaxial connector to a female 75 Ω one. The two will join together, but since the male center pin is too large for the female spring contact, this action will ruin the female connector [54]. (If, on the other hand, a male 75 Ω type-N connector is mated to a female 50 Ω one, an intermittent contact will result, since the male center pin is too small for the female contact in this case [55].) These problems, and others, are often a consequence of blind mating, in which someone joins connectors (perhaps located at the back of an instrument) without being able to see them.

Concerning point (c), some connectors are particularly susceptible to damage if they are not aligned properly before being mated. The bending of pins and female contacts is a possible consequence of inadequate alignment. Subminiature coaxial connectors, such as SMA types, are often prone to this sort of problem. Multi-pin D-type (i.e. "D-subminiature" or "D-sub") and CAMAC connectors are also relatively vulnerable. In the case of coaxial connectors that have threaded coupling nuts (to hold them together following mating), cross-threading is also a potential outcome of non-alignment.
The relatively common practice of de-mating connectors by pulling them apart at an angle (rather than straight away from each other) can be very damaging if the connectors are not designed to withstand this type of abuse. (See the comments on “scoop-proof” connectors on page 440.) The bending of pins during this operation is a notable problem with subminiature coaxial and D-type connectors. Pin damage during mating or de-mating can be especially troublesome for large D-type connectors that require high forces for insertion and withdrawal [41]. Regarding point (d), the twisting of the body of a threaded coaxial connector during mating or de-mating can be very damaging to the center conductors and other interior parts of the two connectors. This problem arises when connector adapters, or the small modules that are often used in RF and microwave work (e.g. mixers and filters), are attached to cables. These adapters or modules are often twisted onto cable-mounted connectors for convenience, but with harmful results. Only the connector coupling nut should rotate during mating or de-mating operations.
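The electrical penalty of the impedance mismatches mentioned under point (b) can be estimated from the standard transmission-line reflection coefficient, Γ = (Z_L − Z_0)/(Z_L + Z_0). A minimal sketch (the 50 Ω and 75 Ω values are simply the two common coaxial standards):

```python
import math

def reflection(z0, zl):
    """Voltage reflection coefficient at a junction between impedances z0 and zl."""
    return (zl - z0) / (zl + z0)

def return_loss_db(gamma):
    """Return loss in dB; larger numbers indicate a better match."""
    return -20 * math.log10(abs(gamma))

gamma = reflection(50.0, 75.0)   # a 75 ohm connector in a 50 ohm system
print(f"gamma = {gamma:.2f}, return loss = {return_loss_db(gamma):.1f} dB")
```

A Γ of 0.2 (a return loss of about 14 dB) means that 20% of the incident voltage wave, i.e. 4% of the power, is reflected back toward the source – enough to matter in precision RF measurements.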
⁵ A patch cord is a cable assembly comprising a single length of cable (often about 1 m long) with a male connector at each end.
⁶ Triaxial connectors are similar to coaxial ones (such as BNC types), except for the presence of an extra coaxial shield (a "guard" shield) that is used for special purposes (see page 390).

12.3.3.2 Damage and degradation during normal operation and use

Connector failure will eventually be caused by wear-and-tear, even if connectors are used correctly. For example, the plating on the connector contacts will eventually wear off with repeated matings and de-matings. Once the plating has worn off, the underlying base metal will be exposed, and high contact resistances or open circuits (possibly intermittent), or other problems, are a likely consequence. Multiple mating/de-mating cycles will also reduce contact spring forces, with similar results. These effects can occur after only a relatively short period of use – many common connectors are rated for only a few hundred mating/de-mating cycles. For example, BNC and SMA coaxial connectors are often rated for 500 cycles. (SMA connectors are particularly short-lived, unless they are treated very carefully. They are not really designed for repeated mating and de-mating [54].) Some connectors are rated for only a few mating cycles, while others are designed to withstand thousands. (Ways of overcoming wear problems are discussed on pages 448–449.)

Owing to a process called "stress relaxation," the force that can be applied by spring contacts will gradually diminish over time. This phenomenon becomes more important at elevated temperatures, and can be especially significant in hot environments that may be encountered during normal operation (e.g. inside an equipment rack). For instance, brass contacts, while satisfactory at room temperature, generally cannot be used above 75 °C because of this effect [56]. NB: In general it is the force between a pair of contacts (normal to their surfaces), and not the surface area of contact or the pressure (except insofar as this depends on the force), which determines the contact resistance [57]. Connectors in which the forces between the contacts have diminished are also susceptible to problems due to corrosion, and are relatively unstable in the presence of vibration [52].

Certain contact plating materials (mainly tin, but also nickel) are degraded by a kind of wear process, called fretting, which is caused by oxidation in the presence of vibrations. Even relatively small vibration levels, such as those produced by instrument cooling fans, may be sufficient to cause problems [58].
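The NB above, that contact resistance is governed by the contact force rather than by the apparent contact area, can be made quantitative with Holm's classic constriction-resistance model. The sketch below is an order-of-magnitude illustration only: the resistivity is that of gold, while the 1 GPa hardness and the force values are assumed round numbers, not figures from this chapter:

```python
import math

def constriction_resistance(force_n, resistivity, hardness_pa):
    """Holm's model: a single contact spot deforms plastically, so its area
    is F/H and its radius is a = sqrt(F/(pi*H)); the constriction resistance
    is then rho/(2a).  The result depends on the contact force, not on the
    apparent contact area."""
    a = math.sqrt(force_n / (math.pi * hardness_pa))
    return resistivity / (2 * a)

# Illustrative values for a gold-plated contact (order of magnitude only):
rho_gold = 2.2e-8      # resistivity, ohm*m
hardness = 1.0e9       # indentation hardness, Pa (assumed ~1 GPa)

for force in (1.0, 0.25):   # contact normal force, newtons
    r = constriction_resistance(force, rho_gold, hardness)
    print(f"F = {force:4.2f} N -> R ~ {r * 1e3:.2f} milliohm")
```

Since R scales as F^(−1/2) in this model, a spring whose force has relaxed to a quarter of its original value roughly doubles the contact resistance, before any corrosion effects are considered.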
12.3.3.3 Corrosion

Many common contact materials, with the exception of gold, will eventually corrode if exposed to atmospheric pollutants. This can be especially problematic in polluted urban or industrial areas, or places with salt in the air – such as those near the sea. Concentrations of corrosive gases need only be at the level of one part per billion or even less to cause problems [50]. Contact corrosion usually takes place even under conditions that appear to the human senses to be benign (see Fig. 12.4) [50]. High humidity, and especially humidity cycling, can increase problems due to the presence of pollution. However, high humidity is not necessary to cause problems. Certain chlorine-containing cleaning chemicals can also be potential sources of harm to contact surfaces.

Fig. 12.4  Corrosion (tarnishing) of silver-plated BNC connectors on a laboratory instrument that has been exposed for years to a benign laboratory atmosphere. The connector on the lower left has been protected with the cap shown beneath it.

Connectors without gold-plated contacts, which are mated and then left undisturbed for long periods (a year or so), often display degraded electrical properties as contact corrosion takes place [51]. High contact resistances and other problems are a frequent result. Connectors that are frequently mated and de-mated are not as vulnerable.

If dissimilar metals are in contact, galvanic corrosion (see Section 3.9) may also cause difficulties. This can be especially troublesome if humidity levels are high and contaminants (such as dust or fingerprints) are present on the surfaces [33]. Contact material pairs that are well separated in the galvanic series (such as gold and nickel [59]) are especially prone to galvanic corrosion. Another, fairly common and particularly troublesome, combination is an aluminum connector shell mated to a silver-plated one. Generally, the contacts of connectors that are intended to be used in mating pairs are not made from different metals. Although high humidity levels make galvanic corrosion more likely, their presence is not necessary for this. Connector corrosion in general is discussed further in the section on contact material selection (pages 442–443).

If a voltage is applied between isolated connector contacts in a damp environment, it is possible for another type of corrosion to take place [33]. This is variously referred to as "electrolytic corrosion," "metallic electromigration," or "electrolysis." Electrolytic corrosion involves the transport of metal across the insulator surface through a film of water. It can occur even if the connector contacts are made of the same material. Contaminants on the insulator increase the conductivity of the water and accelerate the degradation. Silver-plated contacts are the worst affected, as electrolytic corrosion of silver can take place even under conditions of high humidity (with only a few monolayers of water on the insulator). This may also occur to a limited extent with copper and tin. If bulk water (perhaps resulting from condensation) is present on the insulator, many other metals (e.g. gold, nickel, and solder) can also undergo electrolytic corrosion. Connectors with high electric fields between the contacts are affected most by this phenomenon. Steady (d.c.) fields are the most troublesome, but even low-frequency a.c. fields (60 Hz) can cause problems. Electrolytic corrosion can eventually cause low resistances or short circuits between contacts. One way of minimizing it is to use non-hygroscopic insulator materials, such as PTFE (Teflon) [60]. The use of connector lubricants to prevent corrosion is discussed on page 449.
12.3.3.4 Particulates and debris

The presence of dust and other particulate matter can degrade electrical contacts in several ways. The most immediate effect of their presence is to prevent the contacts from touching – producing permanent or intermittent open circuits [61]. Such particles also tend to act as abrasives that can damage the contacts (by removing plating metals) during successive mating/de-mating cycles. Particulates on contact surfaces can also lead to corrosion.
12.3.4 Selection of connectors

12.3.4.1 General points

In research it generally pays to use connectors of the highest quality. Many common connectors (such as BNC coaxial types) are available in low-cost versions. These should generally be avoided. Connectors that are intended for applications in consumer electronics are often very poorly designed and manufactured. The phono connector, which is used in consumer audio equipment, and the type-F coaxial connector, which is used in televisions, are good examples of devices in this category [2]. The miniature phone plug and jack system, which is also found in consumer audio equipment, is yet another.

A well-designed connector should have the following features.

(a) A good-quality cable clamp must be provided, in order to allow stresses on the cable to be transmitted to the connector shell, rather than to the (relatively fragile) joints between the cable conductors and the connector contacts (see Fig. 12.5). This is essential. It is necessary to ensure that the cable clamp supplied with a given connector will properly fit the cable that is to be used, so that there is not too much initial clearance between them. In some cases (e.g. with the collet cable clamps used in LEMO® and Fischer™ connectors⁷) the initial fit between the clamp and the cable must be fairly good. In the case of coaxial connectors, those in which the outer cable clamp is crimped onto the cable are more robust mechanically and electrically than ones involving a ferrule clamp and backnut (see Fig. 12.6) [13,62]. If a connector does not have an adequate cable clamp, and an alternative cannot be found, the cable should be secured to a fixed surface (such as an equipment rack or enclosure), so that stresses on the cable are not borne by the cable conductor – connector contact joints.

⁷ Incidentally, LEMO® and Fischer™ connectors are generally very well designed and manufactured, provide good shielding against RFI, and last a long time. They are widely used in research – particularly in the fields of nuclear and particle physics. Single- and multi-pin types are available.
Fig. 12.5  Example of a cable clamp.

Fig. 12.6  Coaxial connectors with crimp-style cable clamps (a) are more robust mechanically and electrically than those that use a ferrule clamp and backnut (b).

Fig. 12.7  Shielded D-type (or "D-sub") multi-pin connector, with the desired 360° cable-clamping arrangement.

Fig. 12.8  Bend relief on a BNC connector.

Cable clamps intended for use with shielded cables should provide a circularly symmetric (360°) grip on the cable shield – see Fig. 12.7 [47]. Flat cable clamps, which are often found on multi-pin connectors, should be avoided in these situations (see pages 459–460).

(b) A bend relief should be present, so as to prevent the cable from being sharply bent at its junction with the connector. This reduces strain on the cable conductors near the connector, which are inclined to break as a result of fatigue failure. Bend reliefs (sometimes called strain reliefs) are usually tapered semi-flexible objects (see Fig. 12.8). They are available (sometimes as accessory items) for most connectors.

(c) A locking mechanism should be present that allows the connector to be de-mated only through a deliberate action (such as the screw-snap motion applied to a BNC connector), in order to prevent this from happening as a result of accidental tension on the cable. This also reduces the temptation to try and de-mate connectors by tugging on their cables, which is a common cause of cable damage. Many consumer-grade connectors do not have a locking mechanism.
The failure to fasten retaining screws (jackscrews) on D-type multi-pin connectors is a frequent source of problems [49]. It is possible to obtain D-type connectors with screw-less automatic locking mechanisms [63], but these are not commonplace.

(d) Connectors that have not been inserted sufficiently into their mating counterparts are a common source of problems. Hence, connectors that give an audible click and/or definite tactile feedback to indicate when they are fully mated (as in the case of LEMO®, Fischer™, or bayonet types) are desirable. (However, in the case of coaxial connectors, this advantage may come at the expense of performance problems – see below.)

(e) Connector contacts should be housed in such a way that they are protected from the mechanical damage that may result if, for example, the connector is dropped, or if two mating connectors are pulled apart at an angle. The miniature hexagonal connector, with its protruding and exposed pins, is an example of one that is designed poorly in this respect. A better configuration involves having the pins recessed within a hood. However, connectors that use a "scoop-proof" design are preferable to other types. These are arranged so that the connector shells must be perfectly aligned before mating can take place, thereby preventing damage to the male or female contacts [41]. Such connectors are especially useful if "hot mating" or "hot plugging" of the connector (i.e. mating with energized circuits) is anticipated, where one wants to avoid short-circuiting contacts.

(f) Connectors with metal housings are generally more durable than plastic types. Warped or broken plastic housings are relatively common [49]. Also, although plastic housings can be metal plated in order to provide electromagnetic shielding, solid metal housings are somewhat better in this regard, particularly at low frequencies. For instance, IEEE-488 (GPIB) interface bus connectors with metal housings tend to be considerably more robust than those with plastic ones, and metal housings also provide effective shielding against electromagnetic interference from the bus. The presence of a metal housing also indicates that a given connector is probably made to higher standards in other ways.

(g) In situations in which the inadvertent placement of connector plugs in the wrong sockets is undesirable, the use of connectors with different shell sizes, or some form of keying system, is necessary. The former is the preferred approach [29]. Some connectors, such as certain types made by LEMO® and Fischer™, can be obtained with keys.
In addition, the following points should be considered.

(a) Circular multi-pin connectors are more robust than D-type ones, and are to be preferred in most circumstances [41]. Circular connectors are also usually provided with easy-to-use locking mechanisms. However, circular connectors are much more susceptible to crosstalk between the contacts than D connectors, particularly if the contact density is high [62]. This is because of the more clustered arrangement of the contacts in circular connectors, in comparison with D-types. Methods of grouping and grounding contacts to minimize such problems are discussed on page 449.

(b) It is usually preferable if the joints between the cable wires and the connector contacts are crimped, rather than soldered. (See the discussion on pages 422–423.) Generally,
connectors that can be soldered cannot be crimped, and so it is necessary to obtain connectors that are designed with crimping in mind. Connectors that are attached to cables using a twist-on action (as in the case of some coaxial types) are highly unreliable, and should be completely avoided. Similarly, signal-carrying connectors in which the wires are secured to the contacts with the aid of screws (as in the case of certain phone plugs) should also be shunned.

(c) In the case of shielded connectors, those that employ a screw-on threaded coupling nut for mating (e.g. N, SMA, and TNC connectors) provide more stable RF shielding and electrical contact in the presence of vibration or varying sideways forces on the cable than those that use a bayonet mechanism (e.g. BNC connectors) [47,64]. Also, the ability of threaded connectors to shield electromagnetic fields is higher than that of bayonet types [47,64]. The intrinsically low shield inductances and shield contact resistances of threaded connectors result in comparatively low transfer impedances (see pages 457–458), which are an indication of better shielding performance. TNC connectors are essentially a threaded version of the BNC kind, and are useful when a connector with the size of a BNC, and the shielding effectiveness of a threaded design, is needed. Connectors that use a simple slide-on shield (such as D-type multi-pin devices) are much inferior to threaded or BNC types with regard to shielding performance. Unexpected frequency-dependent signal losses can be introduced by some connectors [65]. This is an especially serious issue with BNC types, and may cause problems in precision radio-frequency measurements. N connectors are to be preferred for this purpose.

(d) In the case of coaxial connectors, the bayonet action of the BNC connector, or the push-pull latching action of LEMO® and Fischer™ types, has the advantage of minimizing the possibility of improper mating due to human error. Threaded coaxial connectors must generally be fully threaded and tightened completely in order to achieve an adequate connection. Loose connector shells can also lead to electromagnetic interference problems, and (if they are tarnished or corroded) nonlinear behavior. One of the worst in this regard is the SMA connector [62]. If it has not been connected properly, it may not make contact at all at low frequencies, or (at high frequencies) may make capacitive contact across the air gap between the inner male and female parts, but exhibit large resonances and peculiar vibration-sensitive electrical behavior. In order to avoid this, SMA connectors must be tightened firmly using a torque wrench (i.e. "finger-tight" is not enough).⁸ (In general, coaxial connectors that have hex-wrench flats on their coupling nuts must be tightened by using a torque wrench. Overtightening, which is a strong possibility with a normal wrench, can easily damage or reduce the life of the connector.) TNC connectors are somewhat more tolerant of incomplete tightening, while N connectors are relatively forgiving of such mistakes [62]. (Moreover, the latter two connectors can be tightened by hand, without a torque wrench.) Threaded connectors are susceptible to mechanical damage by cross-threading during engagement. This is especially true of finely threaded ones such as SMA and N devices. Connectors with coarse threads are the most robust in this regard. (In other respects, N connectors are very robust, and perhaps even more resistant to damage than BNCs.) It can be seen that the choice between threaded coaxial connectors and bayonet or slide-on types generally involves a tradeoff between performance and resistance to motional disturbances, on the one hand, and resistance to human error, on the other.

⁸ Special torque wrenches for connectors are commercially available.

(e) Connectors and male–female adapters can be obtained with built-in radio-frequency filters. These "filtered connectors" (often available as D-subminiature types) are useful in experimental apparatus (such as low-temperature equipment) where radio-frequency interference is a problem. They have the advantage of being relatively compact compared with other filtering arrangements. The use of these devices is discussed in Ref. [47].
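The transfer impedance Z_t mentioned under point (c) can be used for back-of-envelope interference estimates: a current I flowing along the shield develops a voltage V ≈ Z_t × I in series with the signal. In the sketch below, both the Z_t values and the 10 mA shield current are assumed, illustrative figures, not ratings of any particular connector:

```python
def coupled_noise_volts(z_transfer_ohm, shield_current_amp):
    """Noise voltage injected in series with the signal path by a shield
    current flowing across the connector's transfer impedance."""
    return z_transfer_ohm * shield_current_amp

# Suppose a ground loop drives 10 mA of mains-frequency current along a
# cable shield (an assumed, but not unusual, figure):
i_shield = 10e-3
for name, zt in (("sound threaded shell", 1e-3),    # assumed ~1 milliohm
                 ("sound bayonet shell", 10e-3),    # assumed ~10 milliohm
                 ("loose/corroded shell", 1.0)):    # assumed ~1 ohm
    v = coupled_noise_volts(zt, i_shield)
    print(f"{name:22s}: {v * 1e6:8.1f} microvolts of interference")
```

The point of the comparison is the ratio: a loose or corroded shell can raise the coupled interference by several orders of magnitude relative to a sound, tight connector.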
12.3.4.2 Contact materials

The base materials in connector contacts (e.g. brass) provide bulk conductivity, strength, and spring properties for the contact. Such metals are generally not adequate by themselves as electrical interface materials, and must therefore be supplied with suitable surface coatings. The choice of these coatings is important. If analog or digital electrical signals are involved, the most reliable type (because of its resistance to oxidation and corrosion) is electroplated gold over electroplated nickel [58]. This is especially useful in circuits involving low currents and voltages (below about 100 mA and 500 mV) [66]. Electroplated silver over electroplated nickel can also be reliable, although its ability to withstand oxidation and corrosion is not as good as that of good-quality (non-porous) gold-on-nickel types. Silver oxide is conductive, and the softness of silver usually allows non-conductive silver-sulfide tarnish layers (formed in the presence of sulfur dioxide in the air) to be penetrated without too much difficulty. Nevertheless, considerably higher inter-contact forces are needed with silver-plated contacts than gold-plated ones, and excessive tarnishing causes problems in some situations. Silver is often preferred for contacts that must carry electric power [66], and high currents [67] – situations in which the presence of silver contact tarnish is usually not such a serious issue. The reasons for this preference are the large electrical and thermal conductivities of silver, and the ability of silver-plated contacts to resist welding under high-current conditions [68]. Difficulties can sometimes arise from the use of silver contact plating if low-level signals are involved. For example, problems can occur with BNC connectors (especially in humid or polluted environments) if the center contacts are silver plated, since these contacts are not well protected from the atmosphere.
The resulting tarnishing can lead to a form of degraded performance in which the connectors act as lossy high-pass filters [69]. In order to avoid such possibilities, it is good practice to select BNC connectors (and in general all connectors carrying signals and data) with gold-plated inner contacts.⁹ (This recommendation is subject to the caveat discussed below about incompatible contact materials.) An additional advantage of gold, in comparison with silver and other contact surface materials, is that it is easy to visually inspect for the presence of contamination, without the possibility of tarnish films being present to confuse the issue.

Electroplated tin over electroplated nickel is also used in low-cost connectors, where high contact forces can be applied, and in which high durability is not important [50]. Tin is about the same as silver with regard to the inter-contact force that is needed in order to make a good connection [67]. In order to form truly reliable contacts with tin, a connector lubricant must be used (see page 449). Tin tends to be susceptible to "fretting" corrosion, or ongoing oxidation in the presence of small relative movements of the contacts due (for example) to vibration.

Electroless nickel plate, without any protective layer of another metal, is used in parts of a connector (e.g. the outer shell of a shielded connector) where the mechanical forces involved in making the connection are high enough to allow oxide or corrosion layers to be penetrated. The inter-contact force that is needed in order to make a good connection is higher than that required with gold-, silver-, or tin-plated contacts [67]. Nickel is a relatively hard and wear-resistant contact material. Unlike gold, nickel is not impervious to corrosion (especially if chlorine-containing contaminants are present in the air) and, unlike silver, its oxide films are not conductive. Nickel is susceptible to fretting corrosion.

When contact surface materials are chosen, it should be kept in mind that there is little point in using a high-quality material (e.g. gold) if its mating counterpart is a lower-quality one (e.g. nickel or tin). The contact forces that are appropriate for the different materials can be an issue if dissimilar materials are used together.

⁹ The outer shells of shielded connectors are generally plated with other metals (e.g. nickel), with no loss of reliability.
In particular, the mating of gold-plated contacts (for which low forces are suitable) with ones that are plated with nickel, silver, tin, or some other non-noble finish (which require high forces) can result in very poor reliability [70]. What happens in such cases is that either: (a) the gold-plated contacts are unable to penetrate the oxide or tarnish layers on the non-noble ones, or (b) the non-noble contacts plough through the soft gold plating on the other contact, down to the base metal. Furthermore, if moisture is present, galvanic corrosion is likely to be a problem if two dissimilar materials are brought together. For these reasons, the contacts of connectors that are to be joined should generally be plated with the same materials.

The best base materials (supporting the overlying electroplated layers), in situations where the contact must undergo flexure (as in the case of female spring contacts), are generally one of the beryllium–copper alloys. This is because of their high strength, resistance to stress relaxation, and ability to withstand repeated stress cycles. Phosphor bronze may be satisfactory in some applications. Brass should be avoided, if possible. For those contacts that do not undergo flexure, such as pins, brass is often used.
12.3.4.3 Connector derating in the presence of large currents or voltages

The practice of derating is the use of a lower stress (e.g. voltage) level than the recommended maximum allowable value in order to improve reliability (see the discussion in Section 3.2). For connectors, the contact/contact and contact/shell voltages, and the contact currents, should be derated as shown in Table 12.2 [71].
Table 12.2  Recommended derating factors for connectors

                          Normal    High-reliability
Signal pins (current)      0.7           0.5
Signal pins (voltage)      0.6           0.5
Power pins (current)       0.5           0.4
Power pins (voltage)        –             –
The connector temperature should be derated by 30 °C [41]. Do not assume that connector contacts can all be operated simultaneously at their maximum rated current – check with the manufacturer. The derating of connector voltages depends on the detailed circumstances of use – see the comments regarding high-voltage connectors on pages 445–446.
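As a worked example of applying Table 12.2, the sketch below computes derated working limits. The derating factors are those given in the table; the 3 A rating is a hypothetical value chosen purely for illustration:

```python
# Derating factors from Table 12.2, as (normal, high-reliability) pairs:
DERATING = {
    ("signal", "current"): (0.7, 0.5),
    ("signal", "voltage"): (0.6, 0.5),
    ("power",  "current"): (0.5, 0.4),
}

def derated(pin_type, quantity, rated_value, high_reliability=False):
    """Maximum working value = rated value x derating factor."""
    normal, hi_rel = DERATING[(pin_type, quantity)]
    return rated_value * (hi_rel if high_reliability else normal)

# A hypothetical signal pin rated at 3 A:
print(f"{derated('signal', 'current', 3.0):.2f} A")                         # 2.10 A
print(f"{derated('signal', 'current', 3.0, high_reliability=True):.2f} A")  # 1.50 A
```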
12.3.4.4 Insulator materials

In situations where insulation resistance is especially important, as in the case of very-high-impedance or high-voltage circuits, Teflon (or PTFE) is preferred, because of its aversion to surface contamination [7,66]. It also has a very high volume resistivity, excellent chemical and moisture resistance, and tolerance of temperature extremes. However, the mechanical softness of Teflon can be a disadvantage in some applications. The good electrical properties, mechanical robustness, and chemical, moisture, and temperature resistance of polyetheretherketone (PEEK) make it an outstanding all-round insulation [72]. PEEK is a particularly useful insulation material in high-radiation environments.
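To see why insulation resistance matters so much in very-high-impedance work, consider a simple Ohm's-law estimate. In the sketch below the insulation-resistance figures are assumed, illustrative orders of magnitude, not measured properties of any particular insulator:

```python
def leakage_current(voltage_v, insulation_resistance_ohm):
    """Ohm's law: leakage current flowing across a connector insulator."""
    return voltage_v / insulation_resistance_ohm

# A clean PTFE insulator might present ~1e15 ohm, while a contaminated or
# damp surface film can drop that to ~1e9 ohm (assumed figures):
for label, r in (("clean insulator, ~1e15 ohm", 1e15),
                 ("contaminated film, ~1e9 ohm", 1e9)):
    i = leakage_current(10.0, r)        # 10 V across the insulator
    print(f"{label}: {i:.1e} A of leakage")
```

The contaminated case leaks about 10 nA at only 10 V, which would completely swamp a picoampere-level measurement.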
12.3.4.5 Provision of a ground pin in multi-pin connectors

Connector-shield contacts should generally not be relied upon for distributing grounding in an electronic system [5,44]. The mechanical (and hence electrical) contact between the shells in mating multi-pin connectors is often unstable. Furthermore, the shells on many types of multi-pin chassis connectors are covered either in non-conducting protective layers, such as anodizing, or poorly conducting ones, such as chromate. These coatings or surface treatments lead to very poor and unreliable contacts between the connectors and chassis. (Some high-end connectors, such as LEMO® and Fischer™ types, are better in these respects.) Especially in those cases in which the cable ground is part of a circuit, the ground connection should be made, not just by means of the connector shield contact, but also through a dedicated ground pin on the connector. Hence, a connector that contains four active pins should also have a fifth for the ground. In those cases in which the cable ground is part of a circuit, a dedicated ground wire in the cable (not just the cable shield) should be used for this purpose.
445
12.3 Connectors
Coaxial connectors and cables are, unavoidably, the exception to this rule. However, coaxial connectors are normally provided with highly conducting shell surface-finishes, such as nickel or silver. Furthermore, the better kinds (such as the type N) are provided with dedicated ground spring-contacts that are independent of the mechanical connection, and allow reliable electrical contact to take place between the shields of the mating pairs. (SMA connectors, on the other hand, are not as good in this respect.) At frequencies below about 100 kHz, and especially at audio frequencies, shielded twisted-pair cables are often preferred over coaxial cables. A three-pin shielded connector can be used in such cases, with a pin devoted to each wire in the pair, and a third pin employed to carry the grounded shield through the connector. This arrangement is employed in the XLR connectors used in professional audio systems.
12.3.5 Some particularly troublesome connector types

12.3.5.1 Microwave connectors
Even tiny amounts of damage can be very detrimental to the proper operation of coaxial connectors used well into the microwave range. (Microwave connectors can also be extremely expensive to replace.) Furthermore, if a dirty or damaged microwave connector is mated to another that is in good condition, even just once, it can cause irreparable damage to both connectors. In this fashion, connector damage can spread in a way that resembles an epidemic. In general, microwave connectors must be used with extraordinary care – especially if they are being employed for precision measurements. Such connectors should always be inspected prior to mating, and discarded immediately or marked and sent away for repair if any damage is found. Cleaning is essential if contamination (especially metal particles) is visible. Appropriate protection and storage, and regular gauging, are also extremely important for these devices. A complete treatment of these issues is beyond the scope of this discussion. Further information can be found in Ref. [54]. The subminiature SMA coaxial connector is very widely used in microwave work, partly because of its low cost. These devices are also used in other circumstances whenever a very compact coaxial connector is needed. SMA connectors are not precision components (especially in comparison with other microwave connectors), and are often crudely made [54]. (The “pins” in many male SMAs are merely the cut-off ends of the coaxial cable’s center conductor!) They are also fragile (compared with, for instance, TNC- and N-connectors) and can be short-lived (see page 435). Furthermore, because of their loose design and manufacturing tolerances, and slapdash construction, SMAs have the potential to damage other connectors to which they are mated. These include both other SMA connectors, and (especially) precision 3.5 mm microwave connectors, with which they are mechanically compatible.
(See also the comments on the tightening of SMAs on page 441.)
12.3.5.2 High-voltage connectors
Connectors that are exposed to high voltages (>600 V) can be a frequent source of reliability problems. These occur mainly in the form of corona and arcing. (See the general discussion
446
Interconnecting, wiring, and cabling for electronics
of high-voltage issues in Section 11.3.) Small air gaps within, or adjacent to, solid insulators in high-voltage connectors often lead to trouble, since corona readily occurs in these regions [73]. High-voltage connectors are seldom completely satisfactory, and should be avoided, if possible, in favor of permanent connections using ceramic feedthroughs or standoffs [13]. Connectors and cables operating at voltages above about 2 kV should generally be coaxial types. This ensures that the electric field across the interior of the insulation is uniform, with no voltage gradient across the insulation surface (which could otherwise cause corona), and no field outside the cable shield [74]. The derating of high-voltage connectors is an important matter. The derating factor will depend on whether the applied voltage (a) is constant d.c., (b) contains a.c. components, or (c) is pulsed d.c. The frequency of the a.c. components or the pulses will also play a role. Voltage ratings that are provided in connector datasheets are normally for constant d.c. Also, operation in low-air-pressure conditions (including high altitudes) can affect derating. The manufacturer should be consulted on these issues. Appropriate techniques for making solder joints in high-voltage circuits are discussed on page 421. Dust and other contaminants on insulating surfaces can cause problems, especially in conditions of high humidity. High-voltage connectors should always be cleaned before being mated. This is best done by wiping interface surfaces with high-purity (>92%) isopropyl alcohol, and then immediately drying them by blowing with clean and dry compressed gas, such as nitrogen. If corona makes an appearance in high-voltage connectors or cables, it is likely to get worse in time unless something is done. Complete degradation and breakdown in the form of arcing is possible over a time scale ranging from a few days to a few years [75].
The rate of degradation of insulation by corona is generally proportional to the frequency of the applied voltage. At high frequencies (e.g. in the megahertz range), the energy released by the corona is considerable, and tends to become more concentrated. This can result in the rapid drilling of holes in insulators by the discharge [76]. Cleaning and repairing the affected parts can be effective in preventing further difficulties. Sometimes complete replacement is the only solution. The connector manufacturer may be able to offer useful advice about corona problems. Although not ideal, a possible interim solution to connector arcing or corona problems is to coat insulator surfaces and fill in air gaps with silicone grease (see page 386). This also helps to repel moisture and keep out contaminants. Once silicone grease has been applied to a surface, it can be extremely difficult to remove completely. (See the discussion of silicone contamination issues in Section 3.8.3.) A general discussion of high-voltage wiring and connector issues is presented in Ref. [76].
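Taken at face value, the proportionality between degradation rate and frequency implies that insulation life shrinks in inverse proportion to frequency. The toy calculation below (with a hypothetical reference lifetime, assuming strict inverse scaling) illustrates how severe the effect can be at RF.

```python
# Illustrative corona-lifetime scaling: if the degradation rate is
# proportional to frequency, the lifetime scales as 1/frequency.
# The 10000-hour reference life is a hypothetical number.

def scaled_life_hours(life_at_ref_h: float, f_ref_hz: float, f_hz: float) -> float:
    """Lifetime at f_hz, given the lifetime measured at f_ref_hz."""
    return life_at_ref_h * f_ref_hz / f_hz

# A hypothetical 10000-hour life at 50 Hz mains shrinks to half an hour
# if the same insulation sees 1 MHz.
print(scaled_life_hours(10000.0, 50.0, 1e6))  # 0.5
```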
12.3.5.3 High-current connectors
High-current connector contacts are subject to a type of thermal-runaway failure, in which increased contact resistance (resulting from oxidation, or some other effect) results in an increase in local heating. This raises the temperature of the contacts, which may result
in further oxidation, and even greater contact resistance. At elevated temperatures, stress-relaxation effects may cause the contact force to diminish, which will also increase the contact resistances. Temperature-induced deformation of plastic insulators or connector housings can also reduce contact forces. High ambient temperatures may worsen the situation. This degradation process is unstable. At some stage, the deterioration of the contacts becomes very rapid, and complete failure quickly results. High-current connector contacts are often plated with silver, for reasons discussed on page 442. Derating is an important issue (see pages 443–444). The temperature of high-current connector housings should be checked periodically during use, and the contacts should be inspected from time to time. The cleaning of oxidized or corroded contacts may be effectual, as long as the deterioration is not too severe. However, if stress relaxation has taken place, there may be no alternative but to replace the connector. High-current connectors should not be decoupled when current is flowing, owing to the possibility of arcing and contact damage.
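The feedback loop described above can be made concrete with a toy iteration: resistance rises with temperature, dissipation rises with resistance, and temperature rises with dissipation. The coefficients below are invented for illustration; this is a sketch of the feedback mechanism, not a quantitative contact-physics model.

```python
# Toy model of contact thermal runaway. All coefficients are invented
# for illustration; this is not quantitative contact physics.
# Loop: temperature -> resistance -> dissipation -> temperature.

def simulate_contact(current_a, r0_ohm, alpha_per_degc, theta_degc_per_w,
                     t_ambient_c=25.0, steps=50):
    """Iterate the feedback loop; return the settled temperature (degC),
    or None if no stable operating point is found below 200 degC."""
    t = t_ambient_c
    t_new = t
    for _ in range(steps):
        r = r0_ohm * (1.0 + alpha_per_degc * (t - t_ambient_c))
        p = current_a ** 2 * r            # Joule heating in the contact
        t_new = t_ambient_c + theta_degc_per_w * p
        if t_new > 200.0:
            return None                   # runaway: contact keeps heating
        if abs(t_new - t) < 1e-6:
            break
        t = t_new
    return t_new

# A modest current settles a few degrees above ambient; quadrupling the
# current makes the loop diverge.
print(simulate_contact(5.0, 0.01, 0.01, 20.0))
print(simulate_contact(20.0, 0.01, 0.01, 20.0))  # None
```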
12.3.5.4 Mains-power plugs and receptacles
Mains-power receptacles often lose their ability to grip plugs over time and with repeated use, owing to the effects discussed on page 435 and above. This phenomenon can lead to accidental disengagement of plugs, intermittent power-outages, large voltage drops due to high contact-resistances, and arcing across the contacts. (Arcing can lead to other problems, such as further contact damage and electromagnetic interference.) A particularly insidious possibility is the (possibly intermittent) loss of the safety-ground connection, which (unlike loss of power connections) stands a good chance of going unnoticed. In addition to the electrocution risk posed by such a loss, the situation may also lead to noise problems. Mains-operated equipment is often provided with bypass capacitors (typically in the range 1–100 nF), which are connected between the live and neutral a.c. power conductors and chassis ground. This is done in order to prevent the passage of RF noise into or out of the equipment by way of the power line. If the safety-ground connection of such an item of equipment is lost, a considerable a.c. potential difference (possibly half the mains voltage) will develop between its chassis and earth ground [77,78]. Consider a situation in which this unintentionally ungrounded equipment is connected to a grounded noise-sensitive instrument via some isolation device (such as a signal transformer), which has been installed to prevent ground loops. The large a.c. voltage will appear across the isolator. Even if the isolator is not damaged, its common-mode rejection (CMR) abilities will be severely taxed by the large potential difference (50 V or more), and the noise level of the system is likely to increase considerably. Such problems can be reduced through the use of “hospital-grade” plugs and receptacles.
These are designed to be stronger, able to maintain more reliable connections over time, and capable of providing greater mechanical retention forces, than ordinary “general use”
devices.10 Hospital-grade devices have been strongly recommended for use in, for instance, professional audio systems [79].
12.3.5.5 Connectors on CAMAC instrumentation modules
The CAMAC system of modular electronic instruments is often used in nuclear- and particle-physics research. The modules are joined to the main unit (the CAMAC crate) by a card-type connector with a large number of contacts (86). Each crate can hold up to 25 modules, so that a crate may contain up to 2150 contacts. In order to ensure reliable operation, the connectors should be clean, and otherwise kept in good condition [80]. CAMAC connectors are relatively delicate. Misalignment and jamming are possible during mating, and the connectors are easy to damage through the use of excessive force. Furthermore, if the connector is misaligned by only 2–3 mm, intermittent contact problems can result. Power to the crate should be switched off during mating, because the voltage and ground pins on the connector are in close proximity.
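The contact count quoted above makes the cleanliness requirement easy to quantify. Assuming (simplistically) independent contact faults with a single per-contact probability – real crates will not obey this exactly – even a small per-contact fault rate gives an appreciable chance of at least one bad contact somewhere in a full crate.

```python
# Back-of-envelope reliability of a fully loaded CAMAC crate.
# Simplistic assumption: contact faults are independent, with one
# common per-contact probability (real crates will not obey this).

def p_at_least_one_bad(contacts: int, p_bad_per_contact: float) -> float:
    """Probability that at least one of `contacts` contacts is bad."""
    return 1.0 - (1.0 - p_bad_per_contact) ** contacts

full_crate = 25 * 86   # 25 modules x 86 contacts = 2150, as in the text
print(full_crate)      # 2150
# Even a 1-in-10000 per-contact fault rate leaves a sizeable chance
# (roughly one in five) of at least one bad contact in the crate.
print(p_at_least_one_bad(full_crate, 1e-4))
```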
12.3.6 Some points concerning the use of connectors

12.3.6.1 Physical and visual access
Delicate panel-mounted connectors should be physically and visually accessible (see page 434). Otherwise, robust kinds that are suitable for blind mating (e.g. scoop-proof connectors), or types that can be provided with guide pins or other mating aids, should be employed.
12.3.6.2 Reducing contact wear and corrosion
As discussed on pages 434–435, the wearing-out of connectors can be a problem in some cases. For example, many common connectors are rated for only 500 mating/demating cycles, and for some types the lifespan may be even smaller. Since these limits can easily be exceeded in the lifetime of many laboratory instruments, precautions may be needed to avoid having to replace connectors on the instrument. One simple way of doing this is by using “connector savers” (or “port savers”) [41]. These are simply modules consisting of back-to-back male and female connectors, which are inserted between the instrument connector and the cable connector. Connector-savers are joined to the instrument on a semi-permanent basis, and wear takes place on them, rather than on the connector directly attached to the instrument. When the connector saver is worn out, it is unplugged from the instrument and replaced with another. The use of connector savers is very common with microwave equipment. This is because of the delicate nature of microwave connectors, their extreme sensitivity to even tiny amounts of damage, and the high cost of sending such equipment out for repair.
10 In the UK, ordinary mains plugs and receptacles are designed and constructed to an unusually high level of robustness, which is mandated by the British Standards Institution. There are no separate hospital-grade devices.
Fig. 12.9 Method of reducing crosstalk in a multi-pin connector carrying low-level and high-level signals. The victim low-level pins are separated from the culprit high-level ones by a barricade of grounded pins. (Figure labels: “low-level pair”, “high-level pairs”.)

Another possible approach to the prevention of connector wear is the use of connector lubricants (or “contact lubricants”). These substances are commercial products that are supplied in the form of oils or greases, and which are applied directly to the connector contacts. (Greases are generally preferred, as they are more likely to stay in place than oils.) Lubricants do not prevent the contacts from making electrical connections with each other, or otherwise diminish the quality of the connections. Not only do connector lubricants prevent wear, but they can also reduce the force required to mate and demate connectors. In the case of contacts that are susceptible to oxidation and corrosion, lubricants can also provide protection from moisture and corrosive agents in the atmosphere. They thereby reduce the risk of chemical, galvanic and electrolytic corrosion, as well as fretting. Connector lubricants are particularly useful (and can even be considered essential) in permitting tin-plated contacts to function reliably [33]. The main disadvantages of these substances are their tendency to attract dust, and the mess involved in their use. They can also make it difficult to inspect contacts. For these reasons, such lubricants are not likely to be very useful under most conditions in a normal laboratory environment. In particular, they are probably not suitable for miniature and precision connectors. The best connector lubricants have been found to be five- and six-ring polyphenyl ethers and viscous perfluoroalkyl polyethers [59]. Substances to be avoided include silicones (which, among other things, are poor metal lubricants) and water-soluble polyalkylene glycols (which are poor lubricants and can cause corrosion).
(The latter are often referred to as “contact enhancers”, because they may produce a temporary reduction in contact resistance.) The use of connector lubricants is discussed in Ref. [59].
12.3.6.3 Minimizing crosstalk problems in multi-pin connectors
As mentioned earlier, crosstalk between adjacent pins can be an important issue for circular multi-pin connectors. Such behavior can be reduced by isolating those pins that might be the source of interference (e.g. carrying high-level digital signals) from those that are sensitive to it (e.g. carrying low-level analog signals) with a barricade of grounded pins (see Fig. 12.9). The pins should be grounded on both sides of the mating connector pair (e.g. on the plug and socket) [47]. Alternatively, a row of unused pin cavities results in isolation that is sufficient for some purposes [29].
Certain types of circular multi-pin connectors (e.g. the bayonet-locking “Bendix” style, and push-pull LEMO® and Fischer™ connectors) permit the use of coaxial pin geometries. These accept coaxial cable conductors, and can significantly reduce crosstalk problems. Crosstalk between conductors in multi-conductor cable is discussed on pages 462 and 468.
12.3.6.4 Thread-locking compounds for securing connector parts
If the loosening of threaded connector-components (such as connector shell subsections, cable clamp screws and backnuts, etc.) is a problem, a thread-locking compound may be used to secure them. These substances are adhesives that have been especially devised to prevent threaded fasteners from becoming loose, but permit them to be taken apart should the need arise. Many such compounds are suitable for this purpose. The one indicated in Ref. [81] has proven useful for securing backnuts. Only a very small amount of a thread-locking compound should be applied to any threaded component.
12.3.6.5 Inspection and cleaning
Connectors should occasionally be inspected for damage, corrosion or contamination. In the case of certain precision or subminiature coaxial connectors (especially those used in microwave work), this should be done every time the connector is mated. Connectors that are used infrequently should also be inspected before mating. Regarding damage, one should look for: bent, recessed or protruding pins; bent or missing pin tines;11 chipped or worn plating; displaced or damaged dielectric inserts; and thread damage on screw-on connectors such as SMA- or N-types. Coaxial connector shells (e.g. female BNC connectors) may be out-of-round. Pay attention to the possibility of the loosening of connector shell subsections or corrosion at their junctions; and loose or missing cable-clamp- or shell-ground-screws. One should also look for the presence of foreign bodies, especially in the female sections. In the case of high-voltage connectors, also check for evidence of corona, arcing and tracking (see Sections 11.3.1 and 11.3.4). High-current and high-power connectors should be inspected for contact pitting caused by arcing. Damaged connectors should generally be removed from service without hesitation. NB: Connectors are sometimes damaged because their mating counterparts are damaged (as in the case of microwave connectors). The cleanliness of interface areas is especially important for connectors that are part of high-voltage, high-current or very-high-impedance circuits, or high-frequency RF and microwave systems. Connectors in general that have not been used for a long time should be cleaned before mating. This should be done initially using clean compressed gas – but not air from the general laboratory compressed-air supply (see Section 3.8.2). If necessary, this should be followed by gentle wiping using a clean lint-free swab dampened with high-purity (>92%) isopropyl alcohol.
Other solvents, such as acetone, denatured alcohol, methyl alcohol, and chlorinated hydrocarbons, can harm plastic insulating parts in connectors [54]. Aerosol cans filled with solvent should not be used for cleaning, since the spray tends to penetrate and soak the connector.
11 Pin tines are flexible contact prongs.
Corrosion or tarnish on contact surfaces should not be removed with abrasive substances (e.g. metal polishes), because of the possibility of damaging plating materials. An eraser may be acceptable and useful for this purpose in some cases. However, if corrosion or tarnish is proving troublesome, the best remedy is usually to replace the connector. The practice of “fixing” high-resistance or intermittent connectors by repeatedly disconnecting and reseating them is seldom a satisfactory long-term solution to such problems. Also, it can make matters worse just as often as it improves them [82].
12.4 Cables and wiring

12.4.1 Modes of failure
Along with connectors, cables are the most unreliable components in electronic systems [2]. (Making this distinction is not always helpful, since cables and connectors are generally used together, and it is often hard to distinguish between cable and connector problems.) The faults that occur in wires and cables are generally similar to those of connectors and contacts. Hence, one may observe:
(a) open and short circuits,
(b) excessive losses, in cables that are used as transmission lines,
(c) electromagnetic interference (EMI), including cross-talk between conductors,
(d) microphonic (i.e. vibration-induced) noise (mainly in high-impedance circuits),
(e) signal reflections due to impedance changes (particularly at frequencies ≳ 10^8 Hz),
(f) arcing and corona in high-voltage circuits, and
(g) overheating in power circuits.
As is usual with the items discussed in this chapter, intermittent problems are not uncommon. These include sporadic open and short circuits, EMI, microphonics, and arcing and corona. Open circuits in wires and cables are often the result of fatigue failure, possibly owing to aggressive or excessive bending during handling, or because of vibrations. A relatively common type of short circuit takes the form of contact between a wire or cable conductor, and earth or chassis ground. Such failures are known as “ground faults.” These often occur because of pinching of the wire or cable (e.g. underneath the lid of an enclosure), or chafing of their insulations (perhaps owing to vibrations). Accidental short circuits of this kind can cause ground loops (see Section 11.2.1) and other problems that may not be immediately evident. Failures in a coaxial cable that would be observed as short or open circuits at d.c. may not manifest themselves as such at radio frequencies. For example, a shorted cable can often act as a trap (i.e. a type of filter that rejects only a certain range of frequencies), because of the resistance and self-inductance of the short circuit [83]. Similarly, signals can propagate through a cable with an open circuit, owing to capacitive coupling effects, but with highly frequency-dependent characteristics.
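The frequency dependence of a faulty cable can be illustrated with the standard input-impedance formula for a terminated transmission line. In the sketch below, the fault values (a small resistance in series with a small inductance, standing in for a resistive, slightly inductive short) and the cable parameters are invented for illustration.

```python
# Why a d.c. short in a coaxial cable need not look like a short at RF.
# Standard lossless-line input impedance; the fault model (0.05 ohm in
# series with 5 nH, standing in for a corroded short) is invented.
import cmath
import math

C = 3e8  # free-space speed of light, m/s

def input_impedance(z0, z_load, f_hz, length_m, velocity_factor=0.66):
    """Impedance seen looking into a line of given length ending in z_load."""
    beta = 2 * math.pi * f_hz / (velocity_factor * C)  # phase constant, rad/m
    t = cmath.tan(beta * length_m)
    return z0 * (z_load + 1j * z0 * t) / (z0 + 1j * z_load * t)

def fault_impedance(f_hz, r_ohm=0.05, l_henry=5e-9):
    """Crude corroded-short model: small resistance in series with inductance."""
    return complex(r_ohm, 2 * math.pi * f_hz * l_henry)

# At 1 kHz the fault reads as a short; at 100 MHz, viewed through 2 m of
# 50-ohm cable, the same fault presents a very different impedance.
for f in (1e3, 1e8):
    zin = input_impedance(50.0, fault_impedance(f), f, 2.0)
    print(f"{f:.0e} Hz: |Zin| = {abs(zin):.2f} ohm")
```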
If one of the wires in a cable that is part of a balanced circuit (e.g. a twisted pair) is accidentally grounded, the circuit will become unbalanced, and its immunity to common-mode electromagnetic interference will be greatly reduced (see page 381) [84]. Another potential cause of an unbalanced condition is the presence of a high-resistance joint in series with one of the wires. This can be caused by a poor-quality solder contact, corrosion of the joint, etc. Problems can also arise if one of the wires in a balanced pair is inadvertently connected to an unused (floating) wire in a multi-conductor cable. This will create a capacitive unbalance, which can significantly lower the immunity of the circuit to high-frequency common-mode interference (i.e. the unused wire can act as an antenna). Other subtle problems are also possible. For instance, in the case of coaxial cables used in RF and microwave applications, cable deformation can cause local changes of impedance that can lead to signal reflections. Deforming high-voltage cables can reduce the ability of these to withstand voltage [74]. In the case of shielded twisted-pair cables used in low-frequency (<100 kHz) circuits, damage may take the form of alterations in the relative positions of the conductors. This can change the distributed capacitance of the cable [85]. Loss of balance in an otherwise balanced circuit, and consequent noise problems, is one possible result. Also, if such a cable is subjected to vibrations, it may generate microphonic noise. Shield-related electromagnetic interference problems in cables usually arise because the shield has been improperly connected at its ends, or because the end connections have deteriorated (see Section 12.4.4). It is not uncommon for the shielding effectiveness of end connections to diminish by 20 dB after five years of exposure to mechanical abuse, moisture, etc. [47].
However, unusual amounts of electromagnetic interference can be produced or received by such a cable because of oxidation or corrosion of the shield conductors themselves (see page 454), or because of physical damage to the shield. As for physical damage, even a narrow slit in a shield may result in serious EMI problems, because the slit can act as a “slot antenna” [45]. Damaged cables can have strange properties. For instance, if the center conductor of a coaxial cable has broken, and the severed wires are still touching, but have corroded, the result may be a nonlinear junction [6]. Although this is an unusual situation, it can result in signal rectification and EMI problems. In poorly made coaxial cable intended for high-voltage use, voids within the dielectric, or between the conductors and the dielectric, can be locations of corona [76].
12.4.2 Cable damage and degradation

12.4.2.1 Vulnerable cable types
The sensitivity of a cable to damage and degradation is, to some extent, dependent on how it is being used. Cables that are employed in RF and (particularly) microwave systems, high-voltage apparatus, high-impedance circuits, and in experimental setups used in low-level or high-precision measurements, are especially vulnerable.
12.4.2.2 Causes of cable failure in the form of handling and abuse
Manual handling and mistreatment of cables is undoubtedly one of the most significant causes of cable faults. Fatigue failure of conductors due to excessive flexing, short circuits, and damage to cable insulators/dielectrics, are some possible results. Cables are generally fragile, and have little mechanical strength. Cables generally have a minimum allowable bend radius, and can be destroyed if forced around smaller radius curves. Minimum bend radii of five times the cable diameter are usually acceptable, but for very high reliability applications a factor of ten is sometimes required [29]. Excessive bending can occur at places that are not obvious, such as near the interface between a cable and its connector (even if the connector is provided with a “bend relief”) or at the location of marker tubing. Dragging a cable over a sharp edge can also have this effect. Twisting a cable about its axis can be a particularly harmful activity – causing damage to the junction between the cable and its connector, or possibly the cable itself, depending on how the torque is applied. The common practice of coiling a cable for storage by tightly winding it around the hand and elbow is a frequent cause of damage as a result of the twisting involved. The correct method is to roll the cable up into loose loops. Similarly, preparing a cable for use by pulling loops out of the coil, rather than by unrolling it, creates harmful torsional stresses. One of the actions with the greatest potential of leading to cable failure in a laboratory is that of leaving cables on the floor. Here they can be tripped over, run over with wheeled equipment, have their connectors stepped on, etc. In addition, cables and their connectors can be damaged or degraded when floors are wet-mopped, and are at risk in the event of a flood.
(If water is allowed to enter the end of a coaxial cable, it will migrate along the braid by capillary action, and cause corrosion, degradation of the dielectric, and irregular changes in capacitance and characteristic impedance.) Securing coaxial cables to objects by using “cable ties” under excessive tension can cause damage to the dielectric, in the form of deformation. Other common causes of severe cable (and connector) damage include: tugging on a cable in order to remove its connector from a socket, using cables to pull equipment around, or employing them to support devices or instruments. Sideways forces on a cable that is connected to something can be very harmful to the two connectors involved. Even in the absence of abuse, the greatest mechanical stresses on cables generally occur when they are moved and/or installed, and such activities will inevitably reduce their life spans.
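The bend-radius rule of thumb quoted above can be captured in a one-line helper. The factors of five and ten are the ones given in the text; they are guidelines, not universal limits, and cable datasheets may specify stricter values.

```python
# The rule of thumb from the text, as a helper: minimum bend radius of
# five cable diameters, or ten for very-high-reliability applications.

def min_bend_radius_mm(cable_diameter_mm: float,
                       high_reliability: bool = False) -> float:
    factor = 10.0 if high_reliability else 5.0
    return factor * cable_diameter_mm

print(min_bend_radius_mm(5.0))         # 25.0
print(min_bend_radius_mm(5.0, True))   # 50.0
```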
12.4.2.3 Cable deterioration and ageing
If cables are tightly stretched between objects that are subject to relative movement, they are at risk from fatigue failure. Mechanical vibrations in wires and cables can cause problems from several standpoints, especially if resonances are allowed to occur. This is of particular significance for wires or cables that are attached to mechanical pumps, compressors, fans, etc. Fatigue failure is one possibility. Insulation damage due to chafing can also take place,
if the vibrating wire or cable is touching a stationary object. If the object is a conductor, this can lead to short circuits, ground loops, or other problems. Static shielded cables (i.e. not subjected to movement) can be prone to a slow deterioration of their shielding ability. The effectiveness of braided cable shields relies on the ability of individual wires within the braid (which are just resting against each other) to make mutual electrical contact. Of course, one would not expect press contacts of this type to be very stable. Significant reductions in the shielding effectiveness of double copper-braided coaxial cables have been observed, while the cables were resting undisturbed in a laboratory environment for a period of three years [86]. The wires in the braid were plain (uncoated) copper. This degradation took the form of a roughly 40 dB increase in crosstalk between two 30 m cables, placed side-by-side, which were used as the test samples. The worst crosstalk in a frequency range of 1–100 MHz was used to make the comparison. This deterioration was apparently caused by oxidation of the copper, because cable with silver-plated copper braid wire showed no such effects. At the end of this three-year experiment, the cables were flexed, and this restored the original shielding performance (at least in the short term). Coaxial cables with tinned-copper braid were also examined in these investigations [86]. Although these were studied for a shorter time than the plain copper braid and silver-plated braid cables, they also exhibited an increase in crosstalk. The rate of increase in crosstalk for these cables (almost 20 dB in 1.5 years) was about the same as for the plain copper-braided ones. Coaxial cables with foam dielectrics have a tendency to degrade, owing to the softness of the dielectric, and the ease with which it permanently deforms. 
As a result, such cable is particularly susceptible to deterioration during handling and installation (pinching, bending, etc.). Furthermore, the foam is vulnerable to “cold-flow.” This means that the center conductor can migrate with respect to the shield over time, which will cause local changes in the characteristic impedance of the cable [87]. This is especially problematic if a cable has been routed around corners, and its conductors are therefore constantly under stress. Cables with solid polyethylene dielectrics are more rigid, and hence slightly more difficult to manipulate than foam dielectric types. They also have higher RF losses than foam ones. However, solid dielectric cables are more robust than foam types, and are much less susceptible to cold flow.
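It may help to restate the shielding-degradation figures quoted above as ratios and yearly rates; this is plain decibel arithmetic applied to the numbers cited from the study.

```python
# The quoted crosstalk changes, restated as ratios and yearly rates
# (plain decibel arithmetic applied to the figures cited above).

def db_to_voltage_ratio(db: float) -> float:
    return 10.0 ** (db / 20.0)

# 40 dB more crosstalk corresponds to a 100x rise in coupled voltage.
print(db_to_voltage_ratio(40.0))   # 100.0
# Plain copper braid: ~40 dB over 3 years; tinned copper: ~20 dB over
# 1.5 years. The two rates are essentially the same, ~13.3 dB per year.
print(40.0 / 3.0, 20.0 / 1.5)
```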
12.4.3 Selection of cables and cable assemblies12

12.4.3.1 Provenance
Cables (especially patch cords13) with an uncertain history are often a source of trouble. One should know the provenance of cables used in critical applications: whether they are homemade or of commercial origin, who made them, how old they are, what they have been used for in the past, and whether they have been subjected to abuse.
12 A cable assembly is a collection of one or more cables that are joined together, with attached connectors.
13 A patch cord is a cable assembly comprising a single length of cable (often about 1 m long) with a male connector at each end.
12.4.3.2 Commercial versus homemade types
Many of the problems associated with cables stem from the incorrect wiring and installation of their connectors (see pages 433–434) [23]. A common reason for this is that the task of making cable assemblies is often taken up by, or imposed upon, those who have little experience, no training or written procedures, and possibly the wrong tools. Nevertheless, this route is often taken – probably owing, at least in part, to the apparent simplicity of the operation, as well as for convenience and possibly as a way of saving money. In reality, the wiring of connectors is a task that requires skill, care and patience. This should normally be left to experienced workers. Instead of making one’s own cable assemblies, it is usually much better to buy them commercially. This is especially true in the case of standard items such as coaxial patch cords. The cost of these is usually just a small fraction of the total cost of the experimental setup. Coaxial patch cords (containing BNC or SMA connectors, etc.) are off-the-shelf items from many electronics distributors. If shielded twisted-pair patch cords are needed, ones made for professional audio applications may be suitable. The three-pin XLR connectors (“XLR-3” types) that are used in these are somewhat bulky, but otherwise satisfy most of the requirements of a good connector (see also page 445).14 In the case of non-standard cable assemblies, these can be put together on a custom basis by one of the numerous firms that specialize in this activity. There are wide variations in the quality of cables. Deficiencies may exist in, for instance, the effectiveness and durability of a cable’s shielding material, the uniformity and stability of its dielectric, the quality of its connectors, and the soundness of the joints between these and the cable. For the purpose of doing research it pays to obtain high-quality cables and cable assemblies.
If it is necessary for the user to make up a cable assembly, it is highly advisable to follow the connector manufacturer’s instructions for joining the connectors to the cable(s). Useful information on some subtleties of wiring and installing shielded connectors and cables can be found in Ref. [47]. If connectors are to be used to carry electric power, their wiring should be arranged so that power is not present on unconnected male contacts, since these are more exposed than female ones, and therefore more vulnerable to accidental short circuits [29]. Newly purchased commercial patch cords should be marked in a way that allows them to be distinguished from homemade types, which tend to migrate around laboratories.
12.4.3.3 Choosing cables for use under conditions of flexure and vibration
Normally it is desirable to use cables that contain stranded wires, rather than solid ones, since the former are more flexible, and tend to provide much greater reliability under repeated bending. (See also the comments regarding hookup wire on page 466.) A large number of small-diameter strands provides better flexibility and greater reliability than a smaller number of large-diameter ones. Stranded wires are also more effective than solid ones at damping vibrations, and thereby limiting vibration amplitudes [88]. If several independent wires or cables run in the same direction, their damping and stiffness can be increased by harnessing them together. Aluminum has poor fatigue properties, and this limits its use as a flexible electrical-conductor material [89]. (It is also difficult to make reliable contacts to aluminum through its robust native oxide layer.)

14 NB: The XLR connector-shells are often not grounded, and in these cases the shielding is effectively incomplete. In such cases, the cable shields are terminated in a "pigtail" configuration, which can cause EMI problems (see pages 459–460).

Interconnecting, wiring, and cabling for electronics
12.4.3.4 Robustness of small cable conductors Wires that are smaller than about 0.51 mm (24 AWG) are delicate and require special handling. In cables, wires smaller than this should be made of a high-strength copper alloy. Cables containing wires smaller than about 0.41 mm (26 AWG) should generally be avoided [41].
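The AWG sizes quoted above map onto metric diameters via the standard AWG formula, d = 0.127 mm × 92^((36−n)/39). As a quick check of whether a given wire falls below the handling thresholds mentioned in the text, a short sketch (the formula is the standard AWG definition, not something specific to this book; the function names are mine):

```python
def awg_to_mm(n: int) -> float:
    """Diameter of AWG size n in millimetres (standard AWG formula)."""
    return 0.127 * 92 ** ((36 - n) / 39)

def needs_special_handling(awg: int) -> bool:
    """True if the wire is finer than ~0.51 mm (24 AWG) -- the threshold
    quoted in the text for delicate wires."""
    return awg_to_mm(awg) < 0.51

print(f"24 AWG = {awg_to_mm(24):.2f} mm")  # ~0.51 mm
print(f"26 AWG = {awg_to_mm(26):.2f} mm")  # ~0.40 mm
```

The same formula reproduces the sizes quoted later for enameled wire (AWG 40 is about 80 µm, AWG 34 about 160 µm, AWG 44 about 50 µm), which makes it a convenient sanity check when reading wire specifications.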
12.4.3.5 Derating of wires and cables
The recommended derating factors for conductor current, and for voltage between conductors, are 0.8 (normal reliability) and 0.7 (high reliability) [71]. Wires that are grouped in bundles of more than 15 should be given a current-derating factor of 0.5 [41]. The necessary current-derating factor may also depend on the tolerable voltage drop along the conductor. If wires and cables are to be used in a high-voltage application, where corona and arcing are possibilities, other considerations may have to be taken into account (see the comments regarding high-voltage connectors on page 446). The manufacturer should be consulted on these issues. Operation of conductors at reduced pressures can also affect derating factors. For instance, in a hard vacuum, the air convection that would normally help to cool a wire is absent, and smaller current-derating factors are required under such conditions.
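The derating factors above compose multiplicatively. A minimal sketch applying the numbers quoted in the text (0.8 normal, 0.7 high reliability, an extra 0.5 for bundles of more than 15 wires); the text gives no numerical factor for vacuum operation, so that case is flagged rather than guessed at, and the function name is mine:

```python
def derated_current(rated_current_a: float,
                    high_reliability: bool = False,
                    bundle_size: int = 1,
                    hard_vacuum: bool = False) -> float:
    """Apply the current-derating factors quoted in the text [41,71]."""
    if hard_vacuum:
        # No convective cooling in vacuum; the text says smaller factors
        # are needed but gives no value -- consult the manufacturer.
        raise ValueError("vacuum derating factor not specified; "
                         "consult the manufacturer")
    factor = 0.7 if high_reliability else 0.8
    if bundle_size > 15:
        factor *= 0.5  # bundles of more than 15 wires [41]
    return rated_current_a * factor

# A 10 A-rated wire, high reliability, in a 20-wire bundle:
print(f"{derated_current(10.0, high_reliability=True, bundle_size=20):.2f} A")
```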
12.4.4 Electromagnetic interference

12.4.4.1 Grounding of cable shields
Cable shields must not be allowed to float if they are to function as electromagnetic shields.15 It is essential that they be connected (grounded) to equipment enclosures (shielded chassis) and circuit commons at one or both ends of the cable. Some grounding schemes are illustrated in Fig. 12.10. If radio-frequency interference is a concern, a cable shield must be directly connected to the equipment enclosures at both ends (without forming "pigtails" – see below). If, in the case of non-coaxial shielded cable (such as shielded twisted-pair), electrostatic interference and ground loops are the only concerns, it is permissible to interrupt the cable shield at one end in order to prevent a ground loop (see Fig. 11.5). (Electrostatic interference is that from low-frequency electric fields.) If RF interference, as well as electrostatic interference and ground loops, is a problem with such a cable, the continuity of the shield at low frequencies may be broken, while its RF continuity is maintained, by using a capacitor to complete the ground at one end. Ground-loop issues are discussed in Section 11.2.1. Cable grounding strategies are covered in detail in Ref. [45].

15 Coaxial cable shields are sometimes used only as circuit return conductors, with no intent to take advantage of their shielding ability.

Fig. 12.10 Some preferred methods of grounding cables: (a) shielded twisted-pair in an unbalanced circuit, (b) coaxial cable in an unbalanced circuit (signal source in a shielded enclosure), (c) shielded twisted-pair in a balanced circuit, with a differential amplifier (signal source with center tap to ground, if available).
12.4.4.2 Choice of cable-shield coverage
The coverage provided by a cable shield (i.e. the fractional area of the cable covered by the shield) is an important consideration if EMI is an issue. Nevertheless, it must be kept in mind that most cable-related EMI problems at radio frequencies (i.e. RFI) are caused not by the quality of the cable shield, but by how it is connected at the ends, in the following sense [47]. The screening ability of a shielded cable link can be quantified in terms of its transfer impedance Zt. This relates the interference noise current that flows along the surface of the shield (Ishield) to the voltage that appears between the inner conductor(s) and the shield (Vi) via: Zt = Vi/Ishield [47]. The lower the value of Zt (which is frequency dependent), the better the shielding performance. Its value is determined by the cable-shield contribution and (often at least as important) that of the cable-shield terminations. The termination contribution is raised by resistances due to imperfect contacts in the connectors, and by inductances arising from asymmetrical attachment of the shields to the connectors (e.g. pigtails). Threaded coaxial connectors (such as N- or TNC-types) typically make a smaller contribution to the total Zt than BNC-connectors.

Shielding by copper braid will inevitably be imperfect, because of the numerous openings in the braid weave. Amongst the various kinds of commercial cable there are large variations in the cable area covered by braid. This will dramatically affect the shielding ability of the cable. For single-layer cables, braid coverage of less than about 85% is virtually worthless, and in such cases the cable can be considered unshielded. Braid coverage of 95% may be considered good, and certain double-braid cables (with two layers of braid) can have coverages of 99%. The latter are sometimes employed when EMI is especially troublesome. At frequencies of more than a few megahertz, double-braid cables can provide transfer impedances lower by a factor of almost 10^2 than those of single-braid types (see Fig. 12.11), and about 30 dB greater attenuation of interference [47]. The shielding performance above about 1 MHz is better if there is a layer of insulation between the two braid layers [90]. Double-braid cables are stiff compared with single-braid ones, and are also relatively heavy. Furthermore, extra attention should be given to the quality of the connectors, in order to take advantage of the extra shielding. (N-connectors, or perhaps TNC-types, are preferred over BNC-connectors [47].) Even better shielding performance is provided by semi-rigid coaxial cables, but these are normally used only in microwave work. Cable shielding is extensively discussed in Refs. [45], [47], and [90].

Fig. 12.11 Transfer impedance Zt (in Ω/m) of various types of coaxial cable as a function of frequency, from 10 kHz to 1 GHz (data are from Ref. [47]). In order of decreasing Zt, the curves are: RG-58/U (single braid), RG-55/U and RG-214 (double braid), two braids plus one mumetal layer, and semi-rigid coaxial cable.
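The relation Zt = Vi/Ishield lends itself to quick back-of-envelope estimates of coupled noise, since Zt is specified per unit length. A sketch with illustrative Zt values of my own choosing, roughly in the spirit of the single- versus double-braid comparison above (in practice, use the manufacturer's measured Zt data):

```python
def coupled_noise_volts(z_t_ohm_per_m: float,
                        cable_length_m: float,
                        shield_current_a: float) -> float:
    """Interference voltage between inner conductor and shield:
    Vi = Zt * length * Ishield  (Zt in ohm/m, frequency dependent)."""
    return z_t_ohm_per_m * cable_length_m * shield_current_a

# Assumed Zt values at ~10 MHz (illustrative only):
Z_T = {"single braid (RG-58-like)": 1e-1,   # ohm/m
       "double braid (RG-214-like)": 1e-3}  # ohm/m

for cable, zt in Z_T.items():
    vi = coupled_noise_volts(zt, 2.0, 1e-3)  # 2 m cable, 1 mA shield current
    print(f"{cable}: {vi * 1e6:.1f} uV of coupled noise")
```

With these assumed values, the double-braid case comes out a factor of 10^2 lower, consistent with the improvement quoted in the text.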
12.4.4.3 Difficulties caused by foil and wrapped shields Some types of shielded cable make use of very thin aluminum foil as the shield material. Since the foil is solid, it provides almost 100% coverage, which makes it more effective at shielding against low-frequency electric fields than 95% coverage braided copper [45]. However, foil shields have comparatively high transfer impedances, and are not as good at shielding against magnetic fields (this is of particular relevance at radio frequencies) [65]. This is partly because of the relatively high resistance of the aluminum foil, and also because the foil is difficult to terminate properly, so that most of the shield currents flow through a “drain wire,” which runs alongside it. Also, foil shields do not withstand repeated cable flexure very well, compared with braided copper types [87,90]. The foil is very fragile, and tends to split near the connectors or other regions where the cable is flexed. Cables that are to be used as patch cords (and therefore subject to regular handling) should not be shielded with aluminum foil alone. Cables are available in which the shield consists of a copper braid that is surrounded by aluminum foil. Openings in the braid are thereby covered by the foil, and hence the shield coverage can approach 100%. Shields of this type are more robust than all-foil kinds, and the braid can be terminated properly. Nevertheless, the shielding provided by 95% coverage braided-copper shields (without aluminum foil) is generally adequate for most purposes. Cables with braided shields are preferable in the majority of applications – especially when they are used as patch cords. High-coverage braid-shielded cables are also particularly useful in situations in which the cable forms part of an unavoidable ground loop, because of their low shield resistance (see page 367). Shielded twisted-pair cables are often made with combination braid/foil shields. These may be acceptable for use in patch cords. 
Some cables are provided with "wrapped wire" ("served" or "spiral") shields. Although these are very pliable, the shield wires tend to open up with repeated flexing [51]. As a result, the effective shield coverage is diminished. Also, because the shield wires are wrapped in a spiral, they can exhibit inductive behavior that prevents the shield from working at high frequencies [87]. In high-impedance circuits, increased microphonic noise can also be a problem. Such cables should generally be avoided in favor of those with braid or braid/foil shields.
12.4.4.4 Attachment of shielded cables to their connectors ("pigtail" problems)
A common reason why shielded cable assemblies do not perform to expectations in the presence of high-frequency interference is that the cable shield is not joined correctly to the connector shell. It is important that the shield braid or foil has a complete 360° attachment to a conductive connector shell, and is not routed off to one side, so that the circular symmetry is lost. The latter condition, which is known as a "pigtail," can lead to a significantly increased transfer impedance, and correspondingly higher susceptibility to electromagnetic interference.

In a multi-pin connector, the worst situation is when the cable shield is not directly attached to the connector shell at all (see Fig. 12.12). Here, the shield is connected through a pin to the chassis ground on the inside of the apparatus (this is often done for convenience). The long wire that comprises the ground lead (the "pigtail") may have an inductance of several tens of nanohenries. This results in a large transfer impedance at frequencies much above about 10^5 Hz. As a result, the arrangement can behave almost as if the cable shield were not grounded, and the shielding effectiveness is thereby greatly compromised [65]. Problems can still arise if the cable shield is connected directly to the connector shell, but does not have 360° attachment symmetry. The correct method of joining is shown in Fig. 12.13. Certain kinds of multi-pin connector make it easier to achieve this condition than others, and this should be kept in mind when selecting them (see Fig. 12.7). Some connectors of this type are supplied with flat cable clamps that provide only two points of contact with the cable shield, which results in a large transfer impedance.

Fig. 12.12 (a) Pigtail formed by joining a cable shield to chassis ground by means of an auxiliary pin on a connector. (b) Equivalent schematic, showing the length of pigtail between the cable shield and the chassis (see Ref. [65]).
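The frequency dependence of the pigtail problem follows from its impedance, |Z| = sqrt(R² + (2πfL)²). A sketch using an assumed inductance of 50 nH (within the "several tens of nanohenries" mentioned above) and an assumed contact resistance of 5 mΩ:

```python
import math

def pigtail_impedance_ohm(freq_hz: float,
                          inductance_h: float = 50e-9,
                          resistance_ohm: float = 5e-3) -> float:
    """Magnitude of a pigtail's impedance, |Z| = sqrt(R^2 + (2*pi*f*L)^2).
    The 50 nH and 5 mohm defaults are illustrative assumptions."""
    x_l = 2 * math.pi * freq_hz * inductance_h
    return math.hypot(resistance_ohm, x_l)

for f in (1e3, 1e5, 1e7, 1e9):
    print(f"{f:10.0f} Hz : {pigtail_impedance_ohm(f):9.3f} ohm")
```

With these values the inductive term overtakes the resistance at roughly 10^5 Hz and grows linearly thereafter, which is why a pigtail that is harmless at audio frequencies can dominate the transfer impedance at RF.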
12.4.4.5 Rapid fixes for cables with inadequate shields
A number of commercial products are available for patching up poorly shielded or unshielded cables and cable assemblies. For instance, shielding jackets are made that can be zipped up over individual cables or cable bundles. These comprise either aluminum foil, tinned-steel mesh, or Monel as the shielding material, along with a PVC protective layer and a zipper. Shielding mesh bands are also available for this purpose. Field attenuation levels of 20 dB or possibly more can be achieved with such items. The use of these is discussed in Ref. [47]. Ordinary aluminum foil (about 25 µm thick) can be useful as an improvised cable-shielding material.

Fig. 12.13 Disassembled ferrule-clamp BNC connector, showing the desired 360° contact between the cable braid and the connector shield (see Ref. [45]).
12.4.4.6 Use of twisted-wire pairs in the presence of low-frequency interfering fields
If interference from low-frequency (below about 100 kHz) electromagnetic fields is an issue, ordinary shielded cable may not be adequate. This is because copper-braid or aluminum-foil conductive shields are unable to screen magnetic fields very well at these frequencies, so that magnetic pickup can become a problem. (Electric fields are screened by conductive shields, even at the lowest frequencies.) A good solution is to use cables containing twisted-wire pairs. The voltage induced in a given loop of the twisted pair is canceled by that in a neighboring one (assuming that the magnetic field is uniform), and hence the magnetic pickup can be made very small. Another advantage of using twisted pairs is that the capacitively coupled pickup (from interfering electric fields) in each wire is roughly equal, and so if the cable is used in a balanced circuit (see page 381), the effect of such fields can also be made very small. (Capacitive pickup is a particularly serious problem in high-impedance circuits – see Section 11.4.1.) Noise rejection due to balancing is never perfect, however. This problem becomes more serious as the frequency rises (and is especially so at radio frequencies), owing to unavoidable capacitive imbalances in the circuit. Hence, twisted-pair cables usually incorporate braid and/or foil shields.
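The scale of the magnetic-pickup cancellation can be estimated from the peak EMF induced in a single loop by a uniform sinusoidal field, V = 2πf·B·A. The comparison below uses numbers of my own choosing (an untwisted 1 m pair with 5 mm spacing, versus a twisted pair assumed to leave 1% of that loop area uncancelled); it is an illustration, not data from the text:

```python
import math

def loop_emf_volts(freq_hz: float, b_tesla: float, area_m2: float) -> float:
    """Peak EMF induced in a loop by a uniform sinusoidal field:
    V = 2*pi*f * B * A."""
    return 2 * math.pi * freq_hz * b_tesla * area_m2

# 50 Hz mains field of 1 uT (assumed); 1 m pair with 5 mm wire spacing
# encloses ~5e-3 m^2 if left untwisted.
f, b, area = 50.0, 1e-6, 5e-3
untwisted = loop_emf_volts(f, b, area)
twisted = loop_emf_volts(f, b, area * 0.01)  # 1% residual area (assumed)
print(f"untwisted: {untwisted * 1e9:.0f} nV, twisted: {twisted * 1e9:.1f} nV")
```

Even this crude model shows why twisting helps at low frequencies, and why the residual (set by twist uniformity and end-loop areas) is what ultimately limits the rejection.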
The question sometimes arises as to how many twists per unit length should be used. If the end connections of the cable are included in the interfering magnetic field, this parameter is unimportant [65]. In this case, efforts should be devoted to reducing the area of the open loops at the ends of the cable (e.g. between connector pins). If, on the other hand, the field is concentrated in a particular region on the cable, the pickup will decrease as the number of twists per unit length rises. The twisting should be uniform along the cable. Overtwisting can lead to insulation or conductor damage. This is sometimes a problem with the enameled single-strand wires used in low-temperature apparatus.
12.4.4.7 Interference and inter-conductor crosstalk in multi-conductor cables
Unused twisted pairs in a multi-pair cable should not be allowed to float. Radio-frequency interference can be considerably reduced by shorting together the wires in each unused pair and grounding them at one end of the cable [84]. Capacitively coupled interference can also be reduced by this method. Generally, the voltages and currents on different conductors within a multi-conductor cable should be the same to within roughly ±10 dB in order to minimize crosstalk [65]. In those situations where it seems likely that crosstalk between conductors in a cable will be a problem, the best approach is to avoid it by placing the potential culprit and victim conductors in completely separate cables. However, this is not always practical. The use of twisted pairs in balanced circuits is usually very effective at reducing crosstalk. Nevertheless, problems can still arise. Sometimes the circuit balance is not adequate. Crosstalk is often caused by the connectors, where the cable wiring opens up and coupling takes place between adjacent contacts. Steps can be taken to remedy this – see pages 449–450. If the circuits are unbalanced, a good approach for preventing crosstalk is to use cables in which the conductors themselves are miniature coaxial cables. (This requires the use of special connectors.) Methods of identifying and reducing magnetic and capacitive crosstalk between wires in cables are presented in Ref. [47]. Crosstalk issues are also discussed on pages 382 and 464.
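The ±10 dB guideline above compares two conductor voltage levels as 20·log10(V1/V2). A small helper (function names mine) for checking whether two signals sharing a cable fall within that guideline:

```python
import math

def level_difference_db(v1: float, v2: float) -> float:
    """Level difference between two conductor voltages, in dB."""
    return 20 * math.log10(v1 / v2)

def crosstalk_compatible(v1: float, v2: float, limit_db: float = 10.0) -> bool:
    """True if the two levels are within the ~+/-10 dB guideline of [65]."""
    return abs(level_difference_db(v1, v2)) <= limit_db

print(crosstalk_compatible(1.0, 0.5))   # ~6 dB apart -> True
print(crosstalk_compatible(1.0, 0.01))  # 40 dB apart -> False
```

A 1 V signal and a 10 mV signal are 40 dB apart, so by this guideline they should not share a multi-conductor cable.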
12.4.5 Some comments concerning GP-IB and ribbon cables

12.4.5.1 GP-IB cable assemblies
Although the GP-IB (or IEEE-488) interface bus is, in many respects, a good design (see Section 13.4.5), the cable hardware can be troublesome. GP-IB employs a parallel (as opposed to serial) data-transmission scheme. As a result, the cables are relatively thick, stiff, and difficult to handle. During installation, it is easy to take them below their minimum bend radius. GP-IB connectors are also very bulky compared with most other connectors. Hence, attaching cables to instruments in the (often cramped) areas behind equipment setups frequently leads to overbending. Bad GP-IB cables are not unusual [49]. Unfortunately, connector bend-reliefs ("strain reliefs") are not common, and "low cost" cable assemblies may not even have an adequate cable clamp. Bent or broken connector contacts, and stripped jackscrews, are also relatively common problems with GP-IB hardware. Because of these issues, and their high cost, GP-IB cable assemblies should be treated with extra care.

The normal GP-IB design has the cables exit their connectors from the side (rather than the back, as is usual for most non-GP-IB connectors), in order to permit the connectors to be stacked. This can cause difficulties as a result of cables getting in the way of other cables (or other items) emerging from an instrument – thereby contributing to overbending problems. Special GP-IB cable assemblies are available in which the cable emerges from its connector in some other direction. The use of these can help to ease problems due to overcrowding of cables and other items near an instrument. GP-IB cable assemblies are expensive, so there is often a temptation to get the cheapest ones available. This should be strongly resisted.
12.4.5.2 Ribbon cables
Ribbon cables (flat multi-conductor cables) are appealing mainly because it is very easy to join them to their connectors using an insulation-displacement method. Also, they can have lower inter-conductor crosstalk than cables with a circular cross-section, and their crosstalk behavior is more consistent from one cable to another [45]. However, ribbon cables are easily damaged (crushed and kinked), in comparison with round types [49]. The connectors that are normally used with ribbon cables (IDCs) are prone to intermittent failures. (This tends to occur if they become warm, because the female connectors enlarge due to thermal expansion.) Furthermore, their connector locking and strain-relief features are poor. Although shielded ribbon cables are available, it is difficult to terminate the shields with an adequate 360° connection [45]. Defective ribbon cables are a frequent cause of PC failures (see page 490). For these reasons, it is generally best to avoid using ribbon cables, if possible.16 (Cryogenic ribbon cables are an exception to this – see page 468.) They should especially not be used outside instrument enclosures.
16 NB: This does not mean replacing existing ribbon cables in a commercial device with round ones, because crosstalk problems may ensue.

12.4.6 Additional points on the use of cables

12.4.6.1 Installation
Cables should generally be laid (not pulled) into position, if possible. Some commercial instrument racks are purposely designed to make this easy to do. If a cable is to be secured to an object with a cable tie, it is desirable to use the widest available tie and the smallest amount of tension that will hold the cable firmly without deforming it. This is particularly important in the case of high-frequency coaxial cables and high-voltage ones. In these cases, it is actually preferable to avoid cable ties, and to employ instead toothed rubber-lined P-clamps, or similar items.

Cables should be routed so as to make them easy to trace, remove, and replace. They should not be located in areas where they are vulnerable to common abuses, such as being used as footrests or handholds. The routing should also keep cables away from heat sources, and from places where they could get pinched or snagged (e.g. if they are attached to equipment that must be moved during use17). In order to avoid electromagnetic interference, low-level signal or data cables should not be placed alongside mains-power conductors, and loose cables of this type should never be supported on conduit containing power conductors. Such cables should also not be routed next to telephone cables, where they may suffer interference from the ring voltages (e.g. 90 V RMS at 20 Hz) carried by the latter. It is desirable to ensure that low-level signal or data cables are well separated from fluorescent lighting fixtures, which can be significant sources of interference at twice the mains frequency and its harmonics – well into the radio-frequency range. Other items, such as power transformers, may also be prone to producing interference in cables – see Chapter 11. Sensitive cables should not be routed immediately beneath drop ceilings, unless it is known what is in the space between the drop ceiling and the true ceiling. Such spaces often contain electromagnetically noisy equipment (e.g. power transformers and motors) and cables. Methods of laying out cables in order to minimize crosstalk between them are discussed in Section 11.2.4.

Cable trays ("cable raceways") may be useful for protecting long cables, as well as for grouping and organizing large numbers of them. A range of other accessories for protecting cables, such as flexible conduit and cable wrap, is also available commercially. The advantage of cable trays over tubular conduit is that the former leave cables fully accessible. Metallic cable raceways (especially galvanized-steel types) can also screen poorly shielded or unshielded cables from crosstalk and ambient electromagnetic fields.
In order to do this effectively, the raceways must be correctly assembled, electrically bonded together, and grounded to equipment racks and enclosures (see Ref. [47]). Coupling reductions of up to about 38 dB can be achieved on an uncovered raceway, depending on the frequency. Metal covers do not significantly improve screening – only by about 6 dB. If a flexible section is needed, cables can be strapped to a companion braid (a wide tinned-copper braid). In general, and if possible, cables that produce electromagnetic interference, or are susceptible to it, should be routed next to a grounded sheet metal surface of some kind. Unshielded high-voltage cables should not be allowed to touch grounded surfaces, since a corona discharge could take place in the gap between the cable conductors and the surfaces [74]. Cables should not be stretched tightly between fixed points, if relative movement of these (due to the shifting of instrument racks, for instance) could take place. Lengthening a cable (e.g. by about 20% of its stretched length), so that there is some slack, can be very beneficial in preventing fatigue failure of the cable or its terminations [88]. If the cable terminations are fixed, on the other hand, the cable length should be reduced, if possible. This has the beneficial effect of raising the natural frequency of the cable span (thereby minimizing potential vibration-induced fatigue problems), and reducing the weight and inertial load that must be supported by its terminations. NB: Connectors are generally not designed to bear the weight of a long cable.
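As a rough illustration of why shortening a fixed span raises its natural frequency, a taut cable can be idealized as a vibrating string, with fundamental f1 = (1/2L)·sqrt(T/µ). This is a textbook idealization with assumed numbers, not a design formula from the text:

```python
import math

def span_fundamental_hz(length_m: float,
                        tension_n: float,
                        mass_per_m_kg: float) -> float:
    """Fundamental frequency of a taut cable span, idealized as a
    string: f1 = (1/(2*L)) * sqrt(T/mu)."""
    return math.sqrt(tension_n / mass_per_m_kg) / (2 * length_m)

# Halving an (assumed) 2 m span at 5 N tension, for a 50 g/m cable:
print(span_fundamental_hz(2.0, 5.0, 0.05))  # -> 2.5 Hz
print(span_fundamental_hz(1.0, 5.0, 0.05))  # -> 5.0 Hz
```

Halving the span doubles the fundamental, moving it further from typical low-frequency building-vibration sources and reducing the risk of vibration-induced fatigue at the terminations.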
17 The cryogenic inserts used in low-temperature research are a good example.
It is seldom a good idea to route cables across a floor. A better technique for carrying cables across a room is to place them in a cable tray which is suspended from the ceiling. If cables must be put on the floor, they can be guarded (at least from mechanical damage) by being placed inside “cable protectors”, which are made for this purpose.
12.4.6.2 Cable identification
Except for the simplest and most temporary interconnection arrangements, some method should be used to provide cables with reliable markings that will last their lifetime. (Pen on masking tape is not sufficient.) A large variety of purpose-made cable markers and marking systems are commercially available. Markings should indicate the type of shielding used (e.g. "single-braid," "double-braid," or "foil"), the presence of shield pigtails, and the shield grounding arrangement (such as "grounded at one end" or "grounded at both ends").
12.4.6.3 Storage Cable and wire should always be stored in stable indoor environments – protected from large temperature and humidity fluctuations.
12.4.6.4 Cable inspection and replacement
The visual inspection of connectors, which are generally the most vulnerable parts of cable assemblies, is discussed on page 450. With regard to the cables themselves, damage is most likely to occur near where the cable enters the connector. In the case of shielded cables in particular, one should look out for damage to the shield in this location, or disengagement of the shield from the connector. Other forms of cable damage include abrasion, cuts, burns, pinch marks, twisting, and flattening of the cable. Wrinkling of the cable jacket is evidence that the cable has been bent below its minimum allowable bend radius. Whether damage will severely compromise the performance of a cable depends on how it is used. For instance, slight pinching or flattening, which might not be severe enough to cause problems in undemanding applications, can ruin a coaxial cable used at radio (and especially microwave) frequencies, or in high-voltage service. It is possible for a coaxial cable to be severely damaged on the inside without showing any evidence of this externally. Generally, electrically faulty cables, or ones with visible damage, should be replaced. One should not normally attempt to repair them. It is usually a good idea to destroy a damaged or faulty cable assembly by cutting the ends off, in order to prevent it from re-entering service (which can cause major problems if intermittent faults are present). Cable assemblies that are often moved (especially patch cords) should be replaced frequently. The inevitable damage and degradation that occurs during use means that their lifespan is generally short. In reality, such cables are essentially consumable items (like batteries) and should be removed from service without hesitation.
12.4.7 Wire issues – including cryostat wiring 12.4.7.1 Advantages of stranded versus solid hookup wires18 Stranded wires are generally more flexible than solid ones, and hence are usually easier to work with. As mentioned on page 455, they are also more reliable under conditions of repeated bending. Furthermore, if a solid wire is nicked or scratched (e.g. during stripping of the insulation), this may propagate as a crack across the entire cross-section of the conductor, under conditions of continuous or cyclic stress. In the case of stranded wire, however, only a few of the strands will generally be affected by such damage [89], and cracks cannot propagate. (Although it is nevertheless desirable to avoid harming the strands.) For these reasons, stranded wire is preferred in most applications, including cable construction, and for use as hookup wire. Techniques for stripping insulation from the latter are discussed in Ref. [12]. Exceptions to this are the winding of solenoids, magnetic pickup coils and the like, and the wiring of cryogenic instruments. For such purposes, single strand wires with thin varnish-like insulations are employed (see below).
12.4.7.2 Enameled single-strand wire Introduction For the creation of solenoids and the wiring of cryogenic apparatus, single-strand wire coated with a varnish-like material (enamel) is used. In the case of copper conductors, this is often called “magnet wire.” Because of the thinness and fragility of the insulation, the frequently small diameter of such wires, and their single-strand construction, they tend to be very delicate compared with hookup wire. (Furthermore, except in cryogenic apparatus, enameled single-strand wire should never be used as hookup wire.) Enameled copper wires with diameters of less than about 80 µm (AWG No. 40) need special handling and are less reliable than larger types [91]. Low-quality magnet wire smaller than about 160 µm diameter (AWG No. 34), and particularly that approaching 50 µm (AWG No. 44), can break very easily when it is used to wind coils, because it is often too brittle. A key way of avoiding this is to use good quality wire of the highest possible softness [92]. This usually means (somewhat counter-intuitively) that the wire must have the lowest possible yield strength.
Selection and removal of enamel19
Enameled-wire insulations should be chosen carefully, since mechanical, chemical, and thermal damage to them is very common. Minor openings in the insulation, such as chips or cracks, may lead to the entry of contaminants and localized corrosion of the underlying metal. If the wire is under stress, this can cause stress-corrosion cracking and eventual parting of the wire [93]. Corrosion effects are particularly relevant for small-diameter wires. Thermal failure of insulation, caused by soldering or Joule heating, is a frequent problem. The temperature rating of the insulation should be considered when selecting enameled wire.

Generally, it is not a good idea to use wires with unknown insulation in a critical application. A spool of wire should be marked with the type of insulation and its build (single or double thickness), along with the conductor material and the diameter of the wire. The date of manufacture should also be recorded on the spool, since insulation degrades with age. (One sometimes finds spools of wire in laboratory storage areas that have been around for decades, with very unreliable enamels.)

One insulation, called "Formvar" (polyvinyl formal), is an old type that is still in common use. It has the virtue of being fairly resistant to mechanical abrasion, but is susceptible to chemical attack. In particular, it is vulnerable to dissolution by one of the constituents of a varnish that is widely employed in low-temperature research (IMI 7031™).20 (NB: Generally it is good practice to dilute such varnish only with ethyl alcohol if it is to be used on enameled wire of any type.) Moreover, Formvar is not resistant to elevated temperatures, and in particular those associated with soldering [5]. The use of Formvar-coated wire should be avoided, if possible.

Polyester modified by cyanuration (THEIC-modified polyester) is a good insulation. Improved performance can be obtained by using a polyester-imide composition, and even better characteristics are found when an outer polyamide-imide coating is applied to a THEIC-modified polyester base layer. Alternatively, one can use a "modified" polyamide-imide overcoat with the aforementioned base layer.

18 Hookup wire is used to make connections inside electronic devices, and usually consists of small-to-medium-gauge copper wire (possibly tin-plated) with thick plastic-sleeve insulation.
19 Portions of this section were previously published in Ref. [94].
The latter two insulations are very tough and abrasion-resistant types that are also quite resistant to heat and chemical attack. In particular, they are significantly more resistant to dissolution by solvents than Formvar. Polyimide is very resistant to attack by chemicals and heat, but has a rather low abrasion resistance. Teflon is also highly resistant to attack by chemicals and heat, but likewise has a low abrasion resistance, and may have cold-flow problems.

Some kinds of insulation, in particular those consisting of polyamide, polyurethane, acrylic, and some polyesters, completely depolymerize and evaporate at sufficiently high temperatures (approximately 350–450 °C). Such insulations are said to be "solderable," since wires coated with them may be soldered directly, without the need to remove the insulation beforehand. They are referred to as, for example, "solderable polyester" or "solderable polyurethane-nylon". One such insulation in particular, plain solderable polyurethane, is found very frequently on magnet wire. Although wires with such insulations are very convenient to use, they can be unreliable under many conditions, especially in situations where they may be accidentally heated by large currents [94].

The robustness of enameled wires can be improved by using ones with a heavy coating (double coating) of insulation. If such wires are used to wind coils, they do have the
20 Formerly called GE 7031™ varnish, and still commonly known as "GE varnish."
Interconnecting, wiring, and cabling for electronics
disadvantage of slightly reducing the conductor density, leading to the need for larger magnetic devices.

Insulations other than solderable types can be difficult to remove without damaging the wire (especially if the diameter is small), unless special methods are employed. The common practice of using razor blades or scalpels to scrape off insulation poses a high risk of nicking, and hence greatly weakening, the conductor (particularly in the case of copper). Another problem with this method is that it may be difficult to remove the insulation completely. Residual insulation will reduce the quality of the connection made to the wire. A discussion of various enamel-removal techniques is presented in Ref. [94].
Wiring for cryogenic systems

The enameled single-strand wires of very small diameter that must be used in cryogenic systems are notorious for having reliability problems. The low tensile strength of such fine wires is an obvious vulnerability. Also, enamel layers tend to be very thin, and hence susceptible to mechanical damage due to puncture or abrasion. For example, the thickness of the insulation on a 100 µm diameter wire (frequently used for cryostat wiring) may be only about 5 µm on the radius. Since this is comparable with the surface roughness of many common lathe-turned metal objects, the heat sinking of such wires to low-temperature stages in the cryogenic apparatus is likely to cause problems, unless an insulating protective layer can be placed between them. Good materials for this purpose are lens-cleaning tissue and cigarette paper, which are fixed to the metal surfaces with varnish or epoxy. Lens-cleaning tissue has the advantage of being porous and readily absorbing the adhesive [5].

Rather than routing individual wires (or twisted pairs) through a cryostat, it is usually much better to bundle them together into ribbon-like groups. The earlier remarks about ribbon cables notwithstanding, such "cryogenic ribbon cables" are strong (compared with individual wires), easy to handle, and can be heat-sunk without much difficulty. Furthermore, the flat construction makes it possible to reduce any crosstalk between wires in the group by alternating active wires with grounded ones [45].

An excellent way of creating cryogenic ribbon cables involves weaving the wires together with insulating fibers (such as Nomex®) into a kind of composite fabric. Nomex is a strong material, which protects the wires and their insulation (to a certain extent) from abrasion, while giving considerable strength to the cable as a whole. It is also possible to incorporate miniature coaxial cable into the weave.
If such cables are attached to connectors, protection for the soldered terminations can be provided by potting the end of the cable into the connector with epoxy. (NB: Insulation-displacement connectors are not used with cryogenic ribbon cables.) This method of making cryogenic cables is discussed in Ref. [95]. Such cables (also called "woven cables" or "cryogenic woven looms") are available commercially (see Fig. 12.14).

For the purpose of making it easy to replace faulty cryogenic cables, or to make repairs or modifications to structural sections of a cryostat, it is a good strategy to divide such cables up into modular units, in the form of cable assemblies. Although it may seem at first sight that the placement of connectors in a cryogenic environment would not be a good idea,
Fig. 12.14 Cryogenic woven loom.
some connectors behave reasonably well under such conditions. Suitable types are listed in Ref. [5]. NB: Integrated-circuit (IC) sockets do not make good cable connectors [49].

It is generally a good idea to use redundant defensive measures to protect cryostat wiring from mechanical damage. Thus, one can employ in combination: wires with robust insulations, cryogenic ribbon cable arrangements, and varnish-soaked paper layers. In addition, it is also desirable to round off sharp corners, remove burrs, and smooth rough surfaces on any metal objects in the cryostat over which the wires must pass. Note that PTFE protective sleeving is a very useful material for safeguarding cryostat wiring.

Certain cryostat wiring practices frequently lead to wire damage. For instance, individual wires are sometimes tightly stretched from point to point within a cryostat. Alternatively, wires may be continuously affixed to some non-metallic cryostat member over a long distance by using varnish. These steps are often taken for good reasons, such as preventing microphonic noise due to movement of the wires in a magnetic field. Nevertheless, stresses produced by differential thermal contraction and expansion, or other sources of relative movement of the structural parts of the cryostat, often cause wires to break. It is preferable to leave a certain amount of slack in the wiring, and to attach wires to support structures only at isolated spots, using small amounts of varnish.

Whenever wires are placed in a vacuum environment (as is often done in the case of cryostats), one should be wary of the possibility of accidentally passing excessive current through them. As discussed earlier, convection cooling (which is the primary way by which heat ultimately escapes from wires under normal conditions) is absent in a vacuum. The result of overheating can be difficult-to-trace insulation damage (especially in the case of solderable insulations) that may lead to intermittent faults.
The use of current-limiting resistors, or perhaps electronic circuit breakers, may be helpful in situations where overcurrents are possible.
It is an especially good practice with cryogenic wiring to install more wires than are initially thought to be necessary, in order to provide spares or additional capacity in the event of breakages, short circuits, or later modifications to the apparatus. Once twisted pairs or cables have been assembled, and again after they have been installed in a cryostat, the integrity of the insulation (wire-to-wire, and wire-to-ground) should be checked with an insulation tester (see page 474).

Because of their extreme fragility, their inaccessibility during operation, and the low-level signals they often carry, cryostat wires and cables must generally be made and installed to very high standards of workmanship. These tasks require considerable skill, care, and patience. Beginners should learn the art of cryostat wiring from someone with experience, and build up their skills by working on non-critical equipment.

It is not uncommon for manufacturers of cryogenic equipment to use poor wiring practices when constructing their cryostats. If there is any uncertainty on this point, the correct methods should be clearly indicated in the specifications when such items are ordered. In some cases, it may be necessary for the user to do the wiring. Much useful information on the wiring of cryogenic apparatus is provided in Ref. [5].
Care and handling of enameled single-strand wire

It is hard to overemphasize the importance of careful handling and use of fine (and especially "ultrafine": <46 µm diameter) enameled wire. For example, spools of such wire should be stored indoors, in a clean and dry environment, and out of direct sunlight. Generally, these wires should not be touched directly by the hands, because acidic secretions from the skin can cause deterioration.

Particular care is needed to avoid mechanical damage. For example, very minor impacts against spools of such wire (even ones that normal multi-strand wire could readily withstand) can easily cause destruction of the spool. Large spools should be grasped by the pickup holes, and not by the flanges, since slight movement of the flanges can cause damage [96]. When such wire is ordered in quantity, it is a good idea to divide the wire up amongst several small spools (rather than using one large spool), in order to reduce the risk of total loss in the event of an accident.

Enameled wire should be worked on in clean areas. Precautions should be taken to ensure that metal particles do not get on wires during coil-winding operations [97]. This can be a problem near mechanical workshops. The swarf produced in these areas may be transferred to the wire, where it can puncture insulation and cause short circuits. It is possible for swarf to be carried on a worker's clothing into areas where coils are being wound. Detailed information on the proper handling and use of magnet wire can be found in Ref. [96].
Soldering small enameled wires

Active fluxes (such as RA and RMA) in electronic-grade solders can attack fine and ultrafine enameled wire – completely dissolving the insulation and (in the case of magnet wire) the copper over time, even at room temperature [98]. Even comparatively mild "no-clean" fluxes will weaken insulations, and possibly destroy them after they have been activated
by the heat of soldering. Nevertheless, one should generally use solders with no-clean fluxes (i.e. halide-free, with zero activation) when soldering such wire, and make sure that the flux is confined to the soldered joints. Pure rosin-core (R) solder has also been recommended [5]. Soldering ultrafine copper wires using tin–lead solders can lead to thinning and weakening of the copper – see pages 419–420.
12.5 Diagnostics

12.5.1 Introduction

As with other aspects of reliability in general, the dependability of contacts, connectors, cables, and wiring is determined by their design and workmanship. It is not possible to improve their reliability by electrical testing. It must also be kept in mind that visual inspection is at least as important as electrical testing in determining the soundness of these items. This is especially true of solder joints, but is also relevant for the other things discussed in this chapter.

Crimp joints are often subjected to a mechanical pull test, in order to ensure that the wires are securely held by the crimp terminal. In some situations (as discussed on page 427) it is also desirable to subject solder joints and similar items to such a test.
12.5.2 Detection of contact problems

12.5.2.1 Resistance

The most basic electrical method of detecting bad contacts is to find out whether their resistances are excessive. Actual connector contact resistances are usually around 1 mΩ or less, although the measured value may be larger than this, owing to the resistance of the associated bulk conductor materials [99]. The contact resistance of crimp joints is typically 0.05–0.15 mΩ [27]. Good solder joints also have a very low resistance, which depends on the bulk resistivity and dimensions of the solder making up the connection. Ordinary eutectic or near-eutectic tin/lead solders have resistivities of 15 µΩ·cm.

If contact resistance measurements are made using a two-terminal method (with a normal ohmmeter, for example), measured values are normally much less than an ohm, and generally smaller than 0.2 Ω. If a resistance is significantly larger than this, the joint is probably faulty.

A systematic resistance measurement strategy for testing cable assemblies has been provided [100]. One must know beforehand the resistance per unit length of the wire or cable conductor. An ordinary multimeter is usually sufficient for this test, which has the following criteria.

(a) Unless the length of the conductor is enough to yield a calculated resistance of more than 0.3 Ω, the wiring can be considered satisfactory only if the measured resistance is less than 0.5 Ω.
(b) If the calculated resistance is between 0.3 Ω and 0.9 Ω, a test tolerance of 0.2 Ω is added to the calculated value to determine the maximum allowable one.
(c) If the calculated resistance is greater than 0.9 Ω, a test tolerance of 1.0 Ω should be used.
(d) In the case of shields, one need only ensure that the measured shield resistances do not exceed 0.3 Ω·m⁻¹ (this test will not, of course, indicate whether the shield is capable of functioning adequately as an electromagnetic shield).
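As an illustration, criteria (a)–(c) above can be captured in a few lines of code. The sketch below is not part of Ref. [100]; the function names and the example wire resistance are invented for illustration, but the thresholds are those quoted above.

```python
def max_allowed_resistance(calculated_ohms):
    """Maximum acceptable measured resistance for a cable conductor, given
    its calculated resistance (length x resistance per unit length)."""
    if calculated_ohms <= 0.3:
        return 0.5                    # (a) short runs: flat 0.5 ohm limit
    elif calculated_ohms <= 0.9:
        return calculated_ohms + 0.2  # (b) add a 0.2 ohm test tolerance
    else:
        return calculated_ohms + 1.0  # (c) add a 1.0 ohm test tolerance

def conductor_passes(measured_ohms, length_m, ohms_per_m):
    """True if a measured conductor resistance is within the allowed limit."""
    calculated = length_m * ohms_per_m
    return measured_ohms <= max_allowed_resistance(calculated)

# Example: 10 m of wire at 0.05 ohm/m -> calculated 0.5 ohm, limit 0.7 ohm
print(conductor_passes(0.65, 10.0, 0.05))  # True
print(conductor_passes(0.75, 10.0, 0.05))  # False
```

This makes the logic of the criteria easy to check against a table of measured values.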
12.5.2.2 Non-ohmic contacts

The linearity of a contact can be easily checked by changing the measurement range of the multimeter or micro-ohmmeter that is being used to measure its resistance [7]. Changing the range normally causes the excitation current to change. Hence, if the readings on two settings are significantly different (taking into account the altered resolution of the instrument on the two settings), it is possible that the contact is non-ohmic.

Another, and potentially more sensitive, way of detecting contact nonlinearities is to measure the harmonic distortion of a sinusoidal excitation signal applied across the contact. The presence of a third harmonic provides a particularly useful indication of the existence of non-ohmic contacts.21 Third harmonic distortion measurements can be made either with a component linearity tester [101] or a lock-in amplifier. It is possible to measure harmonic distortion due to contact nonlinearities at levels of 130 dB below the fundamental.

Issues concerning the disturbance of low-level measurements due to the combination of non-ohmic contacts and electromagnetic interference (audio rectification effects) are covered in Ref. [7]. Audio rectification is also discussed on page 372.
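The principle behind the third-harmonic test can be illustrated numerically. In the sketch below (all component values, signal levels, and the cubic coefficient are invented for illustration), a slightly non-ohmic contact with a small cubic term in its I–V characteristic is driven with a sinusoidal current, and the level of the third harmonic relative to the fundamental is extracted from the voltage spectrum:

```python
import numpy as np

def harmonic_level_db(v, fs, f0, n):
    """Level of the n-th harmonic of f0 relative to the fundamental, in dB."""
    spec = np.abs(np.fft.rfft(v))
    freqs = np.fft.rfftfreq(len(v), d=1.0/fs)
    fund = spec[np.argmin(np.abs(freqs - f0))]
    harm = spec[np.argmin(np.abs(freqs - n*f0))]
    return 20.0 * np.log10(harm / fund)

fs, f0 = 100e3, 1e3                # sample rate and excitation frequency (Hz)
t = np.arange(10000) / fs          # 0.1 s record = 100 full cycles of f0
i = 0.01 * np.sin(2*np.pi*f0*t)    # 10 mA sinusoidal excitation current

# Slightly non-ohmic 1-ohm contact: V = R*I + k*I^3 (k invented)
v_bad = 1.0*i + 50.0*i**3
v_good = 1.0*i                     # ideal ohmic contact for comparison

print(harmonic_level_db(v_bad, fs, f0, 3))   # about -58 dB: clearly detectable
print(harmonic_level_db(v_good, fs, f0, 3))  # deep in the numerical noise floor
```

Since sin³θ contains a sin 3θ term, the cubic nonlinearity puts energy at exactly three times the excitation frequency, which is where a lock-in amplifier would be set to look.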
12.5.3 High-resistance and open- and short-circuit intermittent faults

The sources of contact- or continuity-related intermittent problems are normally found by one of two general methods. The first of these involves mechanically disturbing the suspect parts, while the second consists of exposing them to temperature variations. Changes in the resistance of the part can be monitored by using a convenient excitation-detection arrangement (see below).

In the case of solder joints, one can press on the suspect connection, tap on the circuit board or the solder terminal with the handle of a screwdriver, or gently flex the circuit board while looking for a response. Cable assemblies are dealt with using a similar method. The connector that is attached to the cable is held firmly in its mating counterpart, while the cable itself is flexed in various directions, in order to reveal broken or disengaged wires, shields, or solder connections in or near the connector. (Most conductor problems will usually be found within about 30 cm of the connector.) The connector can also be wiggled relative to its mating
21 In principle, a process that involves electron tunneling across an insulating film generates no second harmonic. In practice, too, the third harmonic is (next to the fundamental) usually the strongest one created by poor contacts.
counterpart, in order to reveal problematic contacts. Also, the cable as a whole can be inspected by moving along it with a hand-over-hand motion and gently tugging on it at regular intervals.

Temperature changes are also responsible for intermittent problems. Solder joints can be checked for such faults by cooling them using commercially available "freezer aerosol sprays" or "freeze sprays," which are made for troubleshooting problems of this kind. This action can be alternated with warming using a hair dryer. In the case of solder joints that get cooled to cryogenic temperatures during normal use, a controlled spray of liquid nitrogen can be useful for testing. The cooling of solder joints using freeze sprays or liquid nitrogen can cause local condensation, which may result in leakage currents if such joints are part of a high-impedance circuit. Freeze sprays can also generate large electrostatic voltages, and so should not be used on circuits containing ESD-sensitive electronic components (see Table 11.1 on page 391).

Intermittent events due to defective contacts can have a very short duration – usually on the order of a millisecond. However, in the case of tin on tin, for example, the time scale for high-resistance events can vary from a few nanoseconds to several hundred milliseconds [102]. For cable conductors, a common criterion is to consider these as having failed if they exhibit open circuits of 1 µs duration or longer [103].

The results of applying mechanical or thermal stresses to an item under test can be monitored by providing the item with a suitable excitation current and sensing any disturbances on an oscilloscope. Alternatively, an audio-frequency oscillator can be used to inject a signal into the item, while the response is monitored with headphones. The use of simple electronic circuits that are designed for detecting short-duration intermittents in cables is another possible approach.
These are very easy to use, and can be helpful if large numbers of cables must be evaluated. Inexpensive cable testing devices ("cable testers" or "lead testers") are available commercially, some of which can detect intermittent faults. In selecting such a device, keep in mind that intermittent problems often take the form of excessive contact-resistance conditions (perhaps only a few hundred milliohms, or even less) – not just complete open circuits. Short circuits are also a possibility. More sophisticated cable testing devices (sometimes called "cable test systems") are available for detecting intermittent faults in multi-conductor cables.

The swapping of doubtful cable assemblies with known good ones in an experimental setup can be an effective method of troubleshooting these components. Particularly elusive intermittent problems are often best remedied by repairing or replacing suspect items (e.g. re-melting solder joints or replacing cable assemblies), even in the absence of firm evidence of failure. (See also the discussion in Section 1.4.3.)
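To make the timing criterion concrete, the sketch below scans a digitized contact-resistance record for high-resistance excursions lasting at least a specified time, in the spirit of the 1 µs failure criterion mentioned above. The sample rate, threshold, and data are all hypothetical; a real measurement would come from a digitizing oscilloscope or similar instrument.

```python
import numpy as np

def find_intermittents(r_trace, fs, r_limit, min_duration):
    """Scan a sampled resistance record (ohms) for high-resistance events
    lasting at least min_duration seconds. fs is the sample rate (samples/s).
    Returns a list of (start_time_s, length_s) pairs."""
    bad = r_trace > r_limit
    events = []
    start = None
    for k, flag in enumerate(bad):
        if flag and start is None:
            start = k
        elif not flag and start is not None:
            if (k - start) / fs >= min_duration:
                events.append((start / fs, (k - start) / fs))
            start = None
    if start is not None and (len(bad) - start) / fs >= min_duration:
        events.append((start / fs, (len(bad) - start) / fs))
    return events

# Example: 10 MS/s record with a 2 us open-circuit glitch starting at t = 5 us
fs = 10e6
r = np.full(100, 0.05)   # nominal 50 milliohm contact
r[50:70] = 1e6           # 20 samples (2 us) of open circuit
print(find_intermittents(r, fs, 1.0, 1e-6))  # one event: t = 5 us, 2 us long
```

Note that events shorter than `min_duration` are deliberately ignored, matching the idea of a duration-based failure criterion.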
12.5.4 Use of infrared thermometers on high-current contacts

Faulty high-current contacts reveal themselves by becoming hot. Touching the conductors directly with the fingertips in order to ascertain their temperature is generally not a good idea. For instance, high-current copper bus-bars frequently reach 100 °C or more, even under normal operation [28]. A good way of detecting faulty high-current contacts is to use an "infrared thermometer," which measures temperatures by sensing their thermal
radiation. If a given contact is more than about 3 K hotter than similar ones working under the same conditions, it is probably defective [104].
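This comparison rule can be expressed in a few lines. The sketch below (the contact names and temperatures are invented) flags any contact running more than 3 K hotter than the mid-range value of its nominally identical neighbours:

```python
def suspect_contacts(temps_c, margin_k=3.0):
    """Flag contacts running hotter than similar ones by more than margin_k,
    per the rule of thumb quoted above. temps_c: {name: temperature in deg C}
    for nominally identical contacts under the same load (hypothetical data)."""
    baseline = sorted(temps_c.values())[len(temps_c) // 2]  # median-ish reference
    return [name for name, t in temps_c.items() if t - baseline > margin_k]

readings = {"busbar-1": 71.0, "busbar-2": 70.5, "busbar-3": 79.0, "busbar-4": 70.8}
print(suspect_contacts(readings))  # ['busbar-3']
```

Using a median-like baseline, rather than the mean, keeps one badly overheating contact from masking itself by dragging the reference value upward.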
12.5.5 Insulation testing

Degradation of wire or cable insulation is normally checked with an insulation tester, which is also referred to as a "high-voltage megohmmeter," and frequently as a "Megger," after one of the firms that makes this type of device. These insulation testers work by applying a relatively large d.c. voltage – usually in the range from 100 V to 1000 V – to the item under test, measuring the leakage current, and displaying the equivalent resistance. This is normally in the megaohm to gigaohm range. High-voltage megohmmeters are frequently used to test a.c. mains wiring, and hence are very common devices.

The voltage used depends on the type of wire being tested and its voltage rating. Low-voltage cables, such as those carrying low-level analog signals, are often tested at 100–200 V. High-voltage cables are tested at the higher end of the scale. It is possible to obtain insulation testers that are capable of producing up to 10 000 V.

The reason for using relatively high voltages (100–200 V) on cables that are used to carry small signals is to reveal latent cable problems that might not routinely manifest themselves at lower voltages. For example, a stray strand of wire in a connector may be in contact with another wire in the circuit, but separated from it electrically by a thin layer of oxide or corrosion. At low voltages, such a condition might result in a short circuit only intermittently, under certain circumstances. Hence, it would be hard to detect using an ordinary ohmmeter. However, the high voltage produced by the insulation tester may allow the oxide layer to be punctured, and the presence of the fault established, immediately.

For low-voltage wires and cables, one would expect the resistance of insulation in good condition to be 1 MΩ or more. Despite the high voltages used by insulation testers, the currents are limited to very small values.
The dielectric breakdown of insulation in good condition under high-voltage stress is not normally a problem. Even fine enameled wires often have surprisingly high insulation breakdown voltages (check with the manufacturer). Of course, electronic devices are susceptible to damage from the voltages involved, and one must ensure that the wires or cables are not attached to any such items during a test.
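The quantity displayed by such a tester is simply the ratio of the applied test voltage to the measured leakage current. As a minimal sketch (the voltages and leakage currents below are invented for illustration):

```python
def insulation_resistance_megohm(test_voltage_v, leakage_current_a):
    """Equivalent insulation resistance, R = V / I_leak, in megaohms."""
    return test_voltage_v / leakage_current_a / 1e6

# 250 V test with 50 nA of leakage: about 5000 megaohms -- healthy insulation
print(insulation_resistance_megohm(250.0, 50e-9))

# 250 V with 500 uA of leakage: 0.5 megaohm, below the ~1 megaohm guideline above
print(insulation_resistance_megohm(250.0, 500e-6))
```

The leakage currents involved are tiny: even a marginal 1 MΩ insulation path at 250 V draws only 250 µA, which is why these instruments can use high voltages safely on cables.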
12.5.6 Fault detection and location in cables

The "time-domain reflectometer," also referred to as a "TDR," "cable fault locator," or "cable radar," detects impedance changes in cables by transmitting pulses along them, and timing and recording any reflections. Not only are such changes sensed, but their position along the cable is also determined. This allows faults, such as open- or short-circuits, or even "minor" damage or problems such as indentations, corrosion, or loose connectors, to be located. This capability is particularly useful in the case of long cables, and especially in situations where sections of the cable are not easily accessible. TDRs can be used with either coaxial or twisted-pair cables.

While in principle a TDR can be made up from a pulse generator and an oscilloscope, in practice it is usually much better to use one of the commercially available dedicated
instruments, since the latter have features that make them much easier to use. (The interpretation of information provided by homemade TDRs can be difficult.) A wide variety of commercial TDRs are available. Versions that send signals into the cable in the form of voltage steps are generally preferable (but more expensive) to those that use pulses. Some TDRs are able to detect intermittent faults. Special kinds also exist for testing high-voltage cables.

Testing devices that are made for checking coaxial and twisted-pair cables in digital local area networks (LANs) often contain TDRs. These instruments, called "cable scanners," are capable of performing a range of tests. For instance, in the case of the more sophisticated instruments, the detection of crosstalk between cables is possible. Along with many other types of electronic test equipment, TDRs can be rented.
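The distance calculation that a TDR performs internally is straightforward: a reflection returning after a round-trip time t lies at d = (vf × c × t)/2 along the cable, where vf is the cable's velocity factor. The sketch below uses illustrative values (check the cable datasheet for the actual velocity factor):

```python
def tdr_fault_distance_m(round_trip_delay_s, velocity_factor, c=2.998e8):
    """Distance to an impedance discontinuity: d = (vf * c * t) / 2,
    where t is the round-trip delay of the reflection."""
    return velocity_factor * c * round_trip_delay_s / 2.0

def reflection_coefficient(z_load, z0=50.0):
    """rho = (ZL - Z0) / (ZL + Z0): -1 for a short circuit, +1 for an
    open circuit, 0 for a matched (fault-free) line."""
    return (z_load - z0) / (z_load + z0)

# Reflection arrives 200 ns after the outgoing step, on solid-PE coax (vf ~ 0.66):
print(tdr_fault_distance_m(200e-9, 0.66))  # about 19.8 m to the fault

# A short circuit inverts the returning step completely:
print(reflection_coefficient(0.0))  # -1.0
```

The sign and size of the reflected step (the reflection coefficient) is what lets a TDR distinguish a short from an open, and a loose connector from a clean break.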
12.5.7 Determining cable-shield integrity

Leaking radio-frequency fields produced by damaged cable shields, incorrectly installed connectors, etc., can be sensed and localized by using a near-field electromagnetic field probe, in conjunction with a spectrum analyzer or an oscilloscope, and perhaps an RF signal generator. Such probes can be obtained commercially, or are easily made using short lengths of miniature coaxial cable. The construction and use of homemade RF probes is discussed in Ref. [47].
Summary of some important points

12.2 Permanent or semi-permanent electrical contacts

12.2.1 Soldering

(a) Poorly made or degraded solder joints may have permanent or intermittent high resistances or open circuits, and may be non-ohmic.
(b) The ability to solder well is a very important one in a laboratory, and requires adequate training and practice.
(c) Bad workmanship is a major cause of solder-joint problems.
(d) Problems caused by defective joints may take up to several years to become apparent.
(e) Cleanliness of the items to be joined, the soldering iron and other tools, and the solder itself, is critical to the formation of a dependable solder joint.
(f) Items being soldered must be supported so that the solder in the joint is not exposed to any significant mechanical loads in service.
(g) Always use the highest-quality electronic-grade solder. Only tin–lead solders (preferably eutectic kinds) should be used for ordinary soldering of electronics – avoid "lead-free" types.
(h) Avoid allowing different types of solder to mix during the creation of a solder joint, since this can ultimately result in fatigue failure of the joint.
(i) In high-performance applications, or when RA fluxes are used, remove flux from printed circuit boards and other insulating surfaces when the entire soldering operation is complete.
(j) The tin in ordinary solders tends to rapidly dissolve thin gold, silver, and copper wires and films when the solder is molten.
(k) Solder joints made to gold, with ordinary solder, tend to be brittle and unreliable.
(l) In high-voltage circuits, solder joints should have a ball shape, in order to prevent high electric fields (and possible corona or arcing problems).
(m) For long-term reliable service, ordinary tin–lead solder joints should not be exposed to temperatures above 60 °C.
(n) Solder joints should not be required to carry currents of more than 50 A.
12.2.2 Crimping, brazing, welding, and the use of fasteners

(a) When stranded wires are being joined to other stranded wires, lugs, or connectors, crimping is a fast and relatively easy way of creating very-high-reliability connections consistently, with little skill needed.
(b) Contacts made by crimping, welding, and brazing are suitable for high-temperature or high-current applications that would cause solder joints to degrade.
(c) Pressed-conductor high-current contacts, made using mechanical fasteners, are susceptible to rapid thermal-runaway failure if they are poorly made or allowed to deteriorate.
12.2.3 Summary of methods for making contacts to difficult materials

(a) Ultrasonic soldering is a very useful technique for making contacts to materials (such as aluminum, stainless steel, and niobium) that are difficult to solder using normal methods because of their tenacious oxide layers.
(b) Friction soldering, while not as effective as ultrasonic soldering, is an inexpensive method for making contacts to difficult materials.
(c) Indium solders, such as the 52/48 In/Sn eutectic, are particularly useful, but have a tendency to corrode in humid environments and in the presence of halide ions.
(d) Difficult materials can be made easy to solder by electroplating or vacuum-depositing a more solderable metal (e.g. palladium) on their surfaces.
(e) Resistance welding (e.g. spot welding) is a useful complement to normal soldering methods, since it allows good contacts to be made in situations in which soldering is unsatisfactory (and vice versa).
12.2.4 Ground contacts

(a) Good-quality contacts to hardware grounds (e.g. equipment enclosures, instrument racks, etc.) are often not easy to achieve.
(b) Poor-quality or accidental ground contacts can cause problems that are extremely difficult to troubleshoot.
(c) Hardware ground conductors should not be used to complete an electrical circuit.
(d) Hardware ground connections formed using soldering, brazing, or welding are preferable to those made using fasteners (i.e. screws, etc.). When fasteners must be used, star washers (or "toothed lockwashers") should generally be employed.
(e) Accidental or incidental ground contacts should be avoided in sensitive analog or digital systems, and deliberate grounds should be made where necessary using a reliable method.
12.2.5 Minimization of thermoelectric EMFs in low-level circuits

(a) Temperature differences across junctions between dissimilar materials are the most common cause of disturbances during low-level d.c. voltage measurements.
(b) Soldering should generally be avoided as a way of making contacts for the purpose of making such measurements.
(c) The best ways of joining copper conductors in low-level d.c. circuits are by welding, crimping, or by creating press contacts using mechanical fasteners.
12.3 Connectors

12.3.1 Introduction

Connectors are often the leading cause of failures in electronic systems in general. The most common problems are permanent or intermittent high contact resistances or open circuits.
12.3.2 Failure modes

Failures are often due to:
(a) improper wiring of the connector;
(b) contact damage (e.g. bent pins) and the buildup of oxidation, contamination, or corrosion on contacts;
(c) fatigue failure of the contact–cable-wire junctions (e.g. solder joints) inside the connector shell;
(d) loosening or corrosion of shielded connector shell subsections;
(e) problems with the cable shields where they enter the connector (i.e. damage, improper attachment, pigtails, etc.); and
(f) contamination or moisture on insulator surfaces.
12.3.3 Causes of connector failure

(a) The main reason for connector failures is misuse, such as: (i) improper wiring and attachment of the connector to a cable or instrument enclosure (very common), (ii) joining connectors with their counterparts without checking to see whether they are compatible or oriented correctly, and (iii) exposing connector contacts or insulator surfaces to contaminants or moisture.
(b) Connectors often fail because of normal wear and tear – many common connectors (such as BNC and SMA coaxial types) are rated for only 500 mating/demating cycles, and sometimes even less.
(c) At elevated temperatures, connector spring contacts will relax over time, so that the contact force is diminished. Brass contacts cannot be used above 75 °C.
(d) Corrosion by atmospheric gases and pollution can be a problem for connector contact materials, except good-quality gold-plated ones. This can occur even in apparently benign environments.
(e) Particulates on connector contacts can cause open circuits, wear of contact plating metals, and corrosion.
12.3.4 Selection of connectors

(a) Connectors should be of high quality. Many common consumer-grade connectors are poorly designed and manufactured.
(b) Well-designed connectors should have: (i) a good-quality cable clamp (for cable-mounted connectors – this is essential), (ii) a bend relief (for cable-mounted connectors), (iii) a locking mechanism, and (iv) protection for vulnerable contacts (e.g. protective hoods for pins in male connectors).
(c) Where electromagnetic shielding is needed, cable clamps that make 360° contact with the cable are desirable. Flat cable clamps should be avoided.
(d) Coaxial connectors in which the outer cable clamp is crimped onto the cable are more robust, mechanically and electrically, than ones involving a ferrule-clamp and backnut.
(e) Crimp joints between cable wires and connector contacts are generally superior to soldered ones, and crimping is easier than soldering.
(f) Connectors that are attached to cables by using a twist-on action (as in the case of some coaxial types) are highly unreliable, and should be completely avoided.
(g) Connectors with metal shells (housings) are frequently of considerably higher quality than those with plastic ones, and metal shells are important for electromagnetic shielding.
(h) Although circular multi-pin connectors are more robust than D-type ones, they can have considerably worse inter-pin crosstalk problems.
(i) The most reliable contact plating material for connectors carrying low-level signals and data is generally electroplated gold.
(j) Silver electroplate is often preferred for contacts carrying power and high currents.
(k) Contacts that are to be mated should be plated with the same materials. In particular, gold-plated contacts should not be joined to ones that are plated with non-noble metals (e.g. nickel, silver, or tin).
(l) In situations where connector voltages and currents are likely to approach the design limits, derating is necessary.
(m) Connector shield contacts should not be used for distributing current-carrying grounding in an electronic system – dedicated connector pins should be provided for this. Coaxial connectors are the only exception to this rule.
12.3.5 Some particularly troublesome connector types
(a) Microwave connectors are very sensitive to even tiny amounts of dirt and damage, and must be used, protected, inspected and stored with extraordinary care.
Summary of some important points
(b) High-voltage connectors are seldom completely satisfactory, and should be avoided in favor of permanent connections using ceramic standoffs or feedthroughs whenever possible.
(c) Connectors and cables to be used above about 2 kV should be coaxial types.
(d) High-voltage connectors should be cleaned before being mated.
(e) The current in high-current connector contacts should be restricted (i.e. derated) to no more than 50% of its maximum allowable value.
(f) Connectors used in the CAMAC system of modular electronic instruments are delicate, have numerous contacts, and are easily damaged.
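The 50% derating guideline for high-current contacts is simple arithmetic, but it is easy to overlook when budgeting currents across many contacts. A minimal sketch (the function names, the rating, and the example currents are illustrative, not from any datasheet):

```python
# Illustrative helper for the 50% current-derating rule for high-current
# connector contacts. The rated current and derating factor are example values.

def derated_limit(rated_current_a: float, derating_factor: float = 0.5) -> float:
    """Maximum recommended operating current for a contact, after derating."""
    return rated_current_a * derating_factor

def is_within_derating(operating_current_a: float, rated_current_a: float,
                       derating_factor: float = 0.5) -> bool:
    """True if the planned operating current respects the derated limit."""
    return operating_current_a <= derated_limit(rated_current_a, derating_factor)

# A contact rated at 10 A should carry no more than 5 A:
print(derated_limit(10.0))            # 5.0
print(is_within_derating(6.0, 10.0))  # False: 6 A exceeds the derated limit
```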
12.3.6 Some points concerning the use of connectors
(a) Delicate panel-mounted connectors should be physically and visually accessible.
(b) Connector savers can be very helpful in extending the life of connectors (especially microwave ones), such as those located on instrument panels.
(c) Lubricants can be useful, in some cases, in reducing connector contact wear and preventing chemical, galvanic, and electrolytic corrosion. Their use is essential with tin-plated contacts.
(d) Crosstalk in multi-pin connectors can be reduced by providing a barricade of grounded pins between the culprit and victim pins.
(e) Connectors should occasionally be inspected for damage, corrosion or contamination. Damaged or significantly corroded ones should be replaced.
(f) The cleanliness of interface areas is especially important for connectors that are part of high-voltage, high-current, or very-high-impedance circuits, or high-frequency RF and microwave systems.
12.4 Cables and wiring
(a) Along with connectors, cables are usually the most unreliable components in electronic systems.
(b) Cable faults include: open circuits, short circuits (especially short circuits to ground), excessive transmission loss, electromagnetic interference (including crosstalk), microphonic noise, signal reflections, arcing and corona, and overheating. Problems are often intermittent.
12.4.2 Cable damage and degradation
(a) Cables used in RF and (particularly) microwave systems, high-voltage apparatus, high-impedance circuits, and in experimental setups employed in low-level or high-precision measurements, are particularly vulnerable to damage and degradation.
(b) Manual handling and mistreatment of cables are among the most important causes of cable faults. Cables should not be: (i) left on the floor, (ii) bent to a small radius, or (iii) tugged in order to demate their connectors.
Interconnecting, wiring, and cabling for electronics
(c) Do not coil cables for storage by winding them around the hand and elbow – cables should be rolled up in loose loops after use, and unrolled prior to use.
(d) The shielding ability of a plain- or tinned-copper braid shield on a coaxial cable will gradually deteriorate if the cable is left undisturbed for a long period. Cables with silver-plated copper braid shields do not have this problem.
12.4.3 Selection of cables and cable assemblies
(a) One should know the provenance of cables (especially patch cords) used in critical applications.
(b) The fabrication of cable assemblies is a skilled task, and should be left to those with experience.
(c) High-quality cable assemblies (especially patch cords) should be obtained from commercial sources, whenever possible, rather than be made by the user.
(d) Firms exist that will custom-make specialized cable assemblies.
(e) If it is necessary for the user to make up a cable assembly, it is highly advisable to follow the connector manufacturer’s instructions for joining the connectors to the cable.
12.4.4 Electromagnetic interference issues
(a) Cable shields must always be connected to equipment enclosures and circuit commons at one or both ends of the cable.
(b) Although the quality of a shield in a cable has an important influence on rejection of electromagnetic interference, most difficulties are actually caused by the way the shield is connected at the ends (i.e. lack of 360° contact, etc.).
(c) Cables used as patch cords should be shielded with braid (preferably), or braid and aluminum foil. All-foil shields, and wrapped-wire ones, should be avoided.
(d) The fractional area of a cable covered by a braided shield should be at least 85%.
(e) When shielded cables are joined to their connectors, the creation of shield pigtails should be avoided.
(f) The number of twists per unit length in a twisted-wire-pair cable (used to reduce inductive pickup from an a.c. magnetic field) is of no importance if the cable end-connections are included in the field.
(g) To reduce radio-frequency and capacitively coupled interference, unused twisted pairs in a multi-pair cable should be joined and grounded at one end.
12.4.5 Some comments concerning GP-IB and ribbon cables
(a) GPIB cable assemblies are readily damaged by overbending. Special cables are available that can reduce such problems.
(b) The use of ribbon cables should generally be avoided (except for special types designed for cryogenic applications), especially outside apparatus enclosures.
12.4.6 Additional points on the use of cables
(a) Cables should be routed so as to make them easy to trace, remove and replace.
(b) Low-level signal or data cables should not be placed alongside mains-power conductors or telephone cables, and should not be routed near fluorescent lights, power transformers, or (without investigation) next to drop ceilings.
(c) Metallic cable raceways provide physical protection for long cables, and can be used to screen poorly shielded or unshielded cables from crosstalk and ambient electromagnetic fields.
(d) In general, and if possible, cables that produce, or are susceptible to, electromagnetic interference should be routed next to a grounded sheet-metal surface.
(e) Unshielded high-voltage cables should not be allowed to touch grounded surfaces.
(f) Cables should be supplied with reliable and long-lived markings. Purpose-made cable markers are available from commercial sources.
(g) Cable and wire should be stored in stable indoor environments – protected from large temperature and humidity fluctuations.
(h) Electrically faulty cables, or ones with visible damage, should be replaced, and subsequently destroyed in order to prevent them from accidentally reentering service. One should not attempt to repair them.
12.4.7 Wire issues – including cryostat wiring
(a) Enameled single-strand copper wires (“magnet wires”) of diameter less than about 80 µm (40 AWG) require special handling and are less reliable than larger types.
(b) The insulation on single-strand enameled wires should be selected with care. Modified polyester is a relatively robust type. Double insulation coatings can be beneficial.
(c) The solvents in IMI 7031™ varnish (or “GE varnish”) can attack enamel-type insulations (particularly Formvar).
(d) Wiring in cryogenic apparatus is notorious for causing reliability problems.
(e) Woven cables (“cryogenic woven looms”) should be used in cryostats.
(f) Use redundant defensive measures (e.g. robust enamels, woven cables, varnish-soaked lens-cleaning tissue) to protect cryostat wiring from damage.
(g) It is often necessary to secure cryostat wires to prevent movement and consequent noise. However, in order to prevent wire breakages, some slack is needed, and wires should be attached to support structures only at isolated spots, using small amounts of varnish.
(h) Cryostat wiring should be assembled, handled, and installed only by experienced people.
(i) Active fluxes in electronic-grade solders can attack fine and ultrafine enameled wire. Solders with no-clean fluxes (i.e. halide-free, with zero activation) should be used for soldering such wire, and the flux should be confined to the soldered joints.
12.5 Diagnostics
(a) The reliability of contacts, connectors, wires and cables is dependent on good design, workmanship, and proper handling – not on testing.
(b) The primary method of testing contacts and cables is by measuring their resistances, and comparing them with calculated values.
(c) Intermittent faults can be detected by producing mechanical or thermal disturbances in suspect items, while monitoring electrical properties with a device (e.g. an oscilloscope) capable of revealing transient electrical effects.
(d) Defective high-current contacts can be found by looking for evidence of overheating with an infrared thermometer.
(e) Cable insulation can be checked with an insulation tester, which measures insulation resistance using voltages of typically 100–1000 V, at very low currents.
(f) The detection and locating of faults along a long cable may be done using a time-domain reflectometer, which sends electrical pulses down a cable and times and records any reflections.
(g) Shield problems on cables or connectors can be detected and located with a near-field electromagnetic field probe.
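Two of the diagnostics above, comparing a measured cable resistance with a calculated value, and converting a time-domain reflectometer round-trip time into a fault distance, reduce to short calculations. The following sketch assumes room-temperature copper and a velocity factor of 0.66 (typical of solid-polyethylene coaxial cable); the function names and example values are illustrative and not from the text:

```python
import math

COPPER_RESISTIVITY = 1.68e-8  # ohm·m, copper near 20 °C (assumed value)
C = 2.998e8                   # speed of light in vacuum, m/s

def wire_resistance(length_m: float, diameter_m: float,
                    resistivity: float = COPPER_RESISTIVITY) -> float:
    """Expected d.c. resistance of a round solid wire, R = rho * L / A."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return resistivity * length_m / area

def tdr_fault_distance(round_trip_s: float,
                       velocity_factor: float = 0.66) -> float:
    """Distance to a reflection, given the pulse's round-trip time."""
    return velocity_factor * C * round_trip_s / 2.0

# 10 m of 0.5 mm diameter copper wire: expect roughly 0.86 ohm. A measured
# value far from this suggests a bad contact or a damaged conductor.
r_expected = wire_resistance(10.0, 0.5e-3)

# A reflection arriving 100 ns after launch indicates a discontinuity
# roughly 9.9 m down the cable (for this velocity factor).
d_fault = tdr_fault_distance(100e-9)
```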
References
1. R. Ulrich and L. Schaper, IEEE Spectrum 40, #7, 27 (2003).
2. P. Horowitz and W. Hill, The Art of Electronics, 2nd edn, Cambridge University Press, 1989.
3. K. D. Akin, Connector Specifier 15, 26 (1999).
4. P. C. D. Hobbs, Building Electro-Optical Systems: Making it all Work, John Wiley and Sons, 2000.
5. J. W. Ekin, Experimental Techniques for Low-Temperature Measurements: Cryostat Design, Material Properties, and Superconductor Critical-Current Testing, Oxford University Press, 2006.
6. M. Lee, in The ARRL RFI Book, E. Hare (ed.), The American Radio Relay League, 1999.
7. Low Level Measurements Handbook, 6th edn, Keithley Instruments, Inc., 28775 Aurora Road, Cleveland, Ohio, USA, 2004. www.keithley.com
8. www.youtube.com
9. video.google.com
10. www.curiousinventor.com
11. www.astro.umd.edu/~harris/docs/WellerSoldering.pdf
12. H. Smith, Quality Hand Soldering and Circuit Board Repair, 4th edn, Thomson Delmar Learning, 2004.
13. J. Moore, C. Davis, M. Coplan, and S. Greer, Building Scientific Apparatus, 3rd edn, Westview Press, 2002.
14. ECSS Secretariat, ESA-ESTEC Requirements & Standards Division, Space Product Assurance: The Manual Soldering of High-Reliability Electrical Connections (ECSS-Q-70-08A), ESA Publications Division, 1999.
15. R. Strauss, SMT Soldering Handbook, Newnes, 1998.
16. P. T. Vianco, Soldering Handbook, 3rd edn, American Welding Society, 1999.
17. M. Hansen and K. Anderko, Constitution of Binary Alloys, 2nd edn, McGraw-Hill, 1958.
18. J. L. Marshall, in Solder Joint Reliability, J. H. Lau (ed.), Van Nostrand Reinhold, 1991.
19. M. Judd and K. Brindley, Soldering in Electronics Assembly, 2nd edn, Newnes, 1999.
20. J. M. Kolyer and D. E. Watson, ESD from A to Z: Electrostatic Discharge Control for Electronics, 2nd edn, Kluwer Academic Publishers, 1999.
21. Indium/Copper Intermetallics (application note), Indium Corporation of America, 1676 Lincoln Avenue, Utica, NY, USA. www.indium.com
22. Totoku Electric Co., Ltd., 1-3-21 Okubo, Shinjuku-ku, Tokyo, Japan. www.totoku.com
23. B. R. Schwartz and G. R. Gaschnig, in Handbook of Wiring, Cabling, and Interconnecting for Electronics, C. A. Harper (ed.), McGraw-Hill, 1972.
24. R. A. Bulwith, Circuits Assembly 9, #4, 36 (1998).
25. High-Voltage Terminations, in Inspection Pictorial Reference, NASA Workmanship Standards. workmanship.nasa.gov/
26. J. Budnick, T. Bertuccio, K. Kirk, et al., IEEE Nuclear Science Symposium and Medical Imaging Conference 1991, Vol. 2, pp. 974–978.
27. F. Keister and E. Varga, in Handbook of Wiring, Cabling, and Interconnecting for Electronics, C. A. Harper (ed.), McGraw-Hill, 1972.
28. Electrical Design Guidelines for Electronics to be used in Experimental Apparatus at Fermilab, Electronics/Electrical of the Particle Physics Division, December 22, 1994. http://gate.hep.anl.gov/thron/elec docs/ElecDesignGuide.html
29. L. Ekman, in Space Vehicle Mechanisms: Elements of Successful Design, P. L. Conley (ed.), John Wiley & Sons, Inc., 1998.
30. C. Kindell, T. Kingham, and J. Riley, in Electronic Engineer’s Reference Book, 5th edn, F. F. Mazda (ed.), Butterworths, 1983.
31. Smithells Metals Reference Book, 6th edn, E. A. Brandes (ed.), Butterworths, 1983.
32. Metals Handbook – Ninth Edition; Volume 6 – Welding, Brazing and Soldering, American Society for Metals, 1983.
33. P. G. Slade, in Electrical Contacts – Principles and Applications, P. G. Slade (ed.), Marcel Dekker, Inc., 1999.
34. M. Braunovic, in Electrical Contacts – Principles and Applications, P. G. Slade (ed.), Marcel Dekker, Inc., 1999.
35. D. Atherton, Rev. Sci. Instrum. 38, 1805 (1967).
36. D. A. Cardwell and D. S. Ginley, Handbook of Superconducting Materials, CRC Press, 2003.
37. F. J. Fuchs, Electronic Production, 10, #4, 10 (1981).
38. J. Fuchs, Printed Circuit Assembly, 3, #7, 42 (1989).
39. I. R. Walker, Rev. Sci. Instrum. 66, 247 (1995).
40. Indium Corporation of America, 1676 Lincoln Avenue, Utica, NY, USA. www.indium.com
41. A. M. Cruise, J. A. Bowles, T. J. Patrick, and C. V. Goodall, Principles of Space Instrument Design, Cambridge University Press, 1998.
42. E. N. Smith, in Experimental Techniques in Condensed Matter Physics at Low Temperatures, R. C. Richardson and E. N. Smith (eds.), Addison-Wesley, 1988.
43. H. W. Denny, Grounding for the Control of EMI, Don White Consultants, Inc., 1983.
44. Department of Defense Handbook: General Guidelines for Electronic Equipment, MIL-HDBK-454B, 15 April 2007. www.assistdocs.com
45. H. W. Ott, Noise Reduction Techniques in Electronic Systems, 2nd edn, John Wiley & Sons, 1988.
46. R. A. Pease, Troubleshooting Analog Circuits, Butterworth-Heinemann, 1991.
47. M. Mardiguian, EMI Troubleshooting Techniques, McGraw-Hill, 2000.
48. P. D. T. O’Connor, Practical Reliability Engineering, 4th edn, Wiley, 2002.
49. T. A. Yager and K. Nixon, IEEE Trans. Components, Hybrids, Manuf. Technol. CHMT-7, #4, 370 (1984).
50. W. H. Abbot, in Electrical Contacts – Principles and Applications, P. G. Slade (ed.), Marcel Dekker, Inc., 1999.
51. G. Davis and R. Jones, Sound Reinforcement Handbook, 2nd edn, Hal Leonard, 1989.
52. R. B. Comizzoli, G. R. Crane, M. E. Fiorino et al., J. Mater. Res., 12, #3, 857 (1997).
53. C. Herrmann, P. J. Miller, and W. D. Prather, 21st Annual Connectors and Interconnection Technology Symposium, Int. Inst. Connector & Interconnection Technol., 1988.
54. Microwave Connector Care, HP Part No. 08510–90064, April 1986, Hewlett Packard Company (now Agilent). www.agilent.com (Another, more accessible source of information on the care of microwave connectors is: Operating and Service Manual, Agilent Technologies 85131E/F/H NMD-3.5 mm – f – to 3.5 mm Flexible Test Port Return Cables, February 2008, Agilent Part Number: 85131–9009. http://cp.literature.agilent.com/litweb/pdf/85131-90009.pdf)
55. D. M. Dobkin, RF Engineering for Wireless Networks, Newnes, 2004.
56. C. A. Harper, Passive Electronic Component Handbook, McGraw-Hill, 1997.
57. R. S. Timsit, in Electrical Contacts – Principles and Applications, P. G. Slade (ed.), Marcel Dekker, Inc., 1999.
58. R. B. Comizzoli and J. D. Sinclair, in Encyclopedia of Applied Physics – Volume 6, G. L. Trigg (ed.), VCH Publishers, Inc., 1993.
59. M. Antler, in Electrical Contacts – Principles and Applications, P. G. Slade (ed.), Marcel Dekker, Inc., 1999.
60. J. D. Guttenplan, in Metals Handbook – Ninth Edition; Volume Thirteen – Corrosion in the Electronics Industry, ASM International, 1987.
61. H. C. Shields and C. J. Weschler, HPAC Engineering, May 1998, p. 46.
62. W. Shawlee, Defense Daily Network, August 1, 2000. www.defensedaily.com
63. Positronic Industries, Inc., 423 N Campbell Ave., Springfield, MO, USA. www.connectpositronic.com
64. N. Ellis, Electrical Interference Technology: A Guide for Designers, Manufacturers, Installers and Users of Electrical and Electronic Equipment, Woodhead Publishing Limited, 1992.
65. T. Williams, EMC for Product Designers, 2nd edn, Butterworth-Heinemann, 1996.
66. F. F. Mazda, Discrete Electronic Components, Cambridge University Press, 1981.
67. Information from: Brush Wellman Inc., 17876 St. Clair Avenue, Cleveland, OH, USA. www.brushwellman.com
68. R. S. Mroczkowski, in Electronic Connector Handbook, R. S. Mroczkowski (ed.), McGraw-Hill, 1998.
69. W. Shawlee, Aviation Today, July 1, 2003. www.aviationtoday.com
70. J. R. Barnes, Robust Electronic Design Reference Book – Volume I, Kluwer, 2004.
71. F. Watt, in Handbook of Reliability Engineering and Management, 2nd edn, W. G. Ireson, C. F. Coombs, Jr., and R. Y. Moss (eds.), McGraw-Hill, 1996.
72. Suhner RF Connector Guide, Huber+Suhner AG, 9100 Herisau, Switzerland. www.hubersuhner.com
73. M. Mayer and G. H. Schröder, IEEE Electrical Insulation Mag. 16, #2, 8 (2000).
74. J. Reason, Power 123, #9, 73 (1979).
75. M. S. Naidu and V. Kamaraju, High Voltage Engineering, 2nd edn, McGraw-Hill, 1996.
76. J. R. Perkins, in Handbook of Wiring, Cabling, and Interconnecting for Electronics, C. A. Harper (ed.), McGraw-Hill, 1972.
77. S. Dove, in Handbook for Sound Engineers, 3rd edn, G. M. Ballou (ed.), Butterworth-Heinemann, 2002.
78. B. Whitlock, in Handbook for Sound Engineers, 3rd edn, G. M. Ballou (ed.), Butterworth-Heinemann, 2002.
79. K. R. Fause, J. Audio Eng. Soc. 43, 498 (1995).
80. W. R. Leo, Techniques for Nuclear and Particle Physics Experiments: a How-To Approach, 2nd edn, Springer Verlag, 1994.
81. Loctite® 243; Henkel Corporation; www.henkelna.com
82. T. G. Grau, IEEE Trans. Components, Hybrids, Manuf. Technol. CHMT-1, #3, 286 (1978).
83. D. Tomal and N. S. Widmer, Electronic Troubleshooting, 3rd edn, McGraw-Hill, 2004.
84. P. Krieger, in The ARRL RFI Book, E. Hare (ed.), The American Radio Relay League, 1999.
85. The Yamaha Guide to Sound Systems for Worship, J. F. Eiche (ed.), Hal Leonard, 1990.
86. S. Ahmad and B. W. Smithers, Shielding and Ageing Effects with Flexible Coaxial Cable, IEEE 1985 International Symposium on Electromagnetic Compatibility (Cat. No. 85CH2116-2), 1985, pp. 287–295.
87. G. M. Ballou, in Handbook for Sound Engineers, 3rd edn, G. M. Ballou (ed.), Butterworth-Heinemann, 2002.
88. E. G. Fischer and H. M. Forkois, in Shock and Vibration Handbook, 4th edn, C. M. Harris (ed.), McGraw-Hill, 1996.
89. T. Stearns, in Electronic Connector Handbook, R. S. Mroczkowski (ed.), McGraw-Hill, 1998.
90. R. Morrison, Grounding and Shielding Techniques, 4th edn, John Wiley & Sons, 1998.
91. P. Lee and P. L. Conley, in Space Vehicle Mechanisms: Elements of Successful Design, P. L. Conley (ed.), John Wiley & Sons, Inc., 1998.
92. L. J. Payette, IEEE Electrical Insulation Magazine, 2, #3, 29 (1986).
93. J. R. Devaney, G. L. Hill and R. G. Seippel, Failure Analysis Mechanisms, Techniques, and Photo Atlas – a Guide to the Performance and Understanding of Failure Analysis, Failure Recognition & Training Services, Inc., 1983.
94. I. R. Walker, Rev. Sci. Instrum. 75, 1169 (2004).
95. C. R. Cunningham, P. R. Hastings and J. M. D. Strachan, Cryogenics 35, 399 (1995).
96. E. J. Croop, in Handbook of Wiring, Cabling, and Interconnecting for Electronics, C. A. Harper (ed.), McGraw-Hill, 1972.
97. M. N. Wilson, Superconducting Magnets, Oxford University Press, 1983.
98. D. Gerstle, Reliability, Maintainability, & Supportability 8, #2, 2 (2004).
99. E. F. Godwin, in Handbook of Wiring, Cabling, and Interconnecting for Electronics, C. A. Harper (ed.), McGraw-Hill, 1972.
100. General Specification for the Design and Testing of Space Vehicle Wiring Harnesses (Military Specification) – DOD-W-83575A (USAF), December 22, 1977.
101. E. Takano, Electrical Contacts – 2000. Proceedings of the Forty-Sixth IEEE Holm Conference on Electrical Contacts, 169 (2000).
102. C. Maul, J. W. McBride, and J. Swingler, IEEE Trans. Comp. Packag. Technol., 24, #3, 370 (2001).
103. K. W. Moll and D. R. McCarter, Electronic Packaging and Production 16, #6, W29 (1976).
104. Fluke Corporation. www.fluke.com
13 Computer hardware and software, and stored information
13.1 Introduction
The ubiquity of computers in just about every aspect of experimental work, from their use in preliminary literature searches, through the collection and analysis of data, to the final publication of results, means that the consequences of their imperfections are unavoidable. Although hardware and software problems can both be a cause of frustration, the latter tend to be particularly insidious, and harder to solve. The failure of hardware is sometimes heralded by unusual physical phenomena, such as excessive vibrations or noises [1]. Failures of software, on the other hand, take place suddenly and without warning. Also, again unlike the situation with hardware, the concept of safety margins (or “derating”) normally does not exist in software design.1 The enormous complexity of many types of software when compared with hardware makes the former particularly susceptible to unforeseen failure modes. (For example, operating systems and large applications often contain millions, or even tens of millions, of lines of source code.) These characteristics can all make software failures especially troublesome.
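The footnote's point, that double-precision arithmetic provides a margin of safety against roundoff error, can be made concrete with a short sketch. Single precision is simulated here with the standard struct module; the increment and loop count are arbitrary examples:

```python
import struct

def to_float32(x: float) -> float:
    """Round a Python float (which is double precision) to single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.1 one million times, rounding to single precision at each
# step, and compare with the same sum carried out in double precision.
n = 1_000_000
single = 0.0
double = 0.0
for _ in range(n):
    single = to_float32(single + to_float32(0.1))
    double += 0.1

exact = n * 0.1  # 100000.0
print(abs(double - exact))  # tiny accumulated error (order 1e-6)
print(abs(single - exact))  # many orders of magnitude larger
```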
13.2 Computers and operating systems
13.2.1 Selection
Compared with many other electronic devices, computers (including their hardware, software, data storage, security and interconnection aspects) are generally very unreliable. Hence, and since they play such an important part in so many facets of research, it pays to use high-quality computer equipment and software.
The quality of computer hardware varies over a wide range – from systems of marginal dependability, to ones that can be counted on to give years of reliable service. Profit margins on personal computers are generally extremely small, and manufacturers therefore have a great incentive to do everything possible to reduce costs. The poor quality that often results is particularly striking at the low end of the price scale. In particular, consumer-grade PCs are generally cheaply constructed and unreliable [2]. Business-class machines are
1 However, in numerical calculations, double-precision numbers are often used (instead of single-precision ones) to provide a margin of safety against excessive roundoff errors.
normally a much better choice for laboratory work. Alternatively, it is straightforward to build a high-quality PC at relatively low cost using top-quality components (i.e. a power supply, motherboard, hard drive, etc.), and this is widely done. Reviews of consumer-grade personal computer hardware can be found at the Consumer Reports2 website [3]. Useful general information about PC hardware is provided in Ref. [4].

The choice of operating systems is also an important issue. As is well known, many computer reliability problems stem from the presence of viruses, Trojan horses, and other harmful software agents (malware). From this point of view, Mac OS is a very good operating system. The number of viruses that can infect it is thought to be very small [5,6]. Furthermore, since (at the time of writing) only a small percentage of computers are Macs, they are not likely to become targets for the creators of malware. For the same reasons, Linux-based systems are also regarded as being relatively trouble free. Such “security-by-obscurity” is not a result of any superior intrinsic qualities of these operating systems (although they are acknowledged to be very good in this respect). Nevertheless it is real, and an important advantage of Mac OS and Linux. Security issues are discussed further in Section 13.7.

The stability of the operating system (i.e. intrinsic resistance to crashes and other such problems) is also an issue. For example, Mac OS is based largely on the FreeBSD version of UNIX. The UNIX operating system has undergone continuous refinement over several decades, and is renowned for its stability [7]. The result is that Mac OS is usually a very stable operating system. Similarly, the quality of the code in Linux (Linux is closely related to UNIX) is also very high, and Linux-based systems have generally shown themselves to be very stable [8].
In comparing and judging various operating systems (and indeed all software), one must look at them in similar stages of their development cycles. (For example, most new software tends to be buggy – see Section 13.8.1.)

One advantage that Mac computers have over Windows-based PCs (and PCs that use Linux) is that a single company (Apple) makes both the hardware and the operating system. It is therefore in a position to get the two working together smoothly [9]. In fact, Mac OS will (in practice, if not in principle) operate only on computers made by Apple, whereas Windows and Linux-based systems can run on machines made by many different manufacturers. (These include ones made by Apple.) Such flexibility is not necessarily an advantage, because of the possibility of compatibility problems. The hardware used in Mac computers is also generally of high quality.

From the standpoint of reducing human error, ease of use is an important consideration. Mac OS has a reputation for being extremely easy to set up and use. The installation of new operating systems or third-party software is normally very straightforward and trouble-free. Linux-based systems have historically been considered to be relatively difficult from this perspective. This is one factor that has probably limited the widespread adoption of Linux. However, the situation has improved greatly. Putting Linux on a PC with normal hardware has become very straightforward. Furthermore, it is possible to buy a number of PCs that come pre-installed with Ubuntu and other Linux distributions, along with various useful Linux-based applications. The graphical user interfaces provided in many Linux desktop environments are essentially as user-friendly as those in Mac OS or Windows.

2 Consumer Reports is an independent, non-profit organization that tests and evaluates consumer products.
However, some problems remain. Depending on what is needed, getting things to work properly under Linux may still require considerable effort, and perhaps involve some compromises. In particular, finding drivers for plug-in hardware (such as audio cards) can often be difficult, especially if the hardware is uncommon. (Drivers are pieces of software used by the operating system to run the hardware.) If Linux is used, it is particularly important to find out whether drivers exist for any proposed hardware. Issues involved in changing over from Windows to Linux are discussed in Refs. [7] and [9], and at various online sites.

Although the amount of software intended for use under Mac OS is somewhat limited in comparison with that which can run under Windows, this is in practice not often a major problem in many types of research. In any case, virtualization software is available that allows programs designed to operate under Windows to run, albeit at a reduced speed, under Mac OS. (Unfortunately, with Windows running on a Mac in virtual form, substantial security issues are reintroduced. Also, the virtualization software itself may have reliability problems.) A large amount of software has been developed to operate under UNIX, and this can run directly on a Mac, but perhaps without the graphical user interface provided by Mac OS. Because of the growing popularity of Linux, many software applications that are of interest in scientific research are available for systems based on it. Virtualization software is also available that makes it possible to run Windows programs under Linux.
13.2.2 Some common causes of system crashes and other problems
Operating system misbehavior is not necessarily caused by viruses and the like, or by a lack of robustness of the operating system itself, but may be the result of, for example [10]:
(a) buggy, corrupted, or improperly installed applications,
(b) conflicts between applications running simultaneously,
(c) incorrect or incomplete operating system upgrades,
(d) corrupted core files,
(e) too little hard-disk space, and
(f) inadequate disk maintenance (using a disk-checking utility and defragmenting the disk).
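One of the causes listed above, running out of hard-disk space, is easy to guard against in a data-acquisition script by checking free space before a long run begins. A minimal sketch using the standard library; the path and the 10 GB threshold are arbitrary example values:

```python
import shutil

def enough_disk_space(path: str = ".",
                      min_free_bytes: int = 10 * 1024**3) -> bool:
    """True if the filesystem containing `path` has at least the given
    amount of free space (default threshold: an arbitrary 10 GB)."""
    usage = shutil.disk_usage(path)  # named tuple: (total, used, free)
    return usage.free >= min_free_bytes

# Check before starting a long acquisition run, rather than failing mid-run:
if not enough_disk_space("."):
    print("Warning: low disk space - data acquisition may fail mid-run.")
```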
If too many applications are running simultaneously, this may place an excessive burden on a computer’s memory and processing power, which can cause slowdowns or even system crashes.

Software that runs unseen in the background whenever the computer is switched on (i.e. “background services” or “background programs”) should be treated with particular caution [10]. These include, for example, anti-virus programs set to run continuously, and disk-checking utilities. Background programs can cause insidious system instabilities if they are not well behaved. Adware and spyware are types of unwanted software that operate in the background, and may be installed on a computer unintentionally or surreptitiously. (See the discussion of security issues in Section 13.7.)

Instabilities are also often caused by hardware or hardware-driver software problems. Many problems that are erroneously attributed to physical failures of hardware are actually caused by corrupted or out-of-date drivers [10]. In diagnosing hardware faults, driver
problems should be suspected before attention is turned to the hardware. Some common hardware-related sources of trouble are [4,10]:
(a) overheating of the computer,
(b) irregularity of the a.c. mains power (see Section 13.4.3),
(c) dislodged, defective, or improperly connected internal or external connectors and cables3 (the most frequent cause of non-functioning computers),
(d) low-quality, overloaded, or failing power supplies (see Section 13.4.2),
(e) hardware, driver software, or the operating system are not compatible with each other,
(f) driver software is corrupted or incorrect,
(g) different pieces of hardware are in conflict, and
(h) hardware (e.g. memory or the motherboard) is failing, perhaps because it is old, has been exposed to high heat and humidity for a long time, or has been subjected to large power-line overvoltages.
13.2.3 Information and technical support
In case of difficulties, the “help and support” or “help” facility provided with an operating system or application is a convenient place to start looking for guidance.

Another good way of obtaining information about computer problems is to make use of technical-support Web sites. These include sites operated by manufacturers of computer hardware or operating systems, ones operated by companies that are not involved in manufacturing, and peer-to-peer sites, at which individual computer users may answer questions. Carrying out a Web search (using as keywords “support sites” and the name of the operating system) can turn up useful lists. A review of online technical support resources is provided in Ref. [10].

In return for a one-off or annual fee, some firms can provide telephone and email support for computer equipment and software. Also, it may be possible to purchase telephone and email support along with the computer as part of a multi-year support contract. (Telephone support in particular can be extremely useful.)

It is often worth having a guidebook on the operating system, especially for those occasions when the computer is not working at all, and the Internet is therefore inaccessible. Compact “pocket guides” are also available, and can be helpful in providing easy access to commonly needed information. One useful book on software and hardware problems in Windows XP-based PCs (but with information pertaining to computers in general) is Ref. [10]. See also Ref. [11].
13.3 Industrial PCs and programmable logic controllers

Industrial (or “ruggedized”) PCs are computers that are provided with features which make them more reliable, especially in harsh industrial and laboratory environments involving
conditions of high heat, dust, humidity, and electromagnetic noise [12]. These features may include particularly robust hardware, a high-performance cooling system, air filters, an oversized power supply, and/or shock mounts on the hard drives. Some ordinary desktop PCs have poor electromagnetic shielding, which can lead to interference with sensitive instrumentation (see page 371). Industrial PCs can be much better in this regard. It is also possible to obtain enclosures that are sealed to prevent the entry of dust and liquids. The operating systems of industrial PCs are the same as the ones used on conventional computers, and hence such computers are susceptible to the usual software vulnerabilities. Ruggedized laptop computers are similarly available for use in harsh conditions. A range of different types, made to different levels of robustness, is sold. So-called “fully rugged laptops” are suitable for the worst conditions. These computers are designed to withstand large drops, extreme operating temperatures, high humidity, spills, high altitudes, dust, vibrations, and so on. Most researchers would not find such devices to be worth the cost (they are very expensive). However, “semi-rugged laptops” are available that, while not as robust, are cheaper, and suitable for severe everyday use. The terms “fully rugged” and “semi-rugged” are not used uniformly by the various computer manufacturers. Programmable logic controllers (PLCs) are special-purpose computers that are used for controlling industrial and laboratory equipment and processes. Compared with PCs, both the hardware and the software are designed for the highest levels of reliability. The hardware is made for use in harsh environments, as in the case of industrial PCs. The software achieves its reliability mainly by being very simple. For example, these devices do not have operating systems in the usual sense.
This restricts their possible range of uses, but helps to ensure that they can be relied upon under conditions where failure would be unacceptable. PLCs are not suitable for experimental data collection, analysis, or many of the other tasks that PCs are used for. However, PLCs can be interfaced with PCs (e.g. using an Ethernet connection), so as to take advantage of the capabilities of both devices. PLCs have the additional advantage of operating in real time. For example, they will produce a change in an analog output voltage, in response to a change in an analog input, within a definite time span. This is not the situation in the case of PCs, where delays of uncertain duration are possible. These unpredictable delays can cause problems in some applications. PLCs are discussed in Ref. [13].
13.4 Some hardware issues

13.4.1 Hard-disc drives

13.4.1.1 Risks and causes of hard-drive failure

As electromechanical devices, hard-disc drives are among the most unreliable parts of a computer. Because they are used for information storage, their failure is of particular concern. Regular backups of the information on hard drives are an essential part of operating a computer (see Section 13.5).
Hard drives are especially vulnerable to failure as a result of prolonged overheating [10]. Every effort should be made to ensure that cooling of the computer is not inhibited, and that environmental temperatures are not excessive (see the comments in Section 3.4.1). It is possible to obtain add-on cooling fans for hard drives (“hard drive coolers”). Hard drives are also particularly susceptible to failure as a result of mechanical shock. This is, of course, mainly a concern for portable computers (such as laptop PCs), which are vulnerable to being dropped, but it can also affect desktop devices. Computers, and external hard drives, should not be moved while they are operating. Some portable computers are provided with acceleration-sensing devices, which detect when a large impact is about to take place, and put the hard drive in a safe state. Dust can cause hard-drive failures if it is able to find its way inside these devices. Computers should not be operated in dirty environments – especially those where ash and dust are present [10]. If it is necessary to do this, consideration should be given to using an industrial PC, rather than an ordinary type designed for office conditions (see Section 13.3). Hard drives should not be used in high-humidity environments. Large a.c. mains overvoltages can damage the components in a hard drive. This is one good reason always to connect a PC to the mains power through a surge suppressor (see page 495). Hard drives may also be damaged by electrostatic discharge (ESD). Elementary precautions should be taken to prevent such events while handling internal types (see page 393). The reliability of hard drives depends on the manufacturer. Even the better producers sometimes have unreliable drive product lines [4]. However, only a very few firms can be counted upon to make dependable devices with reasonable consistency.
Also, as with other computer hardware, profit margins on hard drives are very small, and low-cost ones are likely to be troublesome.
13.4.1.2 Use of redundant disc (RAID) systems

A good method of avoiding the loss of critical information in the event of a hard-drive failure is to use a RAID (Redundant Array of Independent Discs) system [11]. RAID setups can be controlled using either software or hardware. A hardware system involves a special board (a “RAID controller”) that plugs into a computer expansion slot. The normal hard drive is attached to this board, and another drive (preferably identical to the first) is also connected to it. Information that would usually be directed only to the normal drive is taken by the board and sent simultaneously to both drives. This duplication is done continuously, in real time, so that there is always a second copy of the contents of the normal hard drive. If the normal hard drive should fail, the redundant drive automatically takes over. Although it is possible to execute the RAID controller function in software, a hardware implementation is usually a better approach, because of the higher speed that it allows. Software RAID controllers have the advantage of being free and simple to set up, since they are usually provided with the operating system. The use of any kind of RAID setup is not a substitute for regular backups of the hard drive. This is because destruction of information on the normal hard drive by a process not involving drive failure (perhaps as a result of the erroneous deletion of a file, or the actions of a virus) is immediately mirrored on the redundant drive. RAID strategies are discussed in Ref. [14].
Several different types of RAID are available. The kind described above is referred to as a “RAID 1” scheme. Another version, called “RAID 5” (which involves a minimum of three hard drives) also provides security against hard-drive failure, and makes more efficient use of storage space than RAID 1 [14]. Yet another type of RAID, which is used only to increase the speed of hard-disk operations, and does not provide redundant information storage, is “RAID 0.”
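The difference in storage efficiency between these RAID levels can be expressed as a small illustrative calculation (the function is a hypothetical sketch, not part of any RAID specification):

```python
def usable_fraction(raid_level: int, n_drives: int) -> float:
    """Fraction of the total raw capacity that is available for data.

    RAID 0 stripes with no redundancy; RAID 1 mirrors a pair of drives
    (half the space); RAID 5 devotes one drive's worth of capacity to
    parity, spread across n >= 3 drives.
    """
    if raid_level == 0:
        return 1.0
    if raid_level == 1:
        return 0.5  # assuming the usual two-drive mirror
    if raid_level == 5:
        if n_drives < 3:
            raise ValueError("RAID 5 needs at least three drives")
        return (n_drives - 1) / n_drives
    raise ValueError("RAID level not covered by this sketch")
```

For example, with four drives, RAID 5 leaves three-quarters of the raw capacity usable, against one-half for mirrored pairs, which is the sense in which RAID 5 "makes more efficient use of storage space than RAID 1".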
13.4.1.3 Symptoms of impending drive failure

Some symptoms of impending hard-drive failure are [11]:
(a) the drive is producing strange noises, such as squeals or ticking sounds,
(b) it takes an unusually long time for the computer to start up,
(c) the drive is quiet for long intervals following attempts to open files or folders,
(d) the computer generates error messages with unexpected frequency, especially following attempts to execute tasks such as deleting files and folders, or copying and pasting,
(e) strange changes are being made to file or folder names,
(f) files are inaccessible or missing,
(g) the accessing of files takes an unusually long time, and
(h) information in files is disordered.
If any of the above events other than (a) occurs, the disk should be tested using hard-disk diagnostic software, such as a checking utility provided with the computer. If the drive is making unusual noises that might indicate physical damage, it should be switched off immediately.
13.4.1.4 Recovery of data from failed hard drives

If a hard drive fails, and for some reason there are no backups of the information on it, several options are available. Data-recovery software exists that can retrieve information from troublesome hard drives in circumstances where the drive is not physically damaged [10]. If the drive is physically damaged, or if the use of data-recovery software has not been successful, it is possible to send the device to a company that uses special techniques to recover information from faulty drives. Even if a drive has been very severely physically damaged (e.g. by being exposed to fire or water), it is often possible to recover at least some of the information. However, these firms (called “data-recovery services”) charge extremely high fees for their efforts. Furthermore, they are not always successful. Hence, every effort should be made to ensure that hard-drive information is properly duplicated, by backing up regularly, and possibly by employing a RAID arrangement. The use of data-recovery services is discussed in Ref. [10]. (Such firms can also retrieve data from faulty storage media of other types, such as CDs, magnetic tapes, and memory cards.)
13.4.1.5 Solid-state disks as an alternative to hard drives

It seems very likely that semiconductor-based devices called “solid-state disks” or “solid-state drives” will eventually supplant mechanical hard drives in many applications, especially where vast amounts of storage are not required. Since solid-state disks contain no moving parts, they are much more reliable than their mechanical counterparts, and last longer as well. Solid-state disks are also considerably faster than mechanical ones.
13.4.2 Power supplies

Low-quality computer power supplies can cause operating-system crashes, or may prevent the system from booting [4]. This can be the result of poor voltage regulation, large ripple levels, inadequate power, and other problems. Also, the power supply is responsible for cooling the computer as well as providing regulated power to all its parts. Computer crashes that are often attributed to the operating system are frequently due to the power supply. Outright failure of the power supply is another concern. These devices are the most frequently replaced components in PCs [10]. Some cheap power supplies can fail catastrophically, and in doing so damage or destroy the computer’s motherboard and other devices [4]. Some signs of impending power-supply failure are [10]:
(a) several attempts are needed to switch on the computer by pressing the power button,
(b) the power button must be pressed continuously in order to boot up the operating system,
(c) the computer spontaneously resets itself,
(d) unusual sounds are heard emerging from the power supply,
(e) the overall noise level from the computer drops for a while, then returns to normal (as the power supply stops and restarts),
(f) the flow of air from the power-supply fan at the back of the computer is smaller than usual, or absent, and
(g) hardware that is directly connected to the power supply, such as CD drives, vanishes from the monitor (even in the absence of device conflicts, or other such problems), only to reappear later.
Faults other than incipient power-supply failure can also cause these problems. For example, operating-system instability or motherboard damage can cause the computer to spontaneously reset itself, and a faulty power switch can make it difficult to turn on the machine. Nevertheless, and especially if two or more of the above problems occur, consideration should be given to obtaining a new power supply before the existing one fails completely. Only good-quality power supplies should be used if reliable, crash-proof computer systems are required [4]. Also, a power supply should be capable of providing substantially more current than the maximum amount needed to operate all the devices in the computer. If there is any doubt about whether a given power supply can handle the load, choose a larger one.
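The sizing rule can be sketched as a simple calculation (the 50% headroom figure and the component wattages are illustrative assumptions, not values from the text; the point is merely to choose a rating well above the worst-case demand):

```python
def recommended_psu_watts(component_loads_w, headroom=0.5):
    """Suggest a supply rating comfortably above the worst-case load.

    component_loads_w: maximum power draw of each device, in watts.
    headroom: extra margin above the summed peak (0.5 = 50%, an
    illustrative choice - not a figure from the text).
    """
    peak = sum(component_loads_w)
    return peak * (1.0 + headroom)

# Hypothetical build: CPU 95 W + graphics 150 W + drives and fans 55 W
# gives a 300 W peak, so a supply of at least 450 W would be chosen.
```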
Unfortunately, in practice, it is extremely difficult to buy a computer off-the-shelf with a high-quality power supply. Even top-of-the-range PCs are generally provided with mid-range power supplies. The quality of power supplies in low-cost PCs can be very poor indeed. One way to avoid this problem is to have a PC custom-made, and to specify that a high-quality power supply be provided. Alternatively, the power supply in a standard PC can be replaced with a high-quality type obtained as an aftermarket item. (NB: Some computer manufacturers use proprietary power supplies that cannot be replaced with ones made by other firms.) The selection of power supplies is discussed in Ref. [4].
13.4.3 Mains-power quality and the use of power-conditioning devices

One important source of computer-system misbehavior, or possibly damage, is poor-quality a.c. power. Events such as brownouts, blackouts, swells, and transient overvoltages can be a frequent cause of trouble. For example, even very brief losses of power (called “drops”), which may not be long enough to cause lights to flicker, can cause a computer to lock up [4]. Such drop-induced lockups are a particular problem for computers with mid- and low-quality power supplies. Drops are by far the most common form of blackout. It has been estimated by some that over 80% of all unexplained temporary computer malfunctions are caused by power-quality problems [15]. Even if they do not cause immediate damage to the components in a computer, repeated large transient overvoltages can cause them to degrade over time, which may result in premature failure [4]. At the very least, computers and their associated equipment should be supplied with surge suppressors and power-line filters. (Power-line filters also protect other equipment from electromagnetic interference generated by a computer’s switching power supply.) For the highest levels of reliability, it is also worth considering the use of an uninterruptible power supply (UPS). This is an external device (separate from the PC’s own internal power supply), which prevents the loss of power during brownouts or blackouts (for a limited time) with the aid of batteries. Some UPSs are capable of communicating with a computer, and shutting it down in a controlled way during a long blackout. They can instruct the computer to save and close documents, shut itself off, and restart itself when power has been restored [11]. Because of their large load currents, printers (especially laser printers) and monitors should not be connected to a UPS that supplies power to a computer.
To a certain extent, except for extreme events such as long blackouts or large transient overvoltages, having a high-quality power supply in a PC will offset the need for external power-conditioning devices [4]. For example, good PC power supplies may be able to ride out brief blackouts (i.e. drops, of perhaps up to 20 ms duration) that would cause problems for low-quality ones. The use of a combination of power-conditioning devices of this type (i.e. a surge suppressor and a UPS) can result in major improvements in computer-system reliability [10]. Power-quality issues are discussed further in Section 3.6. The selection of UPSs for use with computers is discussed in detail in Ref. [4].
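The ride-out reasoning amounts to comparing the length of a mains interruption with the supply's hold-up time, which can be stated as a trivial check (the 20 ms default mirrors the rough figure above; the real value must come from the supply's data sheet):

```python
def survives_drop(drop_ms, holdup_ms=20.0):
    """True if a momentary mains drop is within the supply's hold-up time.

    drop_ms: duration of the interruption, in milliseconds.
    holdup_ms: how long the supply's output stays in regulation after
    the input fails (20 ms here is an assumed figure for a good unit).
    Interruptions longer than this call for a UPS.
    """
    return drop_ms <= holdup_ms
```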
13.4.4 Compatibility of hardware and software

Computer hardware and software are sometimes purchased and subsequently found to be incompatible with each other. For example, it may turn out that a data-acquisition program is unable to control a plug-in multiplexer board. Before buying any hardware or software, it is essential to make sure that they will work correctly together. Even a particular version of the operating system itself (as well as third-party software) can be incompatible with a given piece of hardware [10]. In the case of Windows, hardware/software compatibility can be checked by consulting the “Windows Hardware Compatibility List” at the Microsoft website.
13.4.5 RS-232 and IEEE-488 (GP-IB) interfaces

13.4.5.1 RS-232 standardization problems

Connecting computers and instruments can sometimes be a major source of problems. Most instruments used in research are provided with an interface link to facilitate the transfer of commands, status information, and data between a control computer and the instrument. The most ancient of the various kinds of interface, which is still frequently found on PCs and laboratory equipment, is the RS-232 serial type. It is a simple design, which is inexpensive and easy for a manufacturer to incorporate into a product. The simplicity of RS-232 leads many beginners to imagine that using it to set up a communications link will be easy [16]. Sometimes it is easy, particularly in the case of modern equipment provided with good documentation, and if the driver software is well written. However, in general, connecting a computer to another device by using RS-232 can be an arduous undertaking. The main problem with RS-232 is lack of standardization. For example, the cable configurations used in many devices (especially older ones) often do not conform to any standard, and may in fact be essentially arbitrary. Trying to determine what type of cable to use can be very time consuming. The wiring and rewiring of cables and connectors is sometimes a necessary part of getting things to work. (The need to make up cable assemblies can itself lead to reliability problems – see page 455.) Furthermore, the communications protocols to be used with RS-232 are not well defined. Generally, it is not possible to get two devices to communicate over an RS-232 link by guesswork – it is necessary to find out exactly what is required. Having good documentation for the instrument being controlled is essential. (Sometimes, however, complete and comprehensible documentation is not available.)
13.4.5.2 Data errors on RS-232 links

RS-232 links are prone to transmission errors under certain conditions. For example, this can happen because the link (as implemented in a particular device) has no “handshaking” arrangement (“handshaking,” or “flow control,” is a provision in the interface hardware which ensures that the receiving device is sent information only when it is ready), so that a receiving device may ignore characters (perhaps because it is doing something else at the time) without the transmitter ever being aware of it. Also, RS-232 lines are unbalanced, and hence noise due to ground loops (see Section 11.2.1) and other effects can cause transmission errors in some circumstances [17]. This can be a particularly serious issue if the link terminations are widely separated. Ground-loop problems can be prevented by using “digital isolators.” A separate approach for eliminating noise problems generally, especially when long links are involved, is to use a balanced serial interface, such as RS-485. However, this interface is relatively uncommon on computers, in comparison with RS-232. Modules that convert between RS-232 and RS-485 can be obtained. (However, differences in wiring configurations may again be a problem, and one still has to deal with the other idiosyncrasies of RS-232.) It is also possible to get RS-485 PC expansion cards. Although the transfer of erroneous information can generally be prevented by using a good error-checking procedure (e.g. by performing a “checksum,” or using a protocol such as “Kermit” or “XMODEM” for file transfers), the achievable data-transfer rate may be lowered considerably, or even reduced to zero, if transmission errors occur. If such protocols are not used, the information may be corrupted. Even when using a checksum, there is a chance that errors will get through.
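As a rough illustration of this kind of error checking, the following sketch appends a one-byte additive checksum (the sum of the payload bytes, modulo 256) to each message, and verifies it at the receiving end. This is a deliberately minimal scheme of the author's own construction; real protocols such as XMODEM and Kermit use comparable or stronger checks (e.g. CRCs), and, as noted above, even a checksum can occasionally let an error through.

```python
def frame(payload: bytes) -> bytes:
    """Append a one-byte additive checksum (sum mod 256) to a message."""
    return payload + bytes([sum(payload) % 256])

def unframe(data: bytes) -> bytes:
    """Verify and strip the checksum; raise if the frame is corrupted."""
    payload, check = data[:-1], data[-1]
    if sum(payload) % 256 != check:
        raise ValueError("checksum mismatch - retransmission needed")
    return payload

msg = frame(b"T=4.2K")            # what the instrument would transmit
assert unframe(msg) == b"T=4.2K"  # a clean transfer verifies

corrupted = bytes([msg[0] ^ 0x01]) + msg[1:]  # one bit flipped in transit
# unframe(corrupted) now raises ValueError, flagging the corruption
```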
13.4.5.3 RS-232 driver issues

Another problem with RS-232 is that the driver software is often buggy.
13.4.5.4 Advantages of GP-IB and USB

Usually, the best approach is to avoid using RS-232, if possible. The IEEE-488 parallel interface (also known as GP-IB) is much easier to set up and operate. Unlike RS-232, GP-IB makes it possible to operate multiple instruments easily from a single computer port, without the need for any additional external electronics. This interface really does conform to a standard in all cases (at least with regard to the mechanical hardware and electrical protocol). All cable assemblies have the same configuration, and there is never any need to make up cables. As long as a few elementary precautions are taken, surprises are much less frequent than they are in the case of RS-232. Also, unlike RS-232 cable connectors (which are fragile D-types – see page 434), GP-IB connectors are relatively robust. An additional advantage of GP-IB over RS-232 is that handshaking is always provided, so that data-transmission errors are less likely. The GP-IB arrangement uses unbalanced lines, so that ground-loop-induced errors are possible, at least in principle. However, since the maximum separation between devices is only 4 m (in comparison with 15–30 m for RS-232 links), this is less likely to be a problem. In any case, GP-IB digital isolators can be obtained. The only major difficulty with GP-IB is the high cost of the cables and plug-in interface boards. Nevertheless, the cost is generally greatly outweighed by the advantages of this
interface over RS-232. It is possible to get relatively low-cost USB-to-GP-IB converters, which in many situations make it possible to avoid the use of GP-IB cables. Unless large communications distances are involved, exceptionally high data-transmission rates are needed, or a large number of devices (>15, including the computer) must be interconnected, one should make sure that all new instruments are provided with GP-IB capability. (It is possible to obtain devices that will greatly extend the range of, or increase the number of possible instruments on, a GP-IB network.) Devices that conform to the IEEE-488.2 (as opposed to the original IEEE-488.1) standard are particularly recommended. The IEEE-488.2 specification defines not only the mechanical hardware and electrical protocol, but also data formats, error handling, status reporting, controller functionality and certain common instrument commands. This helps to ensure that the interconnection of devices using this interface will not involve many uncertainties. If USB connections are available, this interface can also be a good alternative to RS-232. The USB serial interface is very easy to use, and shares many of the advantages of GP-IB. It is completely standard on computers and most computer peripherals, and uses inexpensive and widely available cable assemblies. Unfortunately, it is (at the time of writing) not a very common interface option on electronic instruments.
13.4.5.5 Cable-length issues

Whichever type of interface bus is used, the reliability of data transfer can be improved by keeping the cables as short as possible. This is especially important when high data-transfer rates are involved. Upper limits will exist on the length of the cable for any type of interface, no matter what transfer rate is used. If a long RS-232 cable is required, it is best to use a high-quality product comprising wires with a low capacitance [4]. In the case of GP-IB, not only is there an upper limit on the length of cable between any two devices (4 m), but there is also an upper limit on the total length of cable in the network. This is 20 m, or 2 m × the number of devices in the network, whichever is smaller. USB 2.0 cables should be no longer than 5 m.
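These GP-IB limits can be captured in a small checking function (a hypothetical helper, assuming a simple daisy-chain layout in which the number of devices is one more than the number of cable segments):

```python
def gpib_cabling_ok(segment_lengths_m):
    """Check a GP-IB daisy-chain against the limits quoted above:
    no single cable over 4 m, and total cable length no more than
    min(20 m, 2 m x number of devices on the bus).
    """
    n_devices = len(segment_lengths_m) + 1  # linear chain assumed
    total = sum(segment_lengths_m)
    return (max(segment_lengths_m) <= 4.0
            and total <= min(20.0, 2.0 * n_devices))

# A PC plus three instruments joined by 2 m cables: total 6 m against a
# limit of min(20, 2 x 4) = 8 m, so the layout is acceptable.
```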
13.4.5.6 GP-IB cable robustness issues

GP-IB cable and connector hardware concerns are discussed on pages 462–463.
13.4.5.7 References on RS-232, RS-485, and GP-IB

Some particularly useful sources of information on RS-232 difficulties and solutions are Refs. [16], [18] and [19]. More information can be found in Ref. [17] and Ref. [20]. References [17] and [19] also cover RS-485. GP-IB interfaces are discussed in Ref. [12]. The troubleshooting of GP-IB systems is covered in Ref. [21].
An overall discussion of data communications for instrumentation, covering several types of interface, can be found in Ref. [19].
13.5 Backing-up information

13.5.1 Introduction and general points

The regular and systematic backing-up of information on hard drives and other storage devices is, of course, an important cornerstone of operating a computer system. Data can be lost for many reasons, including [22]:
(a) accidental or deliberate overwriting or deletion of files, due to human error or malicious intent,
(b) the actions of viruses and other malware,
(c) faults in the operating-system or third-party software,
(d) failed software or hardware upgrades,
(e) mains-power problems, such as blackouts, brownouts and transient overvoltages (surges),
(f) abnormal hardware operation due to overheating or electrostatic discharge (ESD), and
(g) physical damage to storage devices and media.
The frequency with which backups should be done depends on the value of the information. For example, if important experimental data are being collected on a continuous basis, then daily backups are probably appropriate. The same holds true for anything (such as papers being written for publication) that could be characterized as personal intellectual property. Computer settings, email messages and Web browser bookmarks are other items that should be backed up regularly. Redundant backup copies should be made of the most important information. It is particularly important to back up before upgrading system software or hardware.
13.5.2 Some backup techniques and strategies

The use of an external hard drive allows easy backups of information on a computer’s internal drive. Some external drives are designed with backing-up in mind, and offer features that make them very convenient to use for this purpose. For example, some devices are provided with a special button that initiates a backup when pressed [23]. Commercial backup software is available that makes the backup process (to a hard drive, or to other media such as CDs) very straightforward, and possibly automatic. Since neglecting to back up can often be attributed to a lack of convenience, it is important to pay attention to this issue. Of course, backing up to a nearby hard drive may not help if a catastrophic event (such as fire, flood, theft, or a very large transient overvoltage) results in destruction or loss of the PC and its hard drives, and perhaps other devices in the surrounding area. Hence, it is also necessary to regularly make copies of important information on removable media, and
either place these in a fire-resistant safe (in particular, a “fire data safe”; waterproof safes of this type are available), or (better) take them to a secure location off-site. CDs or DVDs are convenient media for this purpose if the amount of information is not too large. Very large quantities of data (terabytes) can be stored on magnetic tapes. Portable hard drives can also be used for this purpose. (It turns out that iPods are well suited for making backups [14].) If a particular set of information is undergoing constant change (e.g. in the case of a scientific paper that is being revised), CDs or DVDs can act as an archive of the set at various stages in its evolution. The information on a backup medium should be verified after recording to ensure that the backup has been done correctly. It is often possible to do this automatically.
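Verification of a backup after recording can be sketched as follows (a minimal illustration, using a SHA-256 checksum to confirm that the copy is byte-for-byte identical to the original; commercial backup software performs the equivalent automatically):

```python
import hashlib
import shutil

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(src: str, dest: str) -> str:
    """Copy a file and confirm the copy matches the original.

    Returns the checksum, which can be stored alongside the backup so
    that the medium can be re-verified later.
    """
    shutil.copy2(src, dest)  # copy2 also preserves file timestamps
    digest = sha256_of(src)
    if sha256_of(dest) != digest:
        raise IOError("backup verification failed: " + dest)
    return digest
```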
13.5.3 Online backup services

One way of ensuring that copies of important information are stored at a safe off-site location is to make use of a commercial “online,” “remote,” “Internet,” or “managed” backup service. This involves the use of special software, which runs on the user’s computer. The software automatically collects the information on the computer according to some schedule. It then compresses and encrypts the information, and transfers it over the Internet to servers belonging to the company hosting the online backup service. Online backup services have the important benefit of greatly reducing the burden of ensuring that safe and secure backups of important information are made on a regular basis. The need to manipulate, label and transport storage media is eliminated, and backups take place without human intervention. It is even possible in some cases to have backups made continuously. Limitations on the maximum data-transfer rate over the Internet may reduce the usefulness of online backups when very large amounts of information are involved. The use of online backup services is discussed in detail in Ref. [14]. Strategies for backing-up and restoring information are discussed in Refs. [4], [11] and [14]. (Reference [14] mainly concerns magnetic-tape backups.)
13.6 Long-term storage of information and the stability of recording media

Hard drives should not be relied upon for long-term information storage [24]. They are typically not guaranteed to work for more than about three years, and the storage medium (the hard disk) cannot be used independently of the hard-drive mechanism and electronics. Problems such as degradation of the drive’s bearings can make the device unusable if it is put in storage for a long time. Hard drives are designed to be kept running, not switched off for very long periods (more than a few months) [25]. They are not intended for long-term information storage, and using them for this purpose amounts to an act of faith. It is much
better to use an arrangement in which the medium can be separated from the device used for writing and reading the information. Media such as CDs and DVDs are preferable from this standpoint. If they are stored correctly, the physical lifespan of good-quality write-once CDs and DVDs (CD-Rs and DVD-Rs) is likely to be 100–200 years or more [26]. (Such figures are based on accelerated aging tests by the manufacturers.) The most stable write-once CDs use a gold–silver alloy for the reflective layer, and a phthalocyanine dye for the data-recording layer [27]. “Archive-grade” CDs and DVDs are sold commercially. Such disks use pure gold for the reflective layer, and phthalocyanine dye for the data-recording layer. Disks with a gold reflective layer are to be preferred for very long-term information storage [26]. Even if great longevity is not the main objective, archive-grade disks are likely to be more stable in harsh environments than ordinary types. Rewritable CDs and DVDs (e.g. CD-RW types) are not as long-lived as the write-once kinds. Proper storage of CDs and DVDs involves not exposing the disks to extremes of temperature or humidity, rapid changes of temperature or humidity, direct sunlight or other sources of ultraviolet light, pollution, etc. [26]. CDs and DVDs should be stored in their shipping containers or purpose-made caddies. They should be kept upright, like a book, and not stored in a horizontal position for long periods (years). Adhesive labels should not be applied to the disks. Especially in the case of CDs, it is important not to scratch the label side of the disk, or to write on disks using pencils, pens, or fine-tip markers. Disks should not be written upon with solvent-based markers. The physical lifespan of properly stored magnetic tapes is at least 10–20 years [28]. (However, if they are regularly used, this figure is greatly reduced.) As with CDs and DVDs, it is important that magnetic tapes be stored in mild environments. 
Unlike optical media, magnetic tapes are susceptible to erasure and possible permanent damage by stray magnetic fields [4]. Hence, they should not, for example, be placed next to CRT computer monitors or power supplies. Magnetic tapes and tape drives, and their care, are discussed in Ref. [4].

Usually, a more important issue than physical degradation of the storage medium is obsolescence. Many types of storage media (such as 5.25 inch floppy disks) have long been phased out of common use. Hence, it can be difficult to find devices that will read them. Another potential problem is obsolescence of the way in which the data are stored on the media (the file format). In order to avoid such problems, it is desirable to migrate information from older storage media to newer ones as the latter become available. (One advantage of using optical media, such as CDs, is that machines designed to read newer media (e.g. DVDs) are also often capable of reading the older ones.) The use of commonly used and stable data formats is also desirable. For example, images stored as uncompressed PDF or TIFF files are likely to remain readable in the foreseeable future [29]. The same is true of text stored as uncompressed ASCII or RTF files.

Sometimes it is necessary to access information on an old storage medium, for which no reading device is available and/or which uses an obsolete data format. Specialist companies (“media conversion services”) exist that can transfer data between old and modern media, and convert between old and modern data formats.
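As a simple illustration of the stable-format approach, data can be written as uncompressed, self-describing ASCII text. The following Python sketch (the file name, column headings, and values are arbitrary examples) stores readings as comma-separated plain text, a form that future software is very likely to be able to read:

```python
import csv

def archive_readings(path, readings):
    """Write (time, value) pairs as plain, uncompressed ASCII text."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time_s", "voltage_V"])  # self-describing header line
        for t, v in readings:
            writer.writerow([f"{t:.6g}", f"{v:.6g}"])

archive_readings("run_042.csv", [(0.0, 1.23), (1.0, 1.25)])
```

The resulting file can be opened with any text editor, which is precisely the point: no proprietary reading software is required.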
Computer hardware and software
13.7 Security issues

13.7.1 General points

The viruses, Trojan horses, spyware, and other infectious diseases of the computing world (i.e. malicious software, or “malware”) are often a source of problems. So are efforts by hackers to gain unauthorized access to computers over the Internet. Such problems may take the form of direct damage to files or disruption of computer operation, or of the burden of ensuring that security software is up to date and running correctly, and that secure operating procedures are used.

Security is a particular concern with Windows-based PCs. In fact, an excellent way of largely avoiding security difficulties is to run Mac OS or Linux. (This is not intended to imply that one should ignore proper security measures when using such systems. It just means that security issues are much less of a nuisance and source of worry with Mac OS and Linux.)

A very important aspect of protecting computers from security threats is to apply patches to the operating system promptly as they are released by the vendor [30]. This closes security holes that could be exploited by malware or Internet attackers. Attacks by malware or hackers often make use of vulnerabilities for which patches have long been available. Third-party applications may also have security holes that can make an otherwise secure computer vulnerable to such problems. Hence, updates for applications should be installed when they become available. This is particularly important in the case of applications that are in very widespread use.
13.7.2 Viruses and their effects

Computer viruses are small pieces of software that have the ability to replicate [11]. Like ordinary viruses, they can spread. This can occur over the Internet, possibly as a consequence of downloading a file from a Web site or opening infected email attachments. Another route is through storage media, such as CDs. Viruses are also often attached to freeware, shareware, and pirated software. Internet-based peer-to-peer (P2P) file-sharing networks are yet another avenue for bringing viruses into a computer. Macro viruses are a class of virus that can be embedded in the macros that accompany Microsoft Word or Excel documents to automate tasks. These may cause problems even if Mac OS is being used [6].

Most viruses can cause some form of damage to the computer system. Usually this takes the form of software damage, such as corrupting or deleting information (which may possibly be an essential part of the PC’s operating system), or filling the hard drive with worthless files. However, it is also possible (although unlikely) for viruses to cause damage to hardware. This may be done, for example, by overwriting the computer’s BIOS chip [31]. As a result, it may be necessary to replace the computer’s motherboard. (An outbreak of such a virus has been documented.) An alternative possibility is to repeatedly operate some
mechanism in the computer until it wears out. The latter form of damage is probably very rare. Some viruses attack and disable antivirus software. A few viruses are harmless, and some have even been written to fight other viruses or repair security holes in operating systems.

A large fraction of virus reports, spread by email and word-of-mouth, are hoaxes [6]. If they are followed, the instructions that often accompany such reports may cause as much harm as actual viruses [11]. It is important to find out from a reliable source whether the information is genuine before taking any action.
13.7.3 Symptoms of virus infection

Some symptoms of the presence of viruses are [11]:
(a) unusually slow computer operation,
(b) unexpectedly high levels of computer activity,
(c) occasional freezing of the operating system or other software,
(d) the computer does not boot properly,
(e) unexpected rebooting of the operating system,
(f) strange error messages from the operating system or antivirus software,
(g) antivirus software will not start up or continue to run.
13.7.4 Measures for preventing virus attacks

Although a good IT department (and its associated software and hardware infrastructure) in a laboratory facility can be invaluable in reducing malware and Internet attacks, it is not a complete solution. Active measures on the part of individual computer users are also very important. Furthermore, it is often necessary to work at home, or in some other location that is not protected by a laboratory’s security arrangements.

Especially if Windows is being used, it is essential to install antivirus software from a reputable vendor, to scan hard-drive files regularly (perhaps once a day), and to ensure that antivirus software is updated frequently. Most antivirus software can automatically update itself with new information on viruses (from the software company’s website), as it becomes available. It may be necessary for the user to manually enable the “automatic updates” feature. Antivirus software from major vendors often includes facilities for detecting and removing other security threats and nuisances, such as spyware and adware.

The continuous monitoring function offered by some antivirus software (“real-time scanning” or “auto-protection”) is useful in principle. However, in practice it can cause problems such as interfering with the operation of other programs, raising false alarms, and making it difficult to install software or hardware [10]. Hence, it may be best not to use this feature.

Other important measures for preventing virus infections include [10]:
(a) not opening attachment-bearing email that has been sent by unknown parties,
(b) ensuring that any email attachments are scanned by antivirus software before being opened,
(c) downloading files only from trusted web sites, and scanning them before opening them,
(d) being careful about mounting doubtful CDs or DVDs in a PC, and scanning any such before doing so.
13.7.5 Network security

Unauthorized access to a computer over a network is another security issue. A firewall is a piece of software that prevents this, while allowing legitimate communications to take place. Firewalls are another essential part of computer security. Although a firewall is often provided with the operating system, it may be possible to get a significantly better one from a third-party supplier [30].

While important and generally effective, firewalls can actually cause problems if they are not set up properly. For example, an incorrectly configured firewall might prevent a legitimate file transfer from the computer to a remote printer. Some balance between protection of the PC and access to the network is needed. When a computer is initially connected to the network, a useful approach is to switch off the firewall until it is clear that the connection is functioning correctly, and then carefully re-enable its various functions (increasing the security level) one at a time [10].

Security issues are discussed in detail in Refs. [30] and [11], and online at Refs. [32], [33] and [34].
13.8 Reliability of commercial and open-source software

13.8.1 Avoiding early releases and beta software

Most commercial software that has just been released to the public contains an inordinate number of bugs and security holes [11,35]. A major reason for this is that such software tends to be released prematurely, before it has been properly debugged. This is because of very strong commercial pressures in the software industry to bring a product to market as quickly as possible.8 Hence, it is usually a good idea to avoid “Version 1.0” of any software product, and to wait at least until the first bug-repair upgrade takes place before buying it [36]. As bugs in software are found and removed, a dramatic reduction in failure rates can occur over relatively short times. This is vividly illustrated in Fig. 13.1 [37].

Pre-release versions of software (i.e. “beta software”) should be especially avoided [10]. Such software is effectively still in a part of its developmental phase, in which members of the public are enlisted to weed out bugs. Beta software has a high potential to cause major problems, such as the destruction of a PC’s BIOS, and the corruption of information on hard drives.

8 The inherent complexity of software is also responsible. Newly released hardware does not have this problem to nearly the same degree (see Ref. [53]).
[Figure: failures per 1000 hours (logarithmic scale, 1–1000) plotted against age of product type (six-month periods, 1–16).]
Fig. 13.1
Average computer-system failure rates due to bugs in a new kind of operating system as a function of time (data are from Ref. [37]). The failure rate diminished as the bugs were found and corrected. New upgrades produced large (up to 10×) but temporary failure-rate increases, and the effects of these have not been included.
13.8.2 Questions for software suppliers

The software vendor should be able to provide a list of known bugs (sometimes euphemistically referred to as “issues”), and any workarounds for these, to prospective customers. Information on bugs that may not have been reported by the vendor can sometimes be found on the Internet, in newsgroups, product support groups, and other websites.

Software that has a small base of users may be significantly more bug-prone than software that has a large one (in the thousands or more), because in the latter case there is more opportunity for bugs to be detected and reported. The number of users should be available from the vendor [36].

The vendor can be asked what development process was used to create the software [36]. That is, was the software created by using an informal and ad-hoc method, or was a well-recognized software engineering strategy employed? An example of the latter is provided in the IEEE-12207 standard for software development.
Before buying software, it may be worth finding out if the vendor is willing to support older versions when a new one comes out. Research workers may have to continue using old software for many years. Furthermore, older versions of software are frequently more stable than ones that have just been released. It is often found that vendors will support only the latest versions.
13.8.3 Pirated software

“Pirated” or “counterfeit” software can be a source of trouble (aside from the ethical and legal problems arising from the use of it). The modifications that the software pirates make to legitimate code in order to bypass its security measures may render it prone to crashing [11]. Furthermore, one generally cannot obtain technical support from the developer for such software. The pirates may also remove features that normally come with the product, as well as the instruction manual and any help files, in order to minimize the total file size. Moreover, viruses or spyware sometimes accompany pirated software. The issue of counterfeit products in general is discussed on page 120.
13.8.4 Open-source software

The term “open-source software” refers to software in which the source code is freely available. This code is in a high-level, human-readable language such as C++, and is amenable to inspection, modification, and correction by anybody. (Most commercial software, on the other hand, is “closed source.” It is provided in the form of low-level object code, or “machine code,” which cannot be easily read and modified.) Two examples of open-source software are Linux and the FreeBSD version of UNIX.

It is widely believed that open-source software is generally more reliable than the closed-source software that usually comes from commercial vendors. The reasoning behind this belief is that relatively small commercial development teams cannot match the large numbers of people (perhaps thousands, in some cases) who often participate in open-source software development. It is held that these large groups of people inspect the code more thoroughly, and find and fix bugs more quickly than would otherwise be the case. Supporters of open-source software also claim that the people who work on such software tend to produce better quality code, partly because they are not under the commercial pressures to push something into production regardless of whether it has been adequately debugged. (Not all commercial software is closed-source – some companies develop software using an open-source model.) The sociology of open-source software development has been described in Ref. [38].

This belief about the high quality of open-source software has been supported by an empirical study [39], which indicates that bugs may indeed be eliminated more quickly from open-source software than from software made according to a closed-source model. The claimed superiority of open-source software is not universally accepted, however, and there are many critics who hold that it is no better than software made using a closed-source approach.
13.9 Safety-related applications of computers

If a system that is under computer control is vulnerable to being placed in a state that could lead to damage or personal injury, it may be very tempting to use the computer to detect this condition and ameliorate it. For example, if the pressure in a vessel is being monitored by a computer, which is also capable of controlling it by operating a valve, it may seem natural to use the arrangement to automatically detect and correct dangerous overpressures. There is nothing wrong with using such a scheme. However, there must always also be a basic safety device based on some elementary physical principle (e.g. a spring-operated pressure relief valve), which will prevent catastrophe in the event of a failure of the computer’s software or hardware. In other words, although computers can be used as a first line of defense against safety hazards, they should always be accompanied by a backup arrangement based entirely on simple hardware. This issue is also discussed in Section 8.2.4.
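The scheme just described can be sketched as follows (in Python, purely for illustration; the routine names read_pressure and open_vent_valve are hypothetical stand-ins for real instrument-driver calls). The essential point is recorded in the comments: this software check is only the first line of defense, and a mechanical relief valve must still be fitted.

```python
# Software trip point, set BELOW the setting of the mechanical relief
# valve, so that the software normally acts first.  (Value is an
# arbitrary example.)
P_MAX = 2.0e5  # Pa

def check_overpressure(read_pressure, open_vent_valve):
    """First line of defense: vent the vessel if the pressure is too high.

    A spring-operated relief valve must still be present, since the
    computer, its operating system, or this program itself may fail.
    """
    p = read_pressure()
    if p > P_MAX:
        open_vent_valve()
        return True   # software trip occurred
    return False

# Simulated use, with dummy routines in place of real drivers:
tripped = check_overpressure(lambda: 2.5e5, lambda: None)
```

In a real system this check would run in a polling loop alongside data acquisition; the hardware relief valve guards against the loop ever failing to run.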
13.10 Commercial data-acquisition software

13.10.1 Data-acquisition applications and their properties

The control of instruments, the collection of data, and the preliminary processing and display of those data are often done by using a self-contained commercial application. Perhaps the most widely used software of this type is LabVIEW [40]. Others include, for example, Agilent VEE [41], LabWindows/CVI [40], and DASYLab [42].

A common feature of data-acquisition applications in general is that they have their own internal (often proprietary) programming language, which makes it possible for the user to specify in detail how the measurements are to be carried out. These applications also contain libraries of program-callable routines that make it very easy to communicate with a variety of commercial instruments, such as voltmeters, lock-in amplifiers, and signal generators. The real-time user control of the measurement system, and the display of information, is carried out through software-based controls and displays, which can be completely and easily customized by the user to suit a particular purpose. These capabilities, taken together, make it possible to readily combine the functions of a number of hardware instruments (such as voltmeters) to form, in software, instruments with unique abilities. These are sometimes referred to as “virtual instruments” (see the discussion in Section 5.7).
13.10.2 Graphical languages

From the point of view of software reliability, the type of programming language that is built into these applications is an important issue. This is because any bugs that arise during the use of an application are most likely going to be the result of programming errors by the user (and probably not because of some intrinsic flaws within the application). As in the
case of LabVIEW and Agilent VEE, these programming languages are sometimes graphical in nature, rather than text-based (as in the case of ordinary languages such as C). Boxes that represent data, or some logical operation or procedure, are joined together with lines (“wires”) to create the code. The resulting programs look very similar to flowcharts, or the block diagrams used during the design of electronic circuits.

While it is sometimes claimed (perhaps implicitly) that these graphical programming languages in some sense eliminate the need for programming, at the most fundamental level this is not really true. Graphical programming languages contain elementary constructs (such as conditionals and loops) that are conceptually the same as the ones used in ordinary text-based programming languages. The nature and use of these constructs must be learned before a program can be written in a graphical, or any other, programming language.
13.10.3 Some concerns with graphical programming

Graphical programming languages such as the one used in LabVIEW are best suited for writing very simple data-acquisition programs. For such purposes, they are very useful and can simplify many things, especially for those without programming experience. However, as they increase in size, programs written in such languages tend to become disorderly and hard to interpret much more easily than those written in text-based languages. As a result, large graphical programs are inclined to be relatively difficult to debug and maintain – especially if someone else has written them.

A problem with writing programs using graphical programming languages is that much foresight is needed during the layout of the program and the formation of connections between the various functional boxes [43]. If insufficient attention is given to this, it is difficult to prevent the wiring from forming a kind of “visual spaghetti”9 that leads to confusion, and increases the likelihood of introducing bugs. The presence of such disorderly code also makes it difficult to maintain the program at a later time. A compounding difficulty is that making local changes to software written in such languages tends to be hard. This means that rearranging the boxes and wires in a program in order to improve its layout and minimize wire crossings can be very laborious.

Another problem with graphical programming languages is that they make it necessary to perform hard mental operations while writing the code [43]. This is a particular problem when control constructs are involved. Following the paths of the wires as they wind their way around the program, and making sense of the logic thereby represented, places large loads on working memory. Such a burden can be an important source of mistakes (see the discussion on reducing human error on pages 17–18).
An additional complication with graphical programming is that it is relatively difficult to convert algorithms that take the form of pseudocode (see pages 514–516) into graphical programs. Finally, providing effective documentation (in the form of comments) is relatively difficult, since documentation is inherently text-based. It therefore fits naturally into the flow of a normal text-based program, but not a graphical one.

9 The term “spaghetti code” is often used when referring to disorderly programs written in an ordinary text-based language.
A detailed analysis of the usability of graphical programming languages, including in particular LabVIEW, is presented in Ref. [43].
13.10.4 Choosing a data-acquisition application

If:
(a) the data-acquisition program being written is relatively small, or
(b) programming time is not an issue, and it is not too important whether the program can be understood and maintained or extended (especially by other workers) in the future,
then a data-acquisition application that employs a graphical programming language is often the best choice. However, in other situations it is preferable to use an application that employs a normal text-based language. One such is LabWindows/CVI. This provides all the convenient instrument-control routines and graphical user interface capabilities of LabVIEW, but is controlled by programs written in C. The latter is a general-purpose programming language, like Fortran, which is in very widespread use.

Such an approach has an additional advantage. By making use of the vast number of reliable routines that have already been written in C for general computing purposes, it may be possible to avoid writing some of the code in certain cases. This principle of code reuse is an important one in creating reliable software. (Although in fact, most general-purpose data-acquisition applications have extensive built-in routine libraries.)

An unfortunate aspect of C is that this language makes it particularly easy to commit programming errors (in comparison with many other languages, such as C#, Fortran, and Java). The results of such errors can sometimes be very strange. The use of good programming practices, and the application of a syntax and semantics inspection utility (such as “splint” – see page 525) to the code, are therefore very important when programming in C.

If a program is to be written in a graphical language (and especially if it is to be a large one), it is particularly important to pay attention to program structure and programming style, in order to avoid the problems mentioned above. This issue is discussed on page 518.
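The code-reuse principle can be illustrated very simply (in Python rather than C, for brevity): well-tested library routines are used in place of freshly written, and possibly buggy, numerical code. The readings below are arbitrary example values.

```python
import statistics

# Hypothetical voltmeter readings (volts)
readings = [4.98, 5.01, 5.00, 4.99, 5.02]

# Reuse the standard library's well-tested routines rather than
# hand-coding the mean and standard deviation:
mean = statistics.mean(readings)
sigma = statistics.stdev(readings)  # sample standard deviation
```

The same idea applies in C through its extensive collection of existing numerical libraries.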
As in the case of operating systems, it is a good idea to examine several of the commercial data-acquisition applications before making a purchase. One should not assume that an application that happens to be in very widespread use is best suited for a particular purpose. Even amongst the applications that employ a graphical language, there are significant variations in complexity and ease-of-programming. Discussions of data-acquisition software can be found in Refs. [44] and [12].
13.11 Precautions for collecting experimental data over extended periods

If experimental data are to be gathered continuously over extended periods, the long-term stability of the computer system becomes very important. It often happens that the computer
or its software will fail in the middle of an experiment, perhaps greatly reducing or even nullifying the value of the results. (This can also be a problem if very lengthy numerical calculations are being carried out.) Potential trouble sources include the operating system, drivers, viruses, hardware (e.g. overheating), mains power, etc. – see Section 13.2.2. The stability of the data-acquisition application and programs written within it is also an important issue. It is a good idea to minimize the number of applications (foreground and background) that run concurrently during data collection. Steps should be taken to ensure that sufficient memory is available for the data that is to be collected, given the resources required by all the applications that will be open.
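A further precaution along these lines is to write each data point to disk as soon as it is taken, rather than accumulating everything in memory. The following Python sketch (the file name and record format are arbitrary examples) appends each reading to a file and forces it through the operating system's buffers, so that a crash or power failure loses at most the point currently being written:

```python
import os

def log_reading(path, t, value):
    """Append one (time, value) record and force it onto the disk."""
    with open(path, "a") as f:
        f.write(f"{t}\t{value}\n")
        f.flush()                 # flush the application's buffer
        os.fsync(f.fileno())      # push the OS cache out to the disk itself

# Simulated acquisition loop with dummy values:
for i in range(3):
    log_reading("datalog.txt", i, 10.0 + i)
```

Forcing a sync on every point is slow; for high data rates one might instead sync every few seconds, accepting the loss of at most that interval of data.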
13.12 Writing software

13.12.1 Introduction

In the case of hardware, reliability can be substantially affected by factors that may be outside the control of the designer, such as: manufacturing defects, transport damage, wearout or aging of components, or adverse environmental conditions (such as high temperature or humidity levels). This is not the case with software. Although it is possible for a program to fail because of some fault with a computer’s hardware, or the operating system, or the compiler, almost all program failures are in fact caused by mistakes made by the designer (i.e. the programmer) [1,45]. That is, such faults are primarily caused by errors made during the creation of the code requirements, the design, and the construction of a program. Software does not wear out or age. However, very old code may be more failure-prone, because new errors can be introduced as the original code is upgraded to enhance its function [1].

Writing a program carelessly, in the expectation that any errors can be eliminated later on during the testing and debugging phases, is likely to lead to serious problems. The general complexity of software works against this kind of approach. Usually, unless the program is very small, reliable code can be created only by taking appropriate steps to ensure its presence from the very beginning. Furthermore, debugging is the most difficult part of creating software [45,46]. Debugging a program can easily take several times as long as it did to write the program in the first place. It often happens that new bugs are introduced into a program during attempts to fix existing ones. The total number of bugs in the program may actually increase during a debugging session. For these reasons, the use of suitable software construction methods is very important, in order to ensure that bugs are not introduced unnecessarily when the program is initially created.
Before embarking on a significant programming project, it is always worthwhile to see if some software already exists that performs the required task. For example if numerical calculations of physical phenomena are needed, it may be possible to use one of the many commercial applications that are available for this purpose. (These are sometimes referred
to as “multiphysics packages”.) Such software requires little or no programming on the part of the user. For example, programs exist to solve the partial differential equations of: fluid flow, the propagation of electromagnetic waves, heat transport, sound and structural vibrations, and other user-defined differential equations. Similarly, numerous applications are available for analyzing experimental data. Another approach is to use a commercial integrated numerical environment. These applications (which are sometimes referred to as “computer algebra systems”) generally have many built-in and add-on capabilities for symbolic and numerical analysis [47,48,49].

The subject of writing reliable software is a very large one, and the following sections can only touch on some of the issues involved. For more information, the reader should consult the references contained herein, and in particular Ref. [45].
13.12.2 Planning ahead – establishing code requirements and designing the software

13.12.2.1 The need for preliminary steps

Writing software can be thought of as a kind of construction effort – similar to building a house [45]. In the case of house building, one would not normally start construction without first deciding what needs to be done (and writing it down) and drawing up a plan. Such a plan would take the form of structural drawings (blueprints). Similarly, except in the most trivial cases, the creation of written requirements (what it is exactly that the software is expected to do) and plans (the overall design of the software) should precede the line-by-line coding of a program. The importance of these preliminary measures increases with the size of the software.

A program with no written requirements or plans is likely to grow in a piecemeal fashion, with many special cases and loopholes included as needed to account for afterthoughts and unforeseen problems. The result, especially in the case of medium- or large-scale software, will probably be a disorderly, hard-to-understand, and unstable program. The lack of written preliminaries of this kind is frequently a problem when data-acquisition programs are created, since these are often made on the spur of the moment.

Usually, it is much easier to correct problems at the early stages of software development rather than later on, so the careful creation of code requirements and plans is very important. These do not necessarily have to be highly detailed and polished, especially for simple projects involving well-understood principles. A few informal diagrams and notes may suffice in such instances [45]. In cases of doubt, it is better to err on the side of greater detail.
13.12.2.2 Code requirements

The written code requirements should comprise a complete and clear specification of all the tasks that the program is supposed to perform [1]. It is also desirable for these to state explicitly the things that must not take place during the operation of the software (such as the generation of particular program outputs that might be dangerous in some way). It may
be necessary to produce written documentation for the program, which tells a user what the program does, and how it is to be operated. If so, the documentation should be created before the program is coded, so that the program is modeled after the documentation, rather than the other way around.

Fig. 13.2: A large program is made manageable by subdividing it into a hierarchical array of small modules (e.g. routines). The main program is at the top of the hierarchy, and routines occupy positions further down.
13.12.2.3 Overall code design and the importance of modularization Architecture An important aspect of the design of a program is deciding how it is to be subdivided into component parts. The main goal of this process (and a central goal of all aspects of software design) is to minimize the complexity of the things that one has to deal with by hiding unnecessary information. Thus, a complicated program can be broken down into relatively simple modules, each of which hides some details of the programming problem from other parts of the code [46]. (See the discussion of modularization in a general context on pages 3–4.) Normally (especially in the case of software that is not too complicated), this process of subdividing a complex program involves decomposing it into a tree-like hierarchy of modules (see Fig. 13.2). Development proceeds from the top-down. That is, those parts of a program that deal with high-level (i.e. very general and abstract) information are created first, and modules that handle information at a lower level (i.e. involving more concrete ideas, and a greater number of details) come later. Each module is specified, designed, constructed, tested and debugged separately before being added to the rest of the program [50]. The main module at the top of the tree is specified and designed first. As its requirements become clear, lower-level modules can be added beneath it on the tree, and designed in their turn [50]. This process continues down to the lowest-level branches in the diagram. When procedural programming is being done (as is normally the case when small programs for laboratory use are written) these modules take the form of routines.10 Routines at the top of the hierarchy make calls to others on lower-level branches. 10
10 These are also known as subroutines, or subVIs, in the case of LabView programs.
13.12 Writing software
513
Object-oriented methods

Object-oriented programming is a different technique, which is particularly well suited to complex programming tasks. It is especially useful if bookkeeping within a program is likely to be very burdensome, or if many people are involved in its development or maintenance. Complex programs written using an object-oriented approach also tend to be easier to debug than they would be if a procedural method were used.

In the case of object-oriented programs, the main type of module is the class. A class is an abstract data type that contains routines as well as data (i.e. it has behavior and state). Generally, classes are designed so as to mirror real-world or synthetic objects. An example of this in physics would be a particle class, which has mass, energy and momentum, and which interacts with other particle classes according to certain laws.11 This direct correspondence between real-world objects and classes is a very useful and important aspect of object-oriented programming. While classes can also be arranged in a hierarchy, if the program is sufficiently complex, they are often arranged in a network. Communication between classes takes the form of message passing rather than calling [51].

Some programming languages, such as C, are best suited for procedural programming (although, with effort, they can be used for object-oriented programming). Others, such as C++, can readily be employed to carry out either object-oriented or procedural programming. Some, such as C#, are designed in such a way that only object-oriented methods can be used.

Although object-oriented programming is a very powerful tool for handling complex programming problems, it is more difficult to learn than ordinary procedural programming. Furthermore, if object-oriented programs are not designed correctly, they can actually increase the complexity of software [36]. Most programming tasks that are encountered in laboratory work can be handled by procedural methods.
Therefore, the emphasis in this chapter will be on procedural programming, and those aspects of programming that are common to object-oriented and procedural approaches. A good introduction to object-oriented programming can be found in Ref. [52]. Object-oriented design issues are discussed in detail in Ref. [45].
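The "particle class" idea mentioned above can be sketched briefly. The following is a minimal illustration in Python (the class name, attributes, and the non-relativistic formulas are invented for this example, not taken from the text):

```python
class Particle:
    """A class mirroring a real-world object: it bundles state with behavior."""

    def __init__(self, mass, velocity):
        # state: the data carried by each particle object
        self.mass = mass
        self.velocity = velocity

    def momentum(self):
        # behavior: p = m * v
        return self.mass * self.velocity

    def kinetic_energy(self):
        # behavior: E = (1/2) m v^2 (non-relativistic)
        return 0.5 * self.mass * self.velocity ** 2
```

Code elsewhere in the program manipulates `Particle` objects through these routines, without needing to know how the state is stored internally.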
Properties of routines

In general, routines should be loosely coupled, with the minimum possible number of interconnections between them [45]. The internal complexity of a routine should be greater than the complexity of its interconnections [53]. To this end, the program should be split up at places of minimal interconnectedness (like splitting a piece of wood along the grain) [45].

Another consideration is to avoid high fan-out [45]. Fan-out refers to the number of routines that are called by a given routine. If the fan-out is too high (greater than about seven), the routine may be excessively complicated.
11 NB: The term “class” is used loosely in the present context.
Computer hardware and software
514
Routines should not perform numerous related, but different, functions [53]. This often obscures their internal logic. Instead, a routine should perform only a single function [46].

Routines that are allowed to get too large can be incomprehensible and buggy. Such routines should be broken up into smaller ones. Routines should generally comprise between about 10 and 100 executable (i.e. non-comment and non-blank) lines of code [53]. In almost all cases, they should not contain more than about 200 lines [45]. It is often helpful to put even short sections of code (e.g. two or three lines) into their own routines. This frequently makes the code easier to read.
Re-use of existing code and algorithms

An important aspect of writing reliable software involves the re-use of code that has already been developed, and is known to work dependably. In particular, routines for doing many types of operation are available in commercial and free program (or code) libraries. These operations include: numerical calculations (e.g. Fourier transforms), graphing and charting, sorting and searching of lists, file manipulation, data compression, and much more. In following this route, one can take advantage of the experience of programmers who specialize in a particular type of software. Furthermore, it can be expected that the routines will have been thoroughly tested through exposure to a large number of other users. It is a good idea to be familiar with the kinds of routines that are available in program libraries. Some high-quality commercial software for doing numerical calculations (according to Ref. [54]) can be found at Refs. [55] and [56]. Free software of this type is available at Refs. [57] and [58].

Even if a routine for performing a given operation is not available in the desired language, one should try to avoid re-inventing algorithms. A well-established algorithm (perhaps expressed in another language, or a pseudocode) might already exist. These are often found in books, journals, etc. The practice of devising one’s own algorithms sometimes has dangers, which should not be treated lightly. This is especially true of those used for doing numerical calculations, in which roundoff, and other effects, often lead to serious errors. Even creating an algorithm for doing what appears to be a very simple task can involve difficulties. A good example of such a do-it-yourself approach involves the naïve application of the definition of the derivative12 in a program intended for numerically differentiating a function or a sequence of data.
If it is used without understanding the issues involved, this method will almost certainly give incorrect results [54]. (See also the discussion on page 41.)
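The hazard can be made concrete with a short sketch (in Python; the example is invented). The "textbook" forward difference is fragile, because the step size h trades truncation error against floating-point roundoff; a central difference with a carefully chosen h is far more robust:

```python
import math

def forward_diff(f, x, h):
    # Naive application of the definition of the derivative.
    return (f(x + h) - f(x)) / h

def central_diff(f, x):
    # A more careful rule: central difference, with h chosen near the
    # commonly quoted optimum of (machine epsilon)**(1/3) * |x|.
    eps = 2.0 ** -52  # double-precision machine epsilon
    h = eps ** (1.0 / 3.0) * max(abs(x), 1.0)
    return (f(x + h) - f(x - h)) / (2.0 * h)
```

With h taken far too small (e.g. 1e-17), the forward difference returns 0 for the derivative of sin at x = 1, because x + h rounds back to x; the central-difference sketch agrees with cos(1) to many digits.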
13.12.3 Detailed program design and construction

13.12.3.1 Use of pseudocode for detailed design

At a more detailed coding level (within a routine), program design often involves the use of either flow charts or pseudocode. Flow charts (as shown in Fig. 13.3) are sometimes used
12 That is, f′(x) = lim_{h→0} [f(x + h) − f(x)]/h.
Fig. 13.3 Flowcharts of standard types of construct used in structured programming (see pages 518–519): (a) sequence, (b) iteration (for-do), and (c) selection (if-then-else). In the case of iteration and selection, the specific examples shown are indicated in parentheses. (See Ref. [60].)
to design very simple software. However, drawing the lines and structures that comprise them can take a considerable amount of time. Making modifications can also be labor intensive. For these reasons, pseudocode is generally the preferred design format for most programming tasks. Pseudocode consists of lines of text, written informally in a natural language such as English, which describe the logic of a program. It is written at an intermediate level of abstraction, so that each line of pseudocode may correspond to several lines of actual
computer code.13 The text is augmented, where necessary, by standard mathematical notation. Unlike flowcharts, pseudocode can be created and modified easily using an ordinary word processor. Pseudocode (as the name implies) cannot be compiled or interpreted by a computer. After the design has been worked out, the pseudocode description must be converted into a program written in a conventional programming language (such as C or Fortran) by a human.

Pseudocode is intended to be an intermediate-level description of the logical structure of a program in an easily understood form. Hence, it should be free of variable declarations and syntactic elements of the intended programming language. It should be possible to convert well-written pseudocode into a program written in any text-based language. Pseudocode should describe the intent of the code [45]. That is, it should indicate what the program should do, rather than precisely how it will accomplish this task.

The pseudocode should be written at a level of abstraction that is close to that of the final computer code. If it is written at an excessively high level (with one line of pseudocode corresponding to many lines of computer code), the pseudocode may not include important details that will be troublesome in the final code. If the pseudocode has this problem, it can be iteratively refined. The details should be filled in, and the pseudocode statements made more precise, until the point is reached where it becomes easy to convert it into computer code.

As in the case of computer code, techniques such as indenting the pseudocode text, and inserting white spaces and blank lines in appropriate places, can be used to highlight logical structures and improve readability.

If bugs are discovered in a program when it is run on the computer, corrections are made first to the high-level design (as illustrated in Fig. 13.2), then to the pseudocode, and finally to the program itself.
An important benefit of doing intermediate-level design work in pseudocode is that the result can be left in the program to act as documentation (i.e. comments). The design of software using pseudocode is discussed in Ref. [45].
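The workflow described above can be sketched briefly (in Python; the running-average task and routine name are invented for illustration). Each pseudocode line has been refined into code and then retained as a comment:

```python
def running_average(values):
    # -- pseudocode: if there are no values, report an error to the caller
    if not values:
        raise ValueError("no data supplied")
    # -- pseudocode: accumulate the sum of all the values
    total = 0.0
    for v in values:
        total += v
    # -- pseudocode: divide the sum by the number of values and return it
    return total / len(values)
```

Note that the comments describe intent at a level just above the code, rather than echoing it line by line.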
13.12.3.2 Pair programming

If two people are available to write code, the pair programming method can be a very effective way of improving program quality and readability [45]. Both people sit together in front of a computer. One person (the “driver”) operates the keyboard and creates the code, while the other (the “observer”) monitors his or her activities – looking out for mistakes, considering ways of simplifying the code, and thinking ahead about what needs to be done. Periodically (perhaps every 15 min or so), the driver and the observer exchange roles. It is essential to ensure that the observer is actively involved in the programming process, and does not become a passive bystander as the driver creates the program.
13 NB: Flow charts should also have this property – they should not merely echo the actual computer code, with all its details, in a graphical form.
Programmers working in pairs are more resistant to pressures to produce “quick and dirty” software than those who work singly. Furthermore, with two pairs of eyes on the screen, many mistakes are readily detected. The simplicity of the code is also enhanced. This is because both programmers must understand it, and there is therefore less of a tendency to write unduly clever constructs (see below). The method also reduces the amount of time needed to write code.

It may not be necessary to use this method to create the entire program. Pair programming can be reserved for those parts of the code that are most difficult, while other parts are written in the normal way by a single programmer.
13.12.3.3 Excessively clever code constructs

A major goal in software development is to make programs easy to read. Program material is generally read much more often than it is written [45]. This is true even during the initial construction process. Hence it is essential to ensure that convenience in writing software does not take precedence over one’s ability to read and understand it. To this end, code should not be too clever. Arcane tricks used to make code easier to write, more compact, or faster in execution, should be avoided. On the other hand, going too far in the other direction (i.e. making the code very lengthy in an attempt to avoid obscurity) can also reduce readability [54]. A good middle way is to write the code at a level of cleverness just slightly below that at which obscurity prevents easy comprehension.
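The contrast can be illustrated with a small invented example (in Python): two versions of a routine that counts how many readings lie outside an allowed range. Both are correct, but one must be unpacked mentally while the other reads from top to bottom:

```python
def count_out_of_range_clever(readings, lo, hi):
    # Compact, but the reader must decode the whole expression at once.
    return sum(map(lambda r: (r < lo) + (r > hi), readings))

def count_out_of_range_clear(readings, lo, hi):
    # Slightly longer, but the logic is immediately visible.
    count = 0
    for r in readings:
        if r < lo or r > hi:
            count += 1
    return count
```

The clear version is also easier to modify safely later (e.g. to report which readings were out of range).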
13.12.3.4 Programming style

General points

It is also important to adopt a clear, consistent, and (preferably) widely accepted programming style. Such a style should involve, for example:

(a) using the standard elementary flow-control constructs, and avoiding the use of statement labels and GOTOs, if possible (i.e. structured programming – see pages 518–519),
(b) using pre-existing routines contained in program libraries whenever possible,
(c) avoiding the use of too many levels of nested ifs and loops (i.e. “deep nesting”), which can lead to confusion – limit the number to three or perhaps four levels at the most,
(d) establishing a convention for naming variables and routines, and using clear and descriptive names (see pages 519–520),
(e) replacing repetitive expressions with calls to a common function or routine,
(f) indenting code and using white spaces and blank lines to highlight logical structures,
(g) using parentheses in arithmetic expressions to avoid ambiguity,
(h) placing each decision in a program and its associated action as close together as possible (in order to make their relationship clear),
(i) avoiding the use of pointers, if possible,
(j) avoiding the use of global variables (see page 520),
(k) avoiding the use of temporary variables,
Computer hardware and software
518
(l) using comments where this can help in understanding the code (but avoid over-commenting – see pages 520–521),
(m) having the program test data inputs to ensure that they are valid and plausible (see page 521),
(n) using a language’s good features, and avoiding its bad ones (examples of the latter being: pointers in C and GOTOs in Fortran).

Further details about some of these issues are given below. More information about good programming style is provided in Refs. [46] and [45]. Some generic and language-specific good programming practices for a number of different programming languages are presented in Ref. [36].
Graphical programming style

If a graphical programming language is being used to write a program for data acquisition (e.g. when using LabView or Agilent VEE software), other stylistic considerations are also important. In the case of LabView, good programming practice would involve, for example [59]:

(a) using a display resolution of 1280 × 1024 (neither too low nor too high),
(b) ensuring that data flow through the “wires” only from left-to-right across the screen,
(c) avoiding bends in the wires as much as possible,
(d) restricting the size of block diagrams to one visible screen or, if this is unfeasible, ensuring that scrolling need be done in one direction only,
(e) not using global and local variables as a replacement for wires (the use of such variables may be tempting, in order to eliminate wires and thereby minimize wire clutter),
(f) avoiding the use of sequence structures, if their contents do not have to execute sequentially,
(g) not obstructing the view of nodes and wires,
(h) clustering wires involving related data,
(i) limiting the lengths of wires so that the source and destination are simultaneously visible on the screen,
(j) labeling long wires, and those connected to source terminals that are hidden.
Also, it is generally best to use “formula nodes” when non-trivial mathematical expressions are being represented. This is only a partial list. Stylistic issues in LabView graphical programming are discussed in detail in Ref. [59].
13.12.3.5 Structured programming

When detailed line-by-line design work is being carried out, it is desirable to make use of structured programming techniques [45]. This involves the use of programming constructs (blocks of code) in which there is only one entry point and one exit point, and no other entries or exits within the block. The intent of the structured programming approach is to prevent control from hopping around unpredictably within a program, which can obscure its logic.
519
13.12 Writing software
Thus, in structured programming, one tries to avoid using “GOTOs” (and their associated statement-labels), and similar instructions that can be used to pass control to any other part of the program. (This is particularly important if the transfer of control would be to a statement that is far away.) Instead, one attempts to use one of three standard types of construct, which when used in combination should suffice for most programming work. These are as follows [45].

(a) A sequence. This is a collection of one or more statements that are executed in the order in which they are listed.
(b) An iteration (or loop). This causes a sequence to repeatedly execute until some condition is satisfied. Examples of iteration-type control structures are the while-do and for-do statements in Pascal.
(c) A selection. This causes the program to choose different sequences for execution, depending on some logical condition. Examples of selection-type control structures are the if-then-else and case statements used in Pascal.

Flowcharts of these are shown in Fig. 13.3 [60]. One should always think about programs in terms of these standard constructs. By using them, it should be possible to write programs that can be read directly from top to bottom, essentially in the order in which their statements will be executed. Other types of construct may make programming more convenient, but will probably also make the resulting code more complicated and difficult to read. The increased complexity that results from the inadequate use of control structures has been shown to raise error rates and reduce reliability [45].
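All three standard constructs can be seen together in a short sketch (in Python; the task of summing the even numbers in a list is an invented example). Control enters at the top and leaves at the bottom, with no jumps into or out of the blocks:

```python
def sum_of_evens(numbers):
    # (a) sequence: statements executed in the order listed
    total = 0
    # (b) iteration: repeat a sequence until the condition is satisfied
    for n in numbers:
        # (c) selection: choose between sequences based on a condition
        if n % 2 == 0:
            total += n
    return total
```

The routine can be read straight through from top to bottom, in essentially the order in which its statements execute.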
13.12.3.6 Naming of variables and routines

An important way of improving the readability of computer code is to carefully select the names of variables. Such names should be [45]:

(a) composed so as to provide a full and accurate description of the quantity represented by the variable (e.g. “particle_energy”, as opposed to “particle” or “E”),
(b) obviously connected with the real-world problem that the program is intended to solve, rather than internal program problem-solving tasks,
(c) specific enough that the variables cannot be confused with other variables used for different purposes (hence, for example, names such as “i”, “temp” and “x” can be used in many different roles, and so should be avoided),
(d) sufficiently long that their meaning is immediately clear (i.e. not short and cryptic) – perhaps between about 8 and 20 characters long in general,
(e) part of some consistent variable naming scheme.

Each variable should be used for only one purpose within a program [36]. Having full, accurate and specific names for them helps to ensure this, because a variable that has been named for a given purpose will seem unsuited for another one.

Names used for loop indexes and status variables should be chosen in such a way as to emphasize their role. For example, the use of the letters i, j, and k for loop indexes (which is very common) can cause confusion. This is especially true if the index variable is to be used
outside the loop, or if nested loops, with several indexes, are used (e.g. j may be mistaken for k). In such cases, longer and more descriptive names are preferable. A detailed discussion of choosing variable names can be found in Ref. [45].

The naming of routines should be given the same consideration as the naming of variables. The name should indicate clearly and precisely everything that the routine does, and be long enough to be readily understandable [45]. As in the case of variables, some consistent scheme should be employed for naming routines. In the case of functions, a good approach is to name these after the values that they return (e.g. sin()). For procedures, one might use a verb (describing what the routine does), followed by a noun (the object of the routine’s operation). Two examples would be “calc_energy()” and “measure_pressure()”.
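The naming advice above can be shown with a small before-and-after sketch (in Python; the physics content is an invented example, and both routines compute the same quantity):

```python
# Cryptic: the reader must guess what t, p, and calc() mean.
def calc(t, p):
    k = 1.380649e-23
    return p / (k * t)

# Descriptive: a verb-plus-noun routine name and full variable names
# (ideal-gas number density n = P / (k_B * T)).
def compute_particle_density(temperature_kelvin, pressure_pascal):
    BOLTZMANN_CONSTANT = 1.380649e-23  # J/K
    return pressure_pascal / (BOLTZMANN_CONSTANT * temperature_kelvin)
```

The second version documents itself: the units, the physical meaning, and the direction of the calculation are all visible from the names alone.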
13.12.3.7 Global variables

The use of data that can be accessed and changed by any part of a program (“global variables” or “global data”) is a source of numerous problems [45]. An important casualty of the use of global variables is the breakdown of modularity. While working with them, one must concern oneself with how they are going to affect the entire program, and not just a particular routine. For example, one might erroneously treat a global variable in one part of a program as if it were a local variable, by changing it and expecting that elsewhere it will remain unchanged. This change can then result in incorrect behavior in other routines that use the same variable.

Another potential source of difficulties results from referring to a global variable by more than one name (“aliasing”). Aliasing can occur in the following way. Suppose that the global variable is passed to a routine. Furthermore, suppose that it is subsequently used by the routine as a parameter (which goes by a different name within the routine) as well as a global variable (which goes by its usual name). It is then found that if the parameter is changed, the global variable “mysteriously” changes in the same way.

Other difficulties can also occur when global variables are employed. There are normally other, better ways of achieving a given task without using them [45]. A good way of avoiding the unnecessary introduction of global variables is to initially make variables local, and change a variable into a global one only when it is clear that there is no alternative. Global variables should be given names that clearly mark them as such.
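The hazard can be sketched in a few lines (in Python; the variable and routine names are invented). A local-looking update to a global variable in one routine silently changes what every other routine sees, whereas passing the state explicitly keeps the routines decoupled:

```python
run_count = 0  # global state, visible to every routine in the module

def record_measurement():
    global run_count
    run_count += 1       # looks like a harmless local update...

def measurements_taken():
    return run_count     # ...but every other routine sees the change

# Safer alternative: the state is passed in and returned explicitly,
# so the routine's effect is fully visible at the call site.
def record_measurement_local(count):
    return count + 1
```

In the explicit version, nothing outside the routine can change behind the caller's back, which is exactly the modularity that global data destroys.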
13.12.3.8 Code documentation

The documentation of programs, in the form of comments, is frequently a troublesome issue. Usually, and especially in the case of programs written in laboratory research environments, the main problem is that comments are used either too infrequently or not at all. However, it is also possible to put too many comments in a program (“over-comment”), which can reduce its readability.

Computer code should be largely self-documenting [46]. Comments should not merely echo the code, or be used in an attempt to clarify poorly written code. If the code is unclear, it should be rewritten. The code and its accompanying comments should complement each other.
Comments are most valuable when they are used either to [45]:

(a) leave a temporary marker for the programmer to indicate uncompleted work,
(b) briefly summarize a section of code,
(c) describe the code’s purpose, or
(d) supply information that cannot be provided by the code.
The second and third goals could be thought of as representing the code at a higher level of abstraction (in the same way that pseudocode does this). Examples of the fourth goal would be: providing the name of the author, sources of the algorithms used in the program, and references to other documentation. If the code is part of a program used for data collection, comments can indicate how the code deals with the data-collection hardware. For example, the hardware may have some unusual behavior that must be accounted for by the software. The reason for doing this may not be obvious by reading the bare code.

Studies carried out by IBM have revealed that programs with about one comment for every ten statements of code are generally the most understandable [45]. As the density of comments rises above or falls below this value, code tends to become progressively less clear. It must be emphasized that this correlation between comment density and code clarity is the consequence of good commenting practice, rather than the cause of it. That is, it is not the density of comments itself that is important, but the reasons for having a comment at a given place in a program.

If software is designed in pseudocode before being converted into an actual program, the pseudocode can (as mentioned earlier) be used as comments at a necessary and sufficient level of detail. This approach will probably result in about one comment every few statements.

If the code is modified in order to eliminate bugs, or for some other reason, it is very easy to forget about changing the comments as well. If the programming method involves making changes to the pseudocode before altering the actual code, and the pseudocode is used as comments, this problem should not occur.
13.12.3.9 Testing program inputs for errors

Typing errors are extremely common. Therefore, if a program must read inputs from a human operator, it should be provided with a way of checking these to ensure that they are valid and reasonable [45,46]. Erroneous input should not cause a program to crash or generate invalid output. For example, if a number must be provided, and letters are typed instead of digits, the program should indicate this to the operator, and request that the data be re-entered. The same thing should happen if a number is out of range.

The program should make it easy to proofread input by, for example, using a sufficiently large font size, and by placing a liberal amount of space between successive data. If it is particularly important that a given data entry is correct, the computer can be instructed to request that the data be entered twice. Measures of this kind are part of what is often referred to as “defensive programming.”

It may also be appropriate to guard against errors in data generated during experimental measurements (perhaps owing to a faulty sensor), or retrieved from a file.
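A minimal sketch of such a defensive input check (in Python; the routine name, temperature range, and message wording are invented for illustration):

```python
def parse_temperature(text, minimum=0.0, maximum=400.0):
    """Validate an operator-typed temperature: report problems, don't crash.

    Returns (value, "") on success, or (None, message) on bad input.
    """
    try:
        value = float(text)
    except ValueError:
        # Letters (or other junk) typed instead of digits.
        return None, "not a number - please re-enter"
    if not (minimum <= value <= maximum):
        # Numerically valid, but physically implausible.
        return None, f"out of range ({minimum}-{maximum} K) - please re-enter"
    return value, ""
```

The calling code loops, displaying the message and re-prompting, until an acceptable value is returned; invalid input never propagates into the rest of the program.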
13.12.3.10 Common programming errors

Some particularly frequent causes of bugs in programs are as follows.

(a) Uninitialized variables. The use of variables that have not been initialized is the most common mistake in most programming languages [36,61]. Care should be taken to ensure that variables are always explicitly set to some initial value before being used, and reset before they are used again in a routine or an inner loop [46].

(b) Pointer difficulties. The use of pointers (available in some languages, such as C and C++) is a major source of problems [36,45]. Employing incorrectly initialized or uninitialized pointers is one type of blunder that can lead to very subtle bugs. Incorrectly releasing pointers (i.e. failing to release memory along with the pointer, or making more than one deallocation call for a given block of memory) is also a frequent source of difficulties; these include “memory leakage,” which can cause system crashes. The use of pointers should be avoided, if possible.

(c) Clerical errors. Typographical errors (such as spelling mistakes) form a very large fraction of the errors that occur in software construction [45]. Even what may appear to be minor mistakes (e.g. typing a “.” instead of “,”) can cause major problems. Pre-existing code (e.g. programs in books) should not be transcribed by hand, if possible. Instead, one should find out whether the software is available in a digital format, which would make manual transcription unnecessary. See the discussion on transcription errors on page 20. Difficulties sometimes arise because mathematical expressions resulting from one’s own calculations are incorrectly transcribed. Some computer algebra systems make it possible to avoid this problem by automatically generating computer source code directly from expressions that have been created within the system (see page 46).

(d) Off-by-one errors.
An important class of errors involves doing something once too few or once too many times [46]. This often occurs because of a faulty logical comparison of integers (such as incorrectly using “>” instead of “>=”). Alternatively, the limit of a for-do loop may be higher or lower than it should be by one.

(e) Array indices are allowed to go out of bounds. It often happens that an error in the program, or incorrectly entered data, causes an index of an array to exceed the array’s dimensions [46]. The insertion of a simple test in the program can ensure that this does not happen.

(f) Problems at a program’s internal boundaries. As in the case of hardware, bugs in software often occur due to mishaps at boundaries: special places in the code where decisions are made, or points during execution when data values are unique in some way (e.g. when a loop index equals 1) [46]. Problems very rarely occur in the middle regions, away from the boundaries. Hence, special attention should be given to checking the code for correctness at its boundaries.

(g) Using floating-point numbers for counting and program control. The use of floating-point numbers for counting, and tests for exact equality between such numbers (for the purpose of making control-flow decisions), is a source of trouble [46]. This is because of small errors in computed
floating-point numbers that will cause tests of this kind to fail. Integers, not floating-point numbers, should be used for counting. If real numbers are to be compared for equality, this must be done on the basis of being “nearly the same,” not “exactly equal.”

Useful lists of common programming blunders can be found in Refs. [46] and [61]. Some common mistakes in LabView graphical programming are listed in Ref. [62].
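The floating-point pitfall (g) can be demonstrated in a few lines (in Python; the helper name is invented, and `math.isclose` is used here as one way of implementing the "nearly the same" comparison). Ten additions of 0.1 do not sum to exactly 1.0 in binary floating point, so a test for exact equality fails:

```python
import math

def nearly_equal(a, b, rel_tol=1e-9):
    # "Nearly the same" comparison, as recommended in the text.
    return math.isclose(a, b, rel_tol=rel_tol)

total = 0.0
for _ in range(10):
    total += 0.1        # each 0.1 carries a tiny representation error

exact_test = (total == 1.0)            # fails: total is 0.9999999999999999
robust_test = nearly_equal(total, 1.0)  # succeeds
```

Had `total` been used to control a loop via an exact-equality test, the loop would never have terminated; an integer counter avoids the problem entirely.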
13.12.3.11 Manual inspection of completed code

The desk checking of a program by the person who wrote it is the final step in constructing a piece of software, prior to compiling and testing it. This involves reading through the routines, and watching out for conflicts with the written code requirements, problems with the overall design, and common coding mistakes [45]. One should also mentally execute each path in a routine and look for errors. (Since the number of possible paths through a routine can grow exponentially with its size, this is another good reason for breaking up a large program into many small routines [50].)

The code should be studied carefully. It is not enough to have a routine that works when it is compiled and run. One should be able to fully understand why it works [45]. In particular, one should avoid the (very common) approach of hurriedly compiling and running a routine without acquiring this understanding. This often gives rise to a sequence of hasty and error-prone alterations to the code, followed by recompilation and rerunning, which drags out the process of creating reliable software.

Getting colleagues to check the code as well can be very beneficial. This is partly because different people are inclined to detect different errors [45]. Various ways of doing this (“inspections,” “code reading,” and “walk-throughs”) are discussed in Ref. [45]. Some methods are more effective than others. The more formal, structured, and disciplined ones tend to be the best.
13.12.4 Testing and debugging

13.12.4.1 Introduction

Although the terms “testing” and “debugging” are often used synonymously, they actually refer to two different stages in creating software. Testing involves exposing errors in a program, while debugging deals with finding and correcting the errors that have been detected during testing. It can hardly be emphasized enough that reliable software is created primarily through the use of good program design and construction practices – not by testing and debugging.

Testing and debugging can be greatly simplified by working on each module within a program in isolation from the rest (in the top-down development fashion described on page 512). In the case of routines, this is done by passing each routine dummy data from mock routines (or “stub routines”) instead of output from actual lower-level routines [45].
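The stub-routine idea can be sketched briefly (in Python; the routine names and sensor scenario are invented). The high-level routine is tested against a stub that stands in for a lower-level routine which may not yet exist, or which talks to real hardware:

```python
def read_sensor_stub():
    # Stub: returns fixed dummy data instead of querying real hardware.
    return [20.0, 21.0, 22.0]

def average_temperature(read_sensor):
    # High-level routine under test; the data source is passed in,
    # so a stub and the real routine are interchangeable.
    readings = read_sensor()
    return sum(readings) / len(readings)
```

Because the stub's output is known exactly, any wrong result must come from the high-level routine itself, which isolates the module being tested.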
13.12.4.2 Testing

Psychology is an important element in testing and debugging. For example, it is important to have an attitude of wanting to find errors in the program, and of assuming that errors will be present [45]. If the person who wrote the program is also doing the testing, this will be an unnatural state of mind. However, if one does not adopt this viewpoint, it is likely that errors will be overlooked.

The data used to test routines should be chosen with care. It is particularly important to exercise those parts of a routine near boundaries, which are important locations for errors (as discussed on page 522) [45]. Test data may also be chosen to expose a routine to:

(a) data values above certain maximum or minimum allowable levels, combinations of data that cause allowable levels to be exceeded, unusually long character strings, etc.,
(b) other types of bad data (invalid data, uninitialized data, no data, too much data, etc.),
(c) good data values within the expected range.

In some situations, a good way of testing software is to use random number generators to create the test data [45]. These can readily produce unusual combinations of data that might lead to faults. Such generators can be much better at exercising a routine than a human tester.

It is important to carefully determine what the results of a particular test should be [50]. If these are incorrect, the program, rather than the test, might be blamed. Programs are, unfortunately, often tested with less care than is used in their construction [45].

The use of symbolic debuggers can be a very helpful supplement to other types of testing [45]. Debuggers are applications that allow one to execute a program in such a way that information can easily be obtained about details of its behavior. For example, they make it possible to execute a program very slowly, one step at a time, and see clearly how it would function if it were compiled and run in the normal way.
Although the primary purpose of debuggers is for diagnosing errors that testing has already detected, with imagination they can also be fruitfully employed in testing itself. The values of variables can be monitored as they change, and the flow of control can be tracked, so that it can be seen whether the program is working as intended. The use of a debugger for this purpose is similar to mentally stepping through a program listing, or having colleagues do the same. It may allow one to detect anomalies that have been missed by the other approaches. (However, it should not be viewed as an alternative to these approaches.)
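The kind of monitoring just described can be mimicked in miniature without a full debugger. In the sketch below, Python's sys.settrace hook is used as a toy stand-in for a debugger's step-and-watch facility; the traced routine, running_max, is a made-up example:

```python
import sys

# Toy illustration of what a debugger does when used for testing: record
# each line executed in a routine, and watch a variable ('best') change.
# The routine being traced is a hypothetical example.

def running_max(values):
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

trace_log = []

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "running_max":
        # Record the line number and the current value of 'best', if bound yet.
        trace_log.append((frame.f_lineno, frame.f_locals.get("best")))
    return tracer

sys.settrace(tracer)
result = running_max([3, 1, 4, 1, 5])
sys.settrace(None)

print(result)  # expect 5
print(len(trace_log) > 0)  # the flow of control was recorded: expect True
```

Inspecting trace_log shows both the order in which lines were executed and how the monitored variable evolved, which is exactly the information one checks when stepping through code to confirm that a program is working as intended.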
13.12.4.3 Debugging

General approach
The best way of debugging code is to use a systematic, scientific method. This involves [45]:
(a) stabilizing the fault, so that it occurs predictably,
(b) locating the cause of the fault,
(c) correcting the code that caused the fault,
(d) testing the corrected code, and
(e) looking through the program for other errors of a similar kind.
Stabilizing and locating faults
The stabilization of faults can be a very difficult part of the debugging process [45]. Most intermittent faults are caused by:
(a) not initializing variables properly,
(b) pointer problems, or
(c) issues arising because the operation of the program is sensitive to timing.

Stabilization is often best done not by just reproducing the fault, but by finding the simplest possible set of test conditions that will do this. (That is, a set so minimal that modifying any feature of the conditions changes the behavior of the fault.) One then carefully changes the test conditions in a controlled way and observes how the program behaves. This should make it possible to determine the problem.

Locating the cause of a fault is done by framing hypotheses about it, and conducting experiments to test them. (This is the scientific method.) A knowledge of common programming blunders can be useful in framing such hypotheses (see pages 522–523). In pinpointing the location of the defective code, it is often useful to try and reproduce the fault in several different ways.
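One mechanical way to find such a minimal set of conditions is to shrink a failing input until nothing more can be removed, a simplified form of what is sometimes called "delta debugging." The sketch below (Python; buggy_sum and its fault are contrived for illustration) greedily drops elements of a failing input while the failure persists:

```python
# Sketch of stabilizing a fault by shrinking a failing input to a minimal
# test case. buggy_sum (a contrived example) fails whenever its input
# contains None.

def buggy_sum(xs):
    return sum(xs)  # raises TypeError if any element is None

def fails(xs):
    try:
        buggy_sum(xs)
        return False
    except TypeError:
        return True

def minimize(xs):
    """Greedily drop elements while the fault still occurs."""
    i = 0
    while i < len(xs):
        trial = xs[:i] + xs[i + 1:]
        if fails(trial):
            xs = trial   # still fails without this element: drop it
        else:
            i += 1       # element is needed to provoke the fault: keep it
    return xs

big_failing_input = [1, 2, None, 4, 5, 6, 7]
print(minimize(big_failing_input))  # expect [None]: the minimal failing case
```

The reduced case makes the hypothesis about the cause (here, an unexpected None in the data) almost self-evident, whereas the original seven-element input obscured it.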
Debugging tools
As in the case of scientific investigations, one normally uses "instruments" to carry out the experiments. These software instruments include the compiler (i.e. using the error and warning messages provided by this), syntax and semantics inspection tools, and interactive symbolic debuggers.

Much information can be obtained by setting the compiler so that it is most sensitive to problems with the code. Conversely, it is never a good idea to turn off the compiler's error messages, or to ignore them. All bugs reported by the compiler should be fixed before moving on to other, more subtle ones.

Other syntax and semantics inspection tools (often called "static code analyzers") can be used to highlight subtle problems that the compiler misses. These will detect anomalous code, which (although it may not be erroneous according to the standards of the language) is unusual, and perhaps symptomatic of an underlying problem. The presence of uninitialized variables is one example. Others include the assignment of values to variables that are never used, and the existence of code that cannot be accessed by the program. One such tool is the open-source "lint" utility that is often employed to screen C programs. The use of lint (or its most recent incarnation "splint") can help to make up for the relatively high tolerance of the C language to unsafe programming practices. Static code analyzers are also available for checking programs written in other languages.

Symbolic debuggers are very powerful tools for determining the causes of bugs [45]. A good debugger can perform many useful functions. For example, one common feature is the ability to set breakpoints in a program. These can be set up to halt the program if a particular line is encountered during execution, or if the line has been encountered a given number of times, or if a specific value is assigned to a variable.
As discussed in the context of testing, debuggers enable one to step through a program one line at a time and track
the flow of control. The better kinds also make it possible to execute the code backwards, step-by-step (i.e. “reversible debugging”). By this means one can return to the point in the code where an error has occurred. A good debugger is almost indispensable for diagnosing defective software.
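Returning to static code analyzers: the kind of anomaly they detect – for example, a variable that is assigned but never read – can be illustrated with a toy analyzer built on Python's ast module. (The analyzed source is invented, and real tools such as lint, splint, or pylint perform far deeper checks than this sketch.)

```python
import ast

# Toy static analyzer in the spirit of lint: flag variables that are
# assigned but never read. The source being analyzed is a made-up example.

source = """
def area(r):
    pi = 3.14159
    unused = 42
    return pi * r * r
"""

tree = ast.parse(source)
assigned, used = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)   # name appears as an assignment target
        else:
            used.add(node.id)       # name appears in a load (read) context

suspicious = sorted(assigned - used)
print(suspicious)  # expect ['unused']
```

Like the anomalies described above, the flagged code is not erroneous according to the rules of the language, but it is unusual enough to deserve a second look.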
Precautions in changing code
Before changing erroneous code, it is very important to ensure that one understands the actual problem, and that the proposed repair will have the intended result, without causing unwanted side effects [45]. Changes to software in general often introduce new bugs. This effect is most pronounced for small changes. In such cases the resulting errors are often particularly obscure and hard to eliminate [1]. The probability of creating errors may be more than 50% if the change involves only a few lines of code [45]. (Of course, this phenomenon also has important implications when the intent is not to debug a program, but to alter it in order to change its function.) Before changing the code, it is very important to make an archival copy of it, so that nothing is lost if the changes do not work out.

One very risky, but common, approach to program repair is to make changes to the code at random, on the basis of guesses (rather than knowledge) about the nature of the underlying defect. This is done in the hope that, by good fortune, one of these alterations will fix the problem. It may happen that, at least for a particular test case, one hits upon a change that appears to work. However, it is likely that such a change will not cause the program to function properly in more general situations. This type of approach also undermines one's confidence in the overall correctness of the program.

A related, and even more dangerous, debugging technique involves fixing the symptoms of a program's aberrant behavior, rather than the underlying cause. For example, if some code produces an incorrect output for one value of an input parameter, such a "repair" might involve setting the output to the correct value as a special case. However, this is likely to lead to incorrect output in other situations. Furthermore, if such an approach is used throughout a program, it makes the code very difficult to maintain.
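The difference between repairing a symptom and repairing the cause can be seen in a contrived example. Here a routine for the mean of a set of readings contains an off-by-one error in the divisor; the "symptom fix" special-cases the one input that happened to be tested, while the real repair corrects the divisor (all names and numbers below are invented):

```python
# Contrast between fixing a symptom and fixing the cause (made-up example).
# A routine for the mean of n readings is found to give a wrong answer
# for the tested case n = 2.

def mean_symptom_fix(readings):
    if len(readings) == 2:                        # special-cased "repair"
        return (readings[0] + readings[1]) / 2
    return sum(readings) / (len(readings) - 1)    # the real bug is still here

def mean_cause_fix(readings):
    return sum(readings) / len(readings)          # off-by-one divisor corrected

print(mean_symptom_fix([2, 4]))     # looks right: 3.0
print(mean_symptom_fix([2, 4, 6]))  # still wrong: 6.0 instead of 4.0
print(mean_cause_fix([2, 4, 6]))    # correct: 4.0
```

The symptom-level patch passes the original test case while leaving the defect in place for every other input, which is precisely why such repairs lead to incorrect output in other situations.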
One should have confidence that making a change to the code is going to eliminate the bug without causing other problems [45]. Situations in which a correction turns out to be ineffective or counterproductive should be very unusual. If large numbers of bugs are found during testing and debugging, it is often better to redesign and rewrite the routine (or even the entire program) from scratch rather than try and patch it up [45]. If it seems necessary to hack with the code (i.e. make unsystematic, ad-hoc repairs) in order to get it to work, it is likely that the code is not understood properly. Such an approach usually leaves errors that will cause problems at some stage. In cases like this, the code should be abandoned and rewritten.
13.13 Using old laboratory software

In experimental work, one is sometimes obliged to use apparatus that was originally configured by someone else. Such a setup frequently includes a computer that uses homemade
software (perhaps written within a commercial data-acquisition application) to control the experiment and collect data. Such software is often created in the absence of any knowledge of good programming practices, or perhaps even any experience in computer programming. Hence, the code is frequently badly structured, undocumented, and difficult or impossible to understand by anyone other than its authors. These people may have left the research group. Nevertheless, because the need to get experimental results seems so pressing, the software is frequently retained rather than rewritten. This is usually a bad idea.

If one is to have faith in the experimental results, it is essential that somebody who fully understands the software be attached to the project. Otherwise, the software's limitations, idiosyncrasies, and known bugs cannot be accounted for. Furthermore, it will not be possible to maintain or modify the software if this becomes necessary. If those who are running the experiment cannot understand the code, and there is no one else in a position of responsibility who can, the code should be abandoned. This can admittedly be a very difficult decision to make. Useful strategies for understanding and maintaining other people's programs are discussed in Ref. [50].
Further reading

Things change quickly in the computer world. At the time of writing, Ref. [4] provides a good, but somewhat dated, discussion of hardware issues. (Much material in this reference, and especially information that does not refer to specific types of hardware, is still very useful.) PC hardware and software problems are covered well in Ref. [10]. Computer security is discussed in Ref. [30].

When it comes to writing reliable software, the situation is much more stable. In fact, the basic principles of creating code that is correct and reliable have not changed very much since the mid-1970s. An excellent, and very practical, book on writing high-quality software in general is Ref. [45]. Another classic work, which has the advantage of being concise, is Ref. [46]. Very useful information on graphical programming in data-acquisition applications (in particular, LabVIEW) is provided in Ref. [59].
Summary of some important points

13.2 Computers and operating systems

13.2.1 Selection
(a) Because of multifaceted hardware, software, data storage, security, and interconnection vulnerabilities, and for other reasons, it pays to use high-quality computer equipment and software.
(b) Consumer-grade PCs are generally cheaply made and unreliable. Business-class machines are normally a much better choice for laboratory work.
(c) Mainly because of the relative scarcity of computers that use Mac OS and Linux, these machines are largely unaffected by security problems.

13.2.2 Some common causes of system crashes and other problems
(a) In addition to intrinsic operating-system and malware-related problems, some other software-based causes of computer system misbehavior are: (A) buggy, corrupted or improperly installed applications, (B) incorrect or incomplete operating system upgrades, (C) corrupted core files, and (D) too many applications running (watch out especially for background programs).
(b) Some hardware-related causes of computer system misbehavior are: (A) mains-power disturbances, (B) dislodged or damaged connectors and cables, (C) low-quality, overloaded, or failing power supplies, (D) bad drivers, and (E) overheating.
(c) Drivers (pieces of software that control hardware) are often faulty, and should be investigated first if a problem arises that appears to be caused by hardware.

13.3 Industrial PCs and programmable logic controllers
(a) If environmental conditions are particularly harsh (e.g. exceptionally high temperatures and humidity), it may be better to use an industrial PC, rather than an ordinary type that is designed for office environments.
(b) For controlling equipment and processes, another alternative is a programmable logic controller (PLC). These devices are physically very robust, and also have exceptionally reliable software.

13.4 Some hardware issues

13.4.1 Hard-disc drives
(a) Hard-disc drives are very vulnerable to overheating.
(b) Overvoltages on the a.c. mains can be another significant source of problems for these devices.
(c) The risk of data loss due to hard-drive failure can be greatly reduced by using a redundant disc (RAID) system.
(d) If a hard drive is making unusual sounds that may indicate physical damage (such as squeals or ticking noises), it should be switched off immediately.
(e) If there are indications that the hard drive is suffering problems that are not the result of physical damage, data-recovery software may be useful in retrieving information.
(f) Otherwise, or if the hard drive is likely to be physically damaged, it may be possible to recover data using a commercial "data-recovery service."
13.4.2 Power supplies
(a) Low-quality computer power supplies are often a cause of problems (such as crashes) that are frequently attributed to the operating system.
(b) They are also the components in PCs that are in most frequent need of replacement.
(c) A good-quality power supply should be used if a reliable, crash-proof computer is desired. Most PCs are not provided with power supplies of this kind.

13.4.3 Mains-power quality and the use of power-conditioning devices
(a) It has been estimated that the large majority of unexplained temporary computer malfunctions are caused by power-quality problems.
(b) Very brief losses of power can cause a computer to lock up.
(c) Large transient overvoltages can damage computer components outright or (if repeated) cause them to degrade over time.
(d) Computers and their associated equipment should always be provided with surge suppressors and power-line filters.
(e) For the highest levels of reliability (and especially if the computer does not have a high-quality power supply), consider getting an uninterruptible power supply (UPS).

13.4.4 Compatibility of hardware and software
Before buying any hardware and software that are intended to work together, make absolutely sure that they are compatible.

13.4.5 RS-232 and IEEE-488 (GP-IB) interfaces
(a) Although it is sometimes easy to connect a computer to another device using RS-232, it is generally not. Lack of compatibility between different versions of RS-232 is the major problem.
(b) RS-232 links are prone to data transmission errors in some cases, owing to a lack of handshaking, or noise caused by ground loops and other effects.
(c) If possible, use the IEEE-488 interface (GP-IB) in preference to RS-232.
(d) Ensure that all new instruments are provided with IEEE-488 capability, and preferably IEEE-488.2.
(e) If available, the USB interface is a good alternative.

13.5 Backing up information
(a) The importance of creating regular and systematic backups of important information, and making provisions for doing this easily, can hardly be overemphasized.
(b) It is especially important to back up before upgrading system software or hardware.
(c) Backups are often not done because of a lack of convenience, and hence it is important to arrange things so as to make the process as trouble-free as possible.
(d) If the backup is stored in the same location as the original information, both are at risk of destruction or loss from events such as fires, floods, and theft. Hence, it is important to ensure that backup copies are regularly taken to an off-site location.
(e) The use of an "online backup service" allows backups to be sent over the Internet to a secure off-site location. This ensures that backups are made automatically, with no human intervention, and stored separately from the originals.

13.6 Long-term storage of information and the stability of recording media
(a) Hard-disc drives should not be relied upon for the long-term storage of information (i.e. with the drives switched off and placed in storage for more than a few months).
(b) For this purpose, it is much better to use a storage medium (such as a CD) that can be separated from the device used to read and write the information.
(c) Accelerated ageing tests have shown that good-quality CDs and DVDs, stored correctly, should last for about 100–200 years or more.
(d) Properly stored magnetic tapes can last for at least 10–20 years.
(e) Obsolescence of a storage medium is usually more important than physical degradation.
(f) Migrate information from older storage media to newer ones as the latter become available.
(g) Store images as uncompressed PDF or TIFF files, and text as uncompressed ASCII or RTF ones.

13.7 Security issues
(a) A very important aspect of protecting computers from malware and Internet attackers is to promptly apply patches to the operating system and applications as they become available.
(b) Computers can be infected by viruses as a result of: downloading and opening an infected file from a Web site, opening infected email attachments, using infected storage media, and via peer-to-peer (P2P) file-sharing networks on the Internet.
(c) Antivirus software from a reputable vendor should be installed on a computer, and updated frequently. Hard-drive files should be scanned at least once a day.
(d) Firewalls are an important and effective means of protecting computers from attack over a network, but can themselves cause problems if they are not configured correctly.

13.8 Reliability of commercial and open-source software
(a) Software that has just been released tends to have a very large number of bugs and security holes. Hence, one should generally avoid "Version 1.0" of any software.
(b) Pre-release versions of software (i.e. "beta software") are often extraordinarily unreliable, and potentially hazardous to a computer system.
(c) Before purchasing new software, it is often worthwhile to ask the vendor about the number of users that already have it, known bugs, the software development process, and whether support is provided for older software.
(d) Pirated software may be unusually faulty, lacking in documentation and support, and is a potential source of malware.
(e) Open-source software, despite being free, is often of extremely high quality – perhaps rivaling or even exceeding that of closed-source commercial software.

13.9 Safety-related applications of computers
If a computer is used to control a system in a way that allows it to avert a potentially hazardous condition, it should always be backed up by a basic safety device based on an elementary physical principle (e.g. a pressure relief valve on a pressure vessel).

13.10 Commercial data-acquisition software
(a) The graphical programming languages used in several types of commercial data-acquisition application are very useful for creating very simple programs.
(b) However, as programs written in these languages become larger, they tend to become relatively disorderly, hard to debug, and difficult to maintain, in comparison with ones written in conventional text-based languages.
(c) For larger programs (especially if they have to be maintained by other users in the future), it is better to use a data-acquisition application that utilizes a text-based language.
(d) If a graphical programming language is to be used, it is very important to pay close attention to program structure and programming style.

13.11 Precautions for collecting experimental data over extended periods
(a) Since experiments involving the collection of data over long periods are vulnerable to computer problems, in such cases it is very important to pay attention to potential sources of trouble. These include operating-system stability, hardware problems (e.g. overheating), mains power anomalies, etc.
(b) It is also desirable to minimize the number of applications (foreground and background) that run simultaneously, and make sure that sufficient memory is available, considering the requirements of all the applications that will be running.

13.12 Writing software

13.12.1 Introduction
(a) Almost all program failures are the result of mistakes made by the programmer – not of other problems, such as computer hardware faults or bugs in the compiler.
(b) In general, software cannot be made reliable by testing and debugging. It can be made so only by ensuring that the code is substantially correct while it is being designed and written.
(c) Before embarking on any significant programming project, always find out whether any suitable software (e.g. a commercial application) already exists.
13.12.2 Planning ahead – establishing code requirements and designing the software
(a) Except for the most trivial cases, the construction of software should always be preceded by the creation of written requirements (what it is that the software should do) and overall plans (analogous to structural drawings for a building). These need not be elaborate or polished.
(b) Always break up a single large programming problem into a larger number of small ones by using modules (e.g. routines or classes).
(c) Generally, the overall program structure is represented by a tree-like hierarchy of modules. Development usually proceeds from the top down, starting with the module at the top of the hierarchy (e.g. the main program).
(d) Before writing a routine, find out whether one that performs the desired function already exists in a program library.

13.12.3 Detailed program design and construction
(a) A good method of working out intermediate-level aspects of the design of a program (before creating the actual computer code) is to write it out first in the form of pseudocode.
(b) The quality and readability of a program can be much improved by having two people write it, using the pair programming method.
(c) Code is read much more often than it is written. This is true even during the initial construction process. Hence, writing code that is easy to read and understand is extremely important.
(d) Clever but obscure code constructs (sometimes used because they make the code easier to write or more compact) should be avoided.
(e) In programming, it is very important to adopt a clear, consistent, and (preferably) widely accepted programming style.
(f) Programs should always be thought of in terms of certain standard constructs (sequence, selection, and iteration), the use of which ensures that control does not hop around unpredictably within a program. GOTO-type instructions and statement labels should generally be avoided.
(g) Establish a convention for naming program variables and routines, and use clear and descriptive names.
(h) Make the effort to use comments wherever this can help in understanding the code (e.g. for summarizing the code's purpose). However, code should be largely self-documenting, comments should not merely repeat the code, and one should not over-comment.
(i) If a program must read inputs from a human operator, it should be provided with a way of checking these to ensure that they are valid and reasonable.
(j) Certain types of programming error are made particularly often by most programmers. The most common is the use of variables that have not been initialized.
(k) It is important to avoid the tendency to hurriedly compile a program after completing it, without first studying it carefully and making sure that one understands why it should work.

13.12.4 Testing and debugging
(a) Software testing can be greatly simplified by testing each module (e.g. a routine) in isolation from the rest.
(b) Some thought should be used in selecting the data used to test routines, and the correct results of each test should be carefully determined in advance.
(c) Symbolic debuggers, although not intended for this purpose, can be very useful for testing code.
(d) The best way of debugging code is to use a systematic, scientific method – i.e. frame hypotheses about the causes of problems, and test these experimentally.
(e) Some very useful software tools for debugging are: the compiler (set to its most sensitive error-detection level), static code analyzers, and, especially, symbolic debuggers.
(f) Before changing a defective program, it is very important to ensure that one fully understands what the true problem is, and that the proposed repair will fix it without causing other difficulties. (Making changes to software often introduces new bugs.)

13.13 Using old laboratory software
Think carefully before using homemade laboratory software that has been written by someone else, especially if that person is no longer around. If it is not possible to fully understand how such software works, it is often best to abandon it and rewrite the software from scratch.
References

1. P. D. T. O'Connor, Practical Reliability Engineering, 4th edn, John Wiley & Sons Ltd., 2002.
2. R. B. Thompson and B. F. Thompson, Repairing and Upgrading Your PC, O'Reilly, 2006.
3. www.consumerreports.org
4. R. B. Thompson and B. F. Thompson, PC Hardware in a Nutshell, 3rd edn, O'Reilly, 2003. (Reference [2] is a useful follow-up to this book.)
5. ConsumerReports.org, Windows or Macintosh?, March 2006. www.consumerreports.org
6. T. Landau and D. Frakes, Mac OS X Help Line: Tiger Edition, Peachpit Press, 2005.
7. C. Easttom, Moving from Windows to Linux, Charles River Media, 2004.
8. D. P. Bovet and M. Cesati, Understanding the Linux Kernel, 3rd edn, O'Reilly, 2005.
9. T. Bove, Just Say No to Microsoft: How to Ditch Microsoft and Why It's Not as Hard as You Think, No Starch Press, 2005. (Despite this book's heavy bias against Microsoft, it contains useful information about migrating from Windows to Mac OS and Linux, as well as the availability of non-Microsoft applications software.)
10. K. J. Chase, PC Disaster and Recovery, Sybex, 2003.
11. J. M. Torres and P. Sideris, Surviving PC Disasters, Mishaps, and Blunders, Paraglyph, 2005.
12. H. Austerlitz, Data Acquisition Techniques Using PCs, 2nd edn, Academic Press, 2003.
13. F. D. Petruzella, Programmable Logic Controllers, 3rd edn, McGraw-Hill, 2004.
14. D. J. Cougias, E. L. Heiberger, and K. Koop, The Backup Book: Disaster Recovery from Desktop to Data Center, 3rd edn, Laurie O'Connell (ed.), Schaser-Vartan Books, 2003.
15. J. Van Meggelen, L. Madsen, and J. Smith, Asterisk: The Future of Telephony, O'Reilly, 2007.
16. Agilent Technologies, RS-232 troubleshooting – Application note. http://cp.literature.agilent.com/litweb/pdf/5989-6580EN.pdf
17. J. Axelson, Serial Port Complete: Programming and Circuits for RS-232 and RS-485 Links and Networks, Lakeview Research LLC, 2000.
18. J. Campbell, The RS-232 Solution, 2nd edn, Sybex, 1989.
19. J. Park, S. Mackay, and E. Wright, Practical Data Communications for Instrumentation and Control, Elsevier, 2003.
20. P. Horowitz and W. Hill, The Art of Electronics, 2nd edn, Cambridge University Press, 1989.
21. IPC Systems Limited, IEEE 488 troubleshooting – Application note. www.ipcsystems.ltd.uk/application_notes.html
22. i365 (A Seagate Company). services.seagate.com
23. Seagate Technology LLC, Pushbutton Backup External Hard Drives. www.seagate.com
24. D. Manquen, in Handbook for Sound Engineers, 3rd edn, G. M. Ballou (ed.), Butterworth-Heinemann, 2002.
25. W. C. Preston, Backup and Recovery, O'Reilly, 2007.
26. F. R. Byers, National Institute of Standards and Technology (NIST), Care and Handling of CDs and DVDs – A Guide for Librarians and Archivists, NIST special publication 500–252. www.itl.nist.gov/iad/894.05/docs/CDandDVDCareandHandlingGuide.pdf
27. R. Lu, J. Zheng, O. Slattery, F. Byers, and X. Tang, J. Res. Natl. Inst. Stand. Technol. 109, 517 (2004).
28. J. W. C. Van Bogart, Letter to the Editor, Scientific American, June 1995.
29. National Initiative for a Networked Cultural Heritage (NINCH), The NINCH Guide to Good Practice in the Digital Representation and Management of Cultural Heritage Materials, 2002. www.nyu.edu/its/humanities/ninchguide/index.html
30. T. Bradley, Essential Computer Security: Everyone's Guide to Email, Internet, and Wireless Security, Syngress, 2006.
31. E. Filiol, Computer Viruses: From Theory to Applications, Springer, 2005.
32. ConsumerReports.org, Cyber-Insecurity Special Section. www.consumerreports.org/cro/electronics-computers/resource-center/cyber-insecurity/cyber-insecurity-hub.htm
33. Microsoft Corporation, Security central. www.microsoft.com/security/
34. Apple Inc., Apple Product Security. www.apple.com/support/security/
35. F. S. Acton, REAL Computing Made Real: Preventing Errors in Scientific and Engineering Calculations, Princeton University Press, 1996.
36. NASA Glenn Research Center, Office of Safety Assurance Technologies, NASA Software Safety Guidebook (NASA-GB-1740.13). www.hq.nasa.gov/office/codeq/software/docs.htm
37. R. Longbottom, Computer System Reliability, John Wiley & Sons Ltd., 1980.
38. E. S. Raymond, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary – Revised edition, O'Reilly, 2001.
39. J. W. Paulson, G. Succi, and A. Eberlein, IEEE Trans. Software Eng. 30, 246 (2004).
40. National Instruments Corporation. www.ni.com
41. Agilent Technologies. www.home.agilent.com
42. measX GmbH & Co. KG. www.dasylab.com
43. T. R. G. Green and M. Petre, J. Vis. Lang. Comput. 7, 131 (1996).
44. J. Dempster, The Laboratory Computer: A Practical Guide for Physiologists and Neuroscientists, Academic Press, 2001.
45. S. C. McConnell, Code Complete, 2nd edn, Microsoft Press, 2004.
46. B. W. Kernighan and P. J. Plauger, The Elements of Programming Style, 2nd edn, McGraw-Hill, 1978.
47. Maplesoft, Inc., Maple. www.maplesoft.com
48. Wolfram Research, Inc., Mathematica. www.wolfram.com
49. The MathWorks, Inc., MATLAB. www.mathworks.com
50. N. J. Rushby, in Guide to Good Programming Practice, 2nd edn, B. L. Meek, P. M. Heath, and N. J. Rushby (eds.), Ellis Horwood, 1983.
51. I. Sommerville, Software Engineering, 2nd edn, Addison-Wesley, 1985.
52. J. Keogh and M. Giannini, OOP Demystified, McGraw-Hill Osborne, 2004.
53. G. J. Myers, Software Reliability: Principles and Practices, John Wiley & Sons, 1976.
54. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes: The Art of Scientific Computing, 3rd edn, Cambridge University Press, 2007.
55. Visual Numerics, IMSL Numerical Libraries. www.vni.com
56. Numerical Algorithms Group, NAG Numerical Library. www.nag.co.uk
57. GNU Scientific Library. www.gnu.org/software/gsl
58. Netlib Repository. www.netlib.org
59. P. A. Blume, The LabVIEW Style Book, Prentice Hall, 2007. (A useful summary of style rules can be found in the appendix.)
60. W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 1986.
61. G. J. Myers, T. Badgett, T. M. Thomas, and C. Sandler, The Art of Software Testing, 2nd edn, John Wiley & Sons, 2004.
62. J. Travis, LabVIEW for Everyone, 2nd edn, Prentice Hall PTR, 2002.
14 Experimental method
14.1 Introduction

The main purpose of this chapter is to examine certain trouble-prone aspects of experimental work that are often not considered, but if neglected can have major consequences. These include errors caused by the subconscious biases of experimenters, which are often the origin of mistaken conclusions in scientific research. In those areas of research that involve the study of material samples, very serious errors are commonly made because the true compositions of the samples are different from what is believed. This issue is also discussed below. The chapter also looks at the problems that can arise in reproducing the experimental measurements and techniques of other researchers. Several of the points are illustrated by historical examples. Some potential pitfalls in the analysis of data are discussed here, and also in Section 2.2.7.
14.2 Knowing apparatus and software

It is important to understand (at least at the conceptual level, if not in all the details) the inner workings of one's apparatus and software, and the theories they are based on. The effort needed to acquire this understanding is often avoided by experimenters. This tendency is encouraged by the ubiquitous presence in laboratories of highly automated commercial instruments, which are easy to treat as black boxes.

Having a conceptual understanding of apparatus and software can help to reduce human errors that might otherwise occur during their use (see page 17). Also, without such an understanding, the limitations and eccentricities of these things may not be readily perceived. For example, if one is unaware that a particular power supply works by rapidly turning the power on and off (i.e. it is a switching supply), the potential for this device to cause electromagnetic interference in nearby sensitive equipment may go unrecognized. This sort of incidental non-ideal behavior is not always discussed in instruction manuals.

If an apparatus has been homemade, it is likely that no manual will be available.1 As a consequence, and because such apparatus tend not to be highly engineered, all sorts of undocumented non-ideal behavior may be present. This will be mysterious to those who were not involved in its construction, and are unacquainted with the underlying technology. Certain fundamental limitations of the apparatus (e.g. noise in amplifiers) are also unlikely to be documented in such cases. These too will be obscure if the technology is not understood. Furthermore, having knowledge of the inner workings of apparatus (whether commercial or homemade) makes it much easier to foresee problems when they are used for purposes other than those for which they are specifically intended. Such situations are not uncommon.

It is particularly important to know about apparatus and software that is responsible for making primary measurements (i.e. sensors, transducers, etc.), and processing the resulting signals and data (i.e. amplifiers, signal averagers, data-acquisition software, etc.). (These are contrasted with devices that perform auxiliary operations, such as stabilizing temperature or reducing vibration.) One should know what a measuring device actually senses, and whether the assumptions that are involved in its intended use are applicable under given circumstances. While it is usually safe to ignore the detailed mechanisms that underlie the operation of instruments and software downstream from the sensor, their major functions should be understood. For instance, although one does not need to comprehend the schematic diagram of a phase-sensitive detector, having an understanding at the block-diagram level, and knowing quantitatively how it processes the signals, is desirable.

Data collection and analysis software should be subjected to particularly careful study. Such software is often homemade. Moreover, software generally (in contrast with hardware) is especially prone to having defects, and these are often insidious and hard to eliminate (see Chapter 13). Data collection and analysis software is among the first things that should be suspected if it is thought that some experimental results may be incorrect.

1 Such a deficiency should be rectified, if possible – see pages 18–19.
(Some potential problems with data analysis are discussed on page 41.)

If possible, it is desirable to have a quantitative understanding (at least in an order-of-magnitude sense) of what happens to experimental information (signals and data) as it passes along the instrumental chain from the sensor to the data-acquisition software. One should be able to estimate the size of the expected signal, and how it is modified by passage through various stages of the system. The sources, strength, and perhaps frequency dependence, of noise in the system should also be known, and it should be possible to determine how this will affect the signal-to-noise ratio of the final result. (This can admittedly be very difficult to do if the noise arises as a result of external disturbances, such as electromagnetic interference or vibrations.) Having a quantitative understanding is especially important when the apparatus is being used at the limits of its capabilities.
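As an illustration of such an order-of-magnitude budget, the sketch below propagates an assumed signal and two assumed noise sources through a simple sensor–preamplifier chain. All of the numbers (sensor output, gain, source resistance, amplifier noise density, bandwidth) are invented for the example, not taken from any particular instrument:

```python
# Sketch: order-of-magnitude signal and noise budget for a simple
# measurement chain (sensor -> preamplifier -> detector).
# All numerical values below are illustrative assumptions.

import math

signal_v = 1e-6          # expected sensor output: 1 uV
gain = 1e3               # preamplifier voltage gain

# Noise sources referred to the preamplifier input:
T = 300.0                # K, temperature of the source resistance
R = 1e3                  # ohm, source resistance
k_B = 1.380649e-23       # J/K, Boltzmann constant
bandwidth = 10.0         # Hz, detection bandwidth

johnson = math.sqrt(4 * k_B * T * R * bandwidth)   # thermal (Johnson) noise
amp_noise = 4e-9 * math.sqrt(bandwidth)            # assumed 4 nV/rtHz amplifier noise

# Uncorrelated noise sources add in quadrature:
total_noise = math.hypot(johnson, amp_noise)

snr = signal_v / total_noise
print(f"signal at preamp output: {signal_v * gain:.3e} V")
print(f"input-referred noise:    {total_noise:.3e} V rms")
print(f"expected SNR:            {snr:.1f}")
```

Beyond predicting the final signal-to-noise ratio, a budget of this kind makes it obvious which noise source dominates, and therefore where any improvement effort should be directed.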
14.3 Calibration and validation of apparatus

The need to calibrate measuring devices regularly in order to ensure that they are reading accurately is usually well understood, if often neglected. It may be possible to avoid having to do this if the experiment in which they are used can be made to employ comparative, rather than absolute, measurements, and the response of the devices to the quantity being
measured is known. This is often a practical and natural approach, and is definitely to be preferred whenever a choice is available. Nevertheless, even in such cases, it is often useful to perform some sort of calibration operation in order to ensure that the experimental system is working properly. For example, in the case of an experiment that involves measuring voltages, one might want to make sure that the measuring electronics responds in a linear way (e.g. that there are no kinks or hysteresis in a plot of measured versus applied voltage). For such a purpose, one would want to have a variable voltage source with known behavior. In the case of an experiment that involves measurements on material samples, the use of a test sample with known properties (i.e. a standard) may be a necessary part of a calibration operation.

A calibration also makes it possible to do an end-to-end test of the entire measurement system, so that one can ensure that all the hardware and data-collection software is behaving as expected. It is essential to ensure that peculiarities, such as abnormal offsets, nonlinearities and unexpected noise, are tracked down and eliminated, or at least understood and accounted for. (That is, apparatus and software should be validated.) Such peculiarities are often easy to downplay as being irrelevant, and ignored. However, they may be an indication that an apparatus is not understood properly, and perhaps is being used incorrectly. Alternatively, they could be a sign that the apparatus is damaged, some software is buggy, or that other problems exist. Furthermore, if this control and understanding can be achieved, it may be possible to detect subtle new phenomena that would not have been discernible otherwise. (See the example in Section 14.12.)

Calibration and validation are especially important in the case of apparatus that is new, recently modified, or which has just been repaired.
They are also desirable if an instrument or device has been subjected to an extreme physical condition that could affect its operation. For example, certain cryogenic thermometers are likely to undergo calibration shifts if their leads are soldered. Optical devices may go out of alignment if jarred. Signal transformers can become noisy and nonlinear if a d.c. current is passed through their windings. Calibration and validation should also be considered if an instrument or device has been used by others.

Calibration arrangements should be reasonably easy to use. The properties of apparatus can change as it ages. For this and other reasons, it will probably be desirable to carry out calibrations on a regular basis (see also Section 1.4.2). An illuminating historical example in which this essential operation was neglected, at considerable cost to the experimenter involved, is discussed in Section 14.11.
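A linearity check of the kind described above is easy to automate. The following sketch fits an instrument's readings against known applied voltages and examines the residuals; the gain error, offset, and noise level are invented values, used purely for illustration:

```python
# Sketch: validating the linearity of a voltage-measurement chain by
# fitting measured vs. applied values to a straight line and examining
# the residuals. The "instrument" below is simulated with invented numbers.

import numpy as np

applied = np.linspace(0.0, 1.0, 11)            # known source voltages (V)
measured = 1.002 * applied + 0.0005            # what the instrument reports
measured += np.random.default_rng(0).normal(0, 1e-4, applied.size)

# Least-squares straight-line fit: measured = gain * applied + offset
gain, offset = np.polyfit(applied, measured, 1)
residuals = measured - (gain * applied + offset)

print(f"gain = {gain:.4f}, offset = {offset * 1e3:.3f} mV")
print(f"max residual = {np.max(np.abs(residuals)) * 1e6:.1f} uV")

# A kink or hysteresis shows up as systematic structure in the residuals,
# not just random scatter; plotting residuals vs. applied voltage is far
# more revealing than plotting measured vs. applied directly.
```

In a real calibration, the `measured` array would of course come from the instrument under test, driven by a calibrated variable source with known behavior.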
14.4 Control experiments

One common source of errors in scientific research is that variables other than the ones under study may influence the outcome of an experiment [1]. Control experiments can be used to clarify matters when a given experimental result can have more than one explanation. Consider an experiment that involves observing the effects of changing a parameter on some object of study. For example, one might be investigating the effect of a magnetic field
on the electrical resistivity of a metal sample. It is possible that any changes that are seen may not be caused by variations in the chosen parameter, but by some other variables. For instance, in the example, one may observe correlations between the resistance of the metal and the magnetic field in which it is placed. However, it may be that the magnetic field actually has little influence, and the observed changes in resistance are really the result of changes in local temperature caused by heat generated by the device (e.g. a solenoid) used to create the field. To identify effects of this kind, one makes use of control experiments.

A control experiment is an auxiliary experiment, which in almost all respects is identical to the primary one. An object (similar or identical to the one under study) is subjected to exactly the same conditions, except for the parameter being investigated, which is not allowed to change. In the example, a control experiment could be done by replacing the solenoid with a device that is identical to it, except for being wound non-inductively. Such a non-inductive device would produce no magnetic field when a current was passed through it. Nevertheless, the apparatus as a whole should behave in other ways as it would if an ordinary solenoid were present. (For example, heat would be produced by the non-inductive device.) In this case, the sample could either be the one being studied in the primary experiment, or another with the same properties. This strategy would enable one to determine that the magnetic field was not the cause of the observed changes in sample resistance.

Control experiments can also be employed to reveal the presence of variable systematic errors. For instance, in the example, one might have a situation in which the change in resistance due to temperature variations is a small but significant effect relative to changes caused by the magnetic field.
An important concern during the design of a control experiment is that it might be difficult to ensure that it really is identical to the primary one, except for the static character of the parameter under study. For instance, in the example, it may be that the resistance of the sample appears to change, not because of magnetic field or temperature effects, but because the power supply used to provide current to the solenoid generates high-frequency noise. This noise is inductively coupled into the sample wires via the solenoid, and creates a d.c. offset in the measurement electronics due to audio rectification effects (see page 372). If the regular solenoid is replaced by a non-inductive one, this magnetic coupling of high-frequency noise will vanish. One might then mistakenly conclude that the d.c. magnetic field acting directly on the sample really is responsible for the apparent resistance changes.

Another potential problem is that it may be difficult to hold the parameter under investigation constant. For instance, in the field of gravitational wave detection (see Section 14.11), the sources of these waves (which are distant extraterrestrial objects) cannot be "switched off." Nor is it possible to shield a detector from them. Hence it is difficult (although, surprisingly, not impossible) to devise a completely satisfactory control experiment in this area of research. In the case of gravitational wave detection, this problem is discussed in Ref. [2].

The importance of control experiments is widely understood in biological and medical investigations. In such cases, the inherent complexity of the systems under study means that it is generally extremely difficult to be certain that all the variables that might affect
the outcome of an experiment have been identified and held constant. However, control experiments can also be beneficial in the physical sciences (where, perhaps, they are often insufficiently used). Even in these areas, one can be mistaken about the true cause of a variation in an experiment, and it may be difficult to be sure that all systematic errors have been accounted for.
14.5 Failure of auxiliary hypotheses as a cause of failure of experiments

Experiments more often fail (i.e. give negative results) not because of failure of the main hypothesis (i.e. the hypothesis that the experiment is intended to test), but because one or more auxiliary hypotheses fail [1]. Auxiliary hypotheses are those that must be correct in order for the experiment to work, but which are not the primary item of interest. These auxiliary hypotheses are usually just accepted as being true by the experimenter. The failure of a primary hypothesis may count as a genuine experimental achievement, but failure of auxiliary ones means that one has to rethink the experiment.

For instance, some attempts to measure the charge on the electron using the oil-drop technique, but with mercury instead of oil, yielded erroneous results because the drops formed oxides or other coatings that changed their average density. An auxiliary hypothesis in this case was the assumption that the density of the drops was the same as that of bulk mercury, which turned out to be false. In fields of research that involve the study of material samples, problems often arise because the assumption that the samples have the composition attributed to them is actually incorrect (see Section 14.8).

It is therefore very desirable to explicitly list the auxiliary hypotheses that are involved in a proposed experiment, and how the accuracy of these could affect its outcome, before proceeding.
14.6 Subconscious biases as a source of error

14.6.1 Introduction

Prejudice is a very common cause of incorrect experimental work [1]. Unlike other sources of error, it is not widely appreciated, and is generally not accounted for when conclusions are being drawn about experimental results. The forces behind subconscious biases (e.g. the desire to get a job, support a strongly held belief, avoid losing face, etc.) are, of course, often extremely powerful. Nobody is immune to such influences, and even respected scientists have succumbed to them. (See, for example, the discussions concerning polywater and the detection of gravitational waves in Sections 14.8.2 and 14.11. A compilation of historical examples of subconscious biases in physics can be found in Ref. [3].) Therefore it is
necessary to make a conscious effort to ensure that one's personal prejudices cannot affect the outcome of an experiment. It may even be desirable to explicitly include these as an alternative hypothesis when analyzing experimental results.

Subconscious biases can enter into scientific investigations in a multitude of ways, and at many different levels. In some situations the forms they take can be relatively subtle. For example, if large lists of numbers are written down, the values of any erroneous digits will usually be influenced by personal preferences [1]. At the opposite end of the scale, the intrusion of subconscious biases can sometimes be very blatant. For instance, if one set of results supports a favored hypothesis, and another (equally valid) undermines it, the first may be accepted, while the second (perhaps with a little help from some special pleading) is rejected, or perhaps just forgotten. In psychology, this sort of tendency is called confirmation bias. It is a very common aspect of everyday human behavior. A number of studies have established that preliminary hypotheses formed by people on the basis of early meager and low-quality data interfere with their interpretation of more plentiful and higher-quality data later on [4]. For these reasons, scientists should always consciously search out, and pay close attention to, experimental data that might invalidate their beliefs.
14.6.2 Some circumstances in which the effects of biases can be especially significant

Subconscious biases are likely to be especially troublesome for those experiments in which the signal-to-noise ratio is low, the statistical significance of the results is marginal, or the effects of experimental error are especially significant. (Nevertheless, it must be emphasized that subconscious biases can cause serious distortions even in situations involving high signal-to-noise ratios, and so on.) The collection of data is usually automated, so subjective errors from this process are not normally important. However, they can be if, for example, meter readings are taken and written down manually. In these cases, such errors are likely to be particularly problematic when the data can be directly interpreted in the context of a theory in which the experimenter has a stake [1].
14.6.3 Subconscious biases in data analysis

Data analysis is a process that is highly vulnerable to subjective effects. Specifically, choosing the way in which a data set is to be analyzed after the data have been gathered and examined is likely to introduce personal biases. One example is the practice of rejecting experimental results on the basis of post-hoc criteria. For instance, it is sometimes necessary to remove wild points (or "outliers") from a set of data that might erroneously affect the conclusions. This is a particularly serious issue when least-squares curve-fitting techniques are being used in the analysis, because they implicitly assume that the distribution of errors has a Gaussian form. Hence, very deviant points have an unduly large influence on the shape of the fitted curve.

The process of deciding which points qualify for removal can be a source of problems. This is not an issue if it is obvious why the wild data were created (if, for example, the
apparatus was disturbed, or there was a loss of mains power during the experiment). In such cases the decision about removing the data is likely to be completely objective.2 However, if the origins of the outliers are unclear, and one then begins to hunt around for particular reasons to remove them, it is likely that prejudice will enter into the decisions.

Another example of the post-hoc selection of data analysis methods is the practice of subjecting a particular set of data points to a series of different operations (such as frequency filtering or time differentiation), on an arbitrary basis, until a feature emerges. This is perhaps then used to support some hypothesis. Such an approach is sometimes referred to as statistical massaging.

Generally, procedures for analyzing data should be established before the data are collected. The selection and setup of a data-analysis procedure (e.g. choosing filter parameters, cutoff thresholds, etc.) should not be done using a set of data that is ultimately also going to be analyzed using the procedure. In the case of removing wild data, a suitable approach would involve setting up criteria for rejecting wild data points prior to the analysis (or after seeing the first wild points), and then applying these uniformly to all the data, regardless of the outcome. This can be done automatically in the data collection or analysis software. If a curve is to be fitted to the data, an effective objective method for dealing with outliers is to use a robust parameter-estimation technique, rather than the usual least-squares method (see Ref. [5]). A general discussion of rejecting observations can be found in Ref. [1].

Visual methods of data analysis are particularly susceptible to subjective influences. One example is discussed in the section on early searches for gravitational waves (see Section 14.11.3). In this case, "signals" were visually identified in a chart recording of what amounted to nothing more than random noise.
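A fixed, uniformly applied rejection criterion of the sort described above might be sketched as follows. The 5σ threshold, the robust scale estimate based on the median absolute deviation, and the synthetic data are all illustrative choices rather than a prescription:

```python
# Sketch: applying a pre-registered outlier criterion uniformly to all
# data before a least-squares fit. The data and the 5-sigma cut are
# invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)
y[7] += 5.0                      # one wild point (e.g. a mains glitch)

# Robust scale estimate: median absolute deviation of residuals from a
# first-pass fit, scaled to an equivalent Gaussian standard deviation.
p = np.polyfit(x, y, 1)
resid = y - np.polyval(p, x)
sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))

keep = np.abs(resid) < 5 * sigma          # the fixed, pre-agreed criterion
slope, intercept = np.polyfit(x[keep], y[keep], 1)

print(f"rejected {np.count_nonzero(~keep)} of {x.size} points")
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
```

Because the criterion is fixed in advance and applied regardless of the outcome, the same points are cut whether or not their removal happens to favor a particular hypothesis.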
Another situation of this type may arise while determining the goodness-of-fit of a function to a set of data points. When done correctly, curve fitting should involve the generation of a numerical value for the goodness-of-fit (e.g. χ²) [5]. However, the procedure is likely to be misleading if the quality of the fit is confirmed merely by looking at a graph of the data and fitted function, and noting a visual correspondence. (This practice is derisively referred to as chi-by-eye.)

A discussion of subconscious biases in data analysis, with historical examples, can be found in Ref. [3].
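For a fit with known measurement uncertainties, the numerical figure of merit is straightforward to compute, as in this sketch (the straight-line model, data, and uncertainty are all invented for illustration):

```python
# Sketch: replacing "chi-by-eye" with a numerical goodness-of-fit figure.
# Synthetic data with a known, constant measurement uncertainty.

import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 20)
sigma = 0.05                                   # known measurement uncertainty
y = 3.0 * x + 0.5 + rng.normal(0, sigma, x.size)

a, b = np.polyfit(x, y, 1)                     # fit y = a*x + b
chi2 = np.sum(((y - (a * x + b)) / sigma) ** 2)
ndf = x.size - 2                               # points minus fitted parameters

# For a correct model with well-estimated errors, chi2/ndf should be close
# to 1, with a spread of roughly sqrt(2/ndf).
print(f"chi2 = {chi2:.1f} for {ndf} degrees of freedom")
print(f"reduced chi2 = {chi2 / ndf:.2f}")
```

A reduced χ² far above 1 signals a poor model or underestimated errors; one far below 1 usually signals overestimated errors — neither of which is reliably visible on a graph.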
14.6.4 Subconscious biases caused by social interactions

Sometimes, social interactions amongst scientists can lead to collective prejudices (or mass delusions). This is, of course, not a peculiarity of the scientific process. Similar behavior can be seen in ordinary human activities, and sometimes leads to phenomena such as stock-market bubbles. One example of collective scientific prejudices is the mistaken discovery of polywater, as described in Section 14.8.2. In this case, the prejudice was the belief that sample contamination was not the likely cause of the observed behavior. Interestingly, the exponential rise (and subsequent exponential fall) in the number of papers on polywater published per year turns out to be strikingly similar to the growth and decline of infections observed during an epidemic [6]. This is unlike the behavior observed in a normal area of scientific research that is undergoing growth or decline.

Similar, albeit more subtle, behavior sometimes takes place when very precise measurements are made [7]. Suppose that an established measurement happens to be inaccurate, perhaps because of a systematic error that has not been accounted for. Other measurements of the same property made by different researchers not long afterwards will have a tendency to be unreasonably close to this value (in light of their claimed measurement errors, and better measurements made at a much later date). This can happen because very precise measurements will, at least in the initial stages of the investigations, involve many experimental uncertainties. Other workers may measure a value that is considerably different from the established one. Nevertheless, the opportunity exists for them to search for various systematic errors in their measurements, and subtract them one by one, until their own values come close to the accepted result. Then they are inclined to stop searching. In this way, precise measurements of some physical quantity made by different investigators may be close to one another, yet all in error by a considerable amount. This phenomenon has been called "intellectual phase locking" and "the bandwagon effect."

For example, this happened following a historic, but flawed, measurement of the speed of light made in the 1930s [7]. One of those who were involved with the measurement (A. A. Michelson) was a man with very considerable prestige in the field. It was therefore easy for subsequent workers to be swayed by a result bearing his name. The large error in the accepted value of the speed of light was recognized only in the 1950s, with the advent of improved techniques for making the measurement.

2 The use of environmental monitoring devices can sometimes be beneficial here. For example, an accelerometer may be used to determine whether apparatus has been disturbed, and a power disturbance analyzer can sense mains-power anomalies.
14.7 Chance occurrences as a source of error

Mistaken conclusions are often made because insufficient attention has been given to the possibility that some event has occurred by chance [1]. This is a particular problem if the number of data points involved in making the deduction is small, and insufficient attention has been given to the proper use of statistics and experimental controls (see Section 14.4).
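As a minimal illustration, consider estimating whether an apparent coincidence between two detectors could plausibly be accidental. The sketch below uses the standard accidental-rate estimate; the event rates, coincidence window, and observation time are invented for the example:

```python
# Sketch: a quick check of whether an apparent coincidence could be due
# to chance. Suppose two detectors each log background events at a known
# rate, and one pair of events within 1 s of each other is seen during a
# day of data. All rates and times below are invented for illustration.

from math import exp

rate_a = 0.01          # events/s in detector A (background)
rate_b = 0.01          # events/s in detector B (background)
window = 1.0           # s, coincidence window (+/- 1 s)
duration = 86400.0     # s, one day of observation

# Expected number of accidental coincidences: each A event opens a 2*window
# interval into which a B event may fall by chance.
expected = rate_a * duration * rate_b * 2 * window

# Poisson probability of at least one accidental coincidence:
p_at_least_one = 1 - exp(-expected)
print(f"expected accidentals: {expected:.1f}")
print(f"P(>=1 by chance) = {p_at_least_one:.2f}")
```

With these (invented) rates, roughly 17 accidental coincidences are expected per day, so a single observed coincidence would be entirely unremarkable; a claimed detection would need to stand well above this chance background.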
14.8 Problems involving material samples

14.8.1 Introduction

In branches of science that entail the study of material samples (e.g. of metal alloys or chemical compounds), erroneous results often arise owing to ignorance about their true character. This is a very general and insidious problem, which affects research in a number of scientific fields. Physicists seem to be particularly susceptible to this difficulty [1],
perhaps because the creation of materials is not a part of the traditional physics culture (in contrast with, e.g., chemistry). For example, consider the following.

(a) A sample may be contaminated. This common problem can occur, for instance, because the sample has picked up foreign matter from the container used in its preparation (e.g. a crucible, or a test tube). Alternatively, the raw materials used to create the sample may be impure. (In the case of raw materials from commercial suppliers, perhaps only certain types of impurity are counted in the total impurity level indicated on the label.) In some measurements, surface contamination can be an important issue. This may take place as a result of improper handling or storage. Secondary sample-preparation operations, such as polishing the sample or cutting it to the required size, can also cause this problem.

(b) Unbeknownst to the experimenter, the process used to create the sample may have actually produced a different material (having a different chemical composition or structure) from the one intended. This can occur, for example, because the phase diagram used in the creation of the sample is incorrect or misleading. Alternatively, it is possible that parameters that influence the formation of a material (e.g. temperature or pressure) are not controlled properly when a sample is made.

(c) The sample may degrade during storage. This can take place, for example, because the sample oxidizes in the air, or is naturally unstable, and changes in composition or structure over time.

(d) For the purposes of an experiment, it may have been assumed that the sample is homogeneous, whereas it turns out not to be. For instance, instead of being completely solid, a sample of some metal may contain bubbles that have resulted from it being cast in air. Alternatively, inclusions may be present in the sample as a result of a phase separation that has taken place during cooling and solidification of the molten precursor.
Such problems can sometimes have very serious consequences. Not too infrequently, “important discoveries” are made and publicized that later turn out to be artifacts of sample imperfections. (These are occasionally referred to as “junk effects.”)
14.8.2 The case of polywater

One infamous example of this occurred in the field of chemistry [6]. In 1962, an eminent Soviet physical chemist by the name of Boris Derjaguin (and others working at his institute) began reporting that water which had been condensed in fine glass capillaries3 could exhibit very unusual properties. These included a high viscosity, peculiar behavior upon melting, and a strange Raman spectrum. This behavior led some researchers to conclude that a new phase of water had been discovered.

Eventually, news of this discovery reached the West. This led to an exponential growth in the rate of research publication on the subject, which ultimately reached a peak of almost 200 publications per year in 1970 [8]. Only tiny amounts of this new substance, called "anomalous water," could be produced. Nevertheless, researchers began to speculate about its possible nature, and surmised that it was a polymerized form of water. The substance was therefore subsequently given the more scientific-sounding title "polywater." On the basis of theoretical calculations, one group suggested possible molecular structures for it, as shown in Fig. 14.1. These diagrams, apparently based on very firm theoretical foundations, provided psychological support to the belief that a new form of water had really been discovered.

Fig. 14.1 Proposed structures of anomalous water (or "polywater"), arising from a theory based on ab initio and semi-empirical quantum-mechanical calculations. From L. C. Allen and P. A. Kollman, Science 167, 1443 (1970). Reprinted with permission from AAAS.

Not everyone was able to produce polywater. There were also numerous doubters, often of very high professional standing. Many scientists raised serious questions concerning the possibility of impurities in the water. Derjaguin himself, along with his co-workers, recognized from the beginning that contamination was going to be the most likely source of difficulties [9]. However, Derjaguin and other investigators who were able to make polywater stressed the pains that were taken to avoid this problem. For instance, they had shifted from using glass capillaries to using quartz ones, since glass was known to contain substances that could dissolve into the water [6]. It eventually became clear, however, after several years of work, that the substance that was being studied was not pure water, but water laced with contaminants from the quartz.

3 The capillaries had an internal diameter of between 2 µm and 50 µm and a length of about 1 cm.
Some spectroscopic analyses had indicated that polywater appeared to contain substantial amounts of human sweat [6,9]. Other investigations pointed to contamination by grease used to lubricate stopcocks in the vessels containing the capillaries [6]. Different studies supported the idea that impurities could be generated by dissolution of the walls of the quartz capillaries by the condensing vapor. Chemical analysis of polywater samples by using precision small-sample methods (which Derjaguin had not employed) generally revealed the presence of significant quantities of impurities, in addition to ordinary H2O. These included sodium ions, silicon, sodium acetate, carboxylates, potassium ions, sulfate, chloride ions, nitrate, borates, sodium lactate, and phospholipids [9]. (There was a very wide variation in the composition of the contaminants as determined by different laboratories.)

Evidence against the existence of polywater built up. In the end, even Derjaguin had to admit that polywater did not exist, and published the result of a detailed investigation (using small-sample analysis methods) that supported this conclusion. It is now accepted that the characteristics of polywater were mainly caused by the presence of large amounts of silica dissolved from the walls of the capillaries [6].

Hence, after more than a decade of effort involving over 400 scientists, the publication of hundreds of scientific papers, and the expenditure of millions of dollars (or roubles, pounds, etc.), "polywater" was revealed to be an artifact of contamination [6,8]. At least in some cases, the blunders were made by otherwise highly competent investigators. They were eager not to miss out on an important scientific advance, and neglected to look sufficiently closely at the most likely cause of the phenomena. It is clear that prejudice (in this case, ignoring the potential effects of contamination) was a very important factor in the widespread belief in the existence of polywater [6].
Excessive secrecy and lack of communication amongst the various research groups were other significant attributes of this episode. Although the polywater affair involved very small samples, contamination from sample containers (and other sources) is often a problem even for relatively large samples studied in scientific research.
14.8.3 Some useful measures

It seems likely that many errors caused by sample problems arise because the researchers who carry out the experiments are not the same as the ones who make the samples. In fact, they may not even work in the same laboratory, or even on the same continent. Hence, those who only make measurements may be unaware of the materials science that underlies the creation of samples, and of the many ways in which things can go wrong. It is essential for such workers to find out about sample preparation issues, the things that can happen to samples after they have been made, and what precautions to take. A familiarity with sample-characterization techniques,4 and access to facilities for doing sample characterization, is also very important.

It may be worth compiling a list of known sample pathologies, their origins and consequences, and techniques for detecting their presence. Historical examples should be included for each case, if possible. Such a list can be based on evidence from the research literature, PhD dissertations and similar unpublished work, and informal recollections of materials experts and experimentalists working in a particular field. The task of creating this list and keeping it current should fall to a single person within a research group (i.e. a problem-area coordinator, as discussed on page 22).

4 In the previous example, these would include electron microprobe analysis and mass spectroscopy.
14.9 Reproducibility of experimental measurements and techniques

14.9.1 Introduction

A central canon of the scientific method is that experimental results, if they are to be recognized as scientific facts, must be reproducible by independent investigators working in other laboratories. Unfortunately, even when the initial results are valid, it is sometimes not possible to do this. The reasons are sometimes fairly clear. Perhaps most frequently, workers unsuccessfully attempting the replication lack proficiency in experimental methods that (if not already known) should be easy to master using available information [10]. Alternatively, there may be unmistakable technical problems. For example, the signal-to-noise ratios of the instruments in laboratories attempting the confirmation may be insufficient, or it may not be possible to reproduce certain (easily measurable, and obviously relevant) physical conditions.

Sometimes it is not so clear why results cannot be reproduced. It may have to do with subtle differences in the design of the instruments (see Section 14.9.4), or with the environments in which they are located. An example of subtle environmental problems involves early efforts to reach ultralow temperatures (≈10⁻³ K) at Oxford's Clarendon Laboratory in the 1940s [11]. Following the relocation of the laboratory to a new building, it turned out to be impossible to reproduce very low temperatures that were easily attainable in the old one. The people and the equipment were the same, and for some time it remained a mystery as to why the previous temperature records could not be matched. Eventually the cause was traced to heating of the low-temperature parts of the apparatus as a result of vibrations. It turned out that the newer building, which was of a steel construction, transmitted vibrations much more readily than the old one, which was made of stone. This was the key factor.
It was not self-evident, since the effects of vibrations at ultralow temperatures were not known at the time. After anti-vibration measures were put in place, the previous low temperatures were easily reached. In other areas of research, similar problems are sometimes attributed to different environmental conditions, such as the presence of interfering electromagnetic fields, or impurities in the water. On other occasions, a lack of reproducibility is the result of insufficient description of the experimental technique and apparatus in the publication of the original results. This may be intentional. For example, the researchers may want to retain a lead in the field of research, and believe that withholding some information may prevent their competitors from
Experimental method
548
catching up too quickly. Sometimes journal page limitations make it impractical to provide a complete description of an experiment. (However, this problem can be circumvented by placing the extra information in an online depository – see Ref. [12].) Perhaps most often, detailed information on these points is not provided because the researchers believe that the extra effort that would be required is not likely to be rewarded by the scientific community.5
14.9.2 Tacit knowledge

At other times, it may be that it is too difficult to describe in words or through illustrations all the relevant details. Sometimes important steps in the preparation or running of an experiment have a very subtle character. The original experimenters themselves may not know which of the many features of the apparatus, and actions they have taken during the course of the experiment (some of which may have been done unconsciously), are vital to its success. Expertise that cannot be articulated is sometimes referred to as "tacit knowledge" [13].6 In everyday life, the method used to ride a bicycle is a good example of this. It is almost impossible to describe in words, or through illustrations, the actions that are needed to keep a bicycle upright and stable, and to steer it effectively. One learns how to ride a bicycle by watching others doing it, and then (most importantly) trying it repeatedly oneself. The playing of a musical instrument is another situation in which the possession of tacit knowledge is essential. On the other hand, expertise that can be described in words, or through illustrations, is referred to as "explicit knowledge." (There is no well-defined boundary separating the two types of knowledge.) In general, tacit knowledge is transferred by personal contact. The expertise involved in riding a bicycle is an extreme example of tacit knowledge, since much of it cannot be passed on even by personal contact, but must be learned through direct experience. Tacit knowledge is of fundamental importance in many skills, including those used in scientific research. (For instance, soldering is dependent on it.) An inability to replicate experiments can sometimes be the result of a lack of some particular tacit knowledge. This knowledge may be the product of considerable time and effort on the part of those who originally did the experiment, and may be far from obvious to the uninformed reader of the resulting publication.
14.9.3 Laboratory visits as a way of acquiring missing expertise

Probably the best way of obtaining tacit knowledge (and explicit knowledge that has not been communicated) is to organize an exchange of scientific personnel. Researchers from the laboratory where duplication is being attempted can visit the one that originally obtained the results (or built a particular apparatus), and vice versa. In this way, necessary hands-on skills, and a multitude of other information, can be transferred readily. This approach makes it practical to do the kind of real-time interactive investigations (e.g. asking "what is this material?" and "why do you do that in such a way?"), and exploration of the laboratory environment, that is difficult or impossible by other means. Close cooperation of this kind does require a good deal of mutual trust, which may be difficult to achieve in a highly competitive field of research. It may also require some swallowing of pride on the part of those seeking the information. These are often serious obstacles (but see Section 1.3.4.2).

5 NB: If the work has been done in a university laboratory, numerous details concerning experimental technique and apparatus can often be found in doctoral dissertations and the like. See the discussion on page 6.
6 The term "tacit knowledge" was coined by the Hungarian-British polymath Michael Polanyi, who carried out early investigations of the concept.
14.9.4 A historical example: measuring the Q of sapphire

A good example of how useful laboratory visits can be comes from the field of gravitational wave research (see Section 14.11.2) [14]. In one method for detecting such waves, a very large Michelson interferometer is set up in such a way that the lengths of the two arms will change perceptibly relative to one another if any gravitational waves in a certain frequency range should pass. This gives rise to changes in the optical interference between the light beams that traverse them. The ends of the arms comprise large mirrors that are suspended from fine threads or wires in a vacuum.7 These mirrors have internal vibrational resonances, which can be excited by thermal noise in the mirror material. It is desirable that the frequency range of these vibrations be small enough that they do not extend significantly into the range at which the gravitational waves are to be observed. To this end, the quality factor (Q) of the mirror resonances (which defines the width of the resonance peak) must be as large as possible. In order to achieve this, it is desirable to reduce vibrational energy losses from the mirrors (e.g. through friction within the mirror material, or in the suspensions) as much as possible. The general study of high-Q mechanical resonator materials was taken up by a group of researchers working in Moscow. This group determined the Q by causing a cylinder of the material to vibrate, and then measuring the amount of time needed for the vibration to decay to a certain level. The cylinder was suspended in a vacuum by a fine thread or wire, as shown in Fig. 14.2. They found that single crystals of sapphire appeared to offer the highest room-temperature Q-values – up to 4 × 10^8. These results first appeared in English in 1985 [15]. However, attempts to reproduce them by various teams in the West (in the USA, Australia, and the UK) met with no success.
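For reference, the ring-down method used by the Moscow group reduces to a short calculation: if a resonator of frequency f0 rings down with an exponential amplitude decay time τ, its quality factor is Q = π f0 τ. The sketch below uses made-up illustrative numbers; they are not values taken from Ref. [15].

```python
import numpy as np

def q_from_ringdown(f0_hz, t, amplitude):
    """Estimate Q from a ring-down record.

    Fits log(amplitude) = log(A0) - t/tau, where tau is the 1/e decay
    time of the vibration amplitude, then uses Q = pi * f0 * tau.
    """
    slope, _ = np.polyfit(t, np.log(amplitude), 1)
    tau = -1.0 / slope
    return np.pi * f0_hz * tau

# Illustrative (made-up) example: a 30 kHz mode whose amplitude
# falls by a factor of 1/e in 1000 s gives Q = pi * 30e3 * 1000.
f0 = 30e3
t = np.linspace(0.0, 5000.0, 50)      # seconds
amp = np.exp(-t / 1000.0)             # normalized amplitude envelope
print(f"Q = {q_from_ringdown(f0, t, amp):.3g}")
```

Fitting the logarithm of the envelope, rather than reading off a single decay time by eye, makes the estimate less sensitive to noise in any one part of the record.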
For many years, the highest Qs that the Western researchers could achieve were about 5 × 10^7. Although there was initially a certain amount of distrust of the Russian results, social conditions (partly having to do with the Cold War) had prevented a collaboration between the Russian and Western researchers. Eventually (in 1998), following a series of unsuccessful attempts to measure high Qs, workers from a Glasgow group arranged to visit Moscow in order to see for themselves how the results were obtained. This visit lasted for a week. Their observations of the care with which the Russian work was done, and the increased confidence in the personal integrity of the Russian experimentalists, convinced the Glasgow workers that the Russian results were genuine. (Although during their visit, they had not actually witnessed any measurements of the highest Q-values.)
7 In an actual working detector, several other parts of the interferometer would also contain such mirrors.
Fig. 14.2 [Figure: a sapphire cylinder suspended in a vacuum by a thread or wire from an overhead clamp.]
Experimental setup used for measuring the quality factor (Q) of sapphire (see Ref. [15]).

A factor of great importance in achieving the high Q-values was the way in which the sapphire crystals were suspended. The suspension materials that were used were critical. The Russians employed silk threads or polished tungsten wires for this purpose, whereas the Glasgow group had been using steel piano wire. The whole matter of selecting, preparing and installing these items on the sapphire and its overhead support structure was something of a black art. For example, the Russians had found that a fine thread made of Chinese silk gave better results than other types of silk thread. Tungsten wires gave still better results, but were harder to prepare and use. The degree of polishing of the tungsten wires was critical, and virtually impossible to describe. Both the silk threads and tungsten wires had to be lubricated, in order to reduce frictional energy losses that occurred when they rubbed against the sapphire. The Russians used pork fat for this purpose, whereas the Glasgow workers had been using "Apiezon" grease.8 (Later, after visiting Moscow, they switched to commercial lard.) The exact method of application of this fat was also very important. The way in which it had to be applied to the thread was very subtle, and too difficult to describe. Either "too much" or "too little" fat would significantly lower the Q of the system. Grease from the skin of the human body, if applied in the right way, could also be effective. (However, some people produced more effective and reliable grease than others.) The precise way in which the thread or wire was clamped to the overhead support structure was also a very subtle matter. This was particularly important for tungsten wires. It was necessary to make many minute adjustments to the clamp, over a period of days, in order to find the arrangement that allowed the correct Q to be determined.
(The Glasgow workers had been accustomed to spending only a few hours making each measurement.) The Glasgow workers could learn how to do these things, and (perhaps most importantly) about the amount of patience and care needed, only by watching their Russian counterparts. Effective communication on these issues purely by means of oral discussions, written
8 The importance of having a fatty film (such as pork fat) in order to provide lubrication is mentioned in Ref. [15]. However, the reasons for preferring this to other kinds of lubricant are not discussed. Apiezon grease is a type of commercial vacuum grease.
messages, diagrams and the like, was not feasible. Furthermore, following their visit, the Glasgow workers were able to make use of the fine Chinese silk thread (supplied by the Russian group) in their own experiments. After the Glasgow group visited Moscow, a member of the Moscow team visited the Glasgow laboratory for a week. Although success was not immediate, eventually (after another visit by the Russian worker) Qs of greater than 10^8 were measured in the West (in Glasgow) for the first time. This occurred in 1999 – some 14 years after the first English-language publication of the Russian high-Q results. This achievement was subsequently repeated in the USA (at Stanford) by one of the Glasgow researchers. The transfer of tacit and explicit knowledge by means of an exchange of personnel was critical to these successes. This is an extreme example of how tacit knowledge can play a pivotal role in the success of a research project. In this sense, it is probably atypical. The importance of such knowledge in science is not unusual, however. Also, the existence of explicit knowledge that is necessary for successful scientific work, but which never gets written up in scientific publications or other accessible places, is very common. Whenever duplication of experimental results or apparatus is proving problematic, it is always worth considering the possibility of paying a visit to the laboratory that originated them. Issues surrounding the replication of scientific experiments are discussed in Refs. [8] and [13].
14.10 Low signal-to-noise ratios and statistical signal processing

Marginal signal levels can present serious problems for the experimentalist. Increasing the signal strength, or reducing the background noise level, is often a difficult and expensive undertaking. For instance, it may be necessary to spend considerable time tracking down and eliminating vibrations or electromagnetic interference, making better samples, or upgrading instruments. Under such conditions, the use of statistical signal-processing methods for reducing noise9 might appear to be an easy and inexpensive way of improving matters. With the advent of low-cost, yet extremely powerful, desktop computers and widely available signal-processing and data-analysis software, this approach seems particularly compelling. Wiener filtering is an example of such a method. Suppose that a signal has been measured which is corrupted by noise. The Wiener filter algorithm allows one to estimate the true (uncorrupted) signal, if an estimate can be made of the power spectrum of the noise. (A discussion of this algorithm is provided in Ref. [5].) Some other methods for reducing the effects of noise are: wavelet thresholding, matched filtering, and spectral subtraction. Various forms of regression analysis are also used to separate signals from noise. Tempting though these methods may be, they should always be approached with caution. Their use generally involves making some assumptions, which can easily turn out to be incorrect. For example, one might be obliged to guess about the probability distribution of
9 The use of such methods is sometimes called "denoising".
the noise, the shape of its power spectrum (as above), or the form of the underlying signal. Furthermore, in using such methods, there is always the chance that subconscious biases will creep in (see Section 14.6). Some sort of bias will possibly be incorporated into the above assumptions. Even the selection of a particular method is a subjective process, and different methods can yield different results if the signal-to-noise ratio is small. Another very important consideration is that considerable expertise and effort may be required in order to make such methods work satisfactorily in practice. This is not to deny the utility of statistical signal-processing methods in certain applications. For example, they can be useful in preliminary work, to find out if there is any evidence at all for a particular effect, before making a more substantial effort. In some experiments, signal-to-noise ratios are naturally very low and extremely difficult to improve, and so the use of such methods is essential. However, in general the best approach for dealing with such problems is to increase the signal-to-noise ratio as much as possible by improving the experiment, before turning to statistical methods.
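To make the role of these assumptions concrete, the sketch below applies a simple frequency-domain Wiener filter to a sine wave buried in white noise. The noise power spectrum is assumed to be known (here it is estimated from a separate, signal-free record), and the signal spectrum is estimated by spectral subtraction; both steps are exactly the kind of guess discussed above. This is a minimal illustration, not the full algorithm of Ref. [5].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)
true = np.sin(2 * np.pi * t / 64.0)        # underlying "true" signal
noisy = true + rng.normal(0.0, 1.0, n)     # measured, noise-corrupted signal

# Assumption 1: the noise power spectrum, estimated from a signal-free record.
# In practice this estimate can easily be wrong, which is the danger
# discussed in the text.
noise_ref = rng.normal(0.0, 1.0, n)
N = np.abs(np.fft.rfft(noise_ref)) ** 2

# Assumption 2: the signal power spectrum, obtained by spectral subtraction.
Y = np.fft.rfft(noisy)
S = np.maximum(np.abs(Y) ** 2 - N, 0.0)

# Wiener filter: attenuate each frequency bin by H = S / (S + N).
estimate = np.fft.irfft(S / (S + N) * Y, n)

mse = lambda x: float(np.mean((x - true) ** 2))
print(f"mean-square error: noisy {mse(noisy):.3f} -> filtered {mse(estimate):.3f}")
```

A different (and equally plausible) choice of noise record, or of the way S is estimated, would give a different filtered result – which is why such output should never be treated as more trustworthy than the assumptions behind it.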
14.11 Some important mistakes, as illustrated by early work on gravity-wave detection

14.11.1 Introduction

Several examples of major blunders in experimental work can be found in the first investigations into the detection of gravitational radiation (or "gravity waves"), in the period from 1960 to 1975. This episode (as described especially in Ref. [16]) is highly illuminating, as it shows the right way and the wrong way of doing research in a number of important respects. The former includes: (a) understanding one's apparatus, preferably in a quantitative sense; (b) designing the experiment in such a way that the intrusion of subjective effects into the results can be avoided; (c) providing a means of calibrating and validating the apparatus; (d) including in published articles sufficient information about the apparatus, data-analysis methods, and the experiment as a whole, that others can assess the validity of the results and, if desired, attempt to reproduce them; and (e) being reasonably open to the exchange of ideas and information with fellow researchers working in other laboratories.
14.11.2 A brief outline and history

Gravitational radiation, which is predicted by the theory of general relativity, is a disturbance in the space-time continuum that takes place when bodies are accelerated. It may be thought of as the gravitational analog of electromagnetic radiation. Under normal circumstances (e.g. involving moving objects in a laboratory), such disturbances are always vanishingly
weak. They only become appreciable when the objects involved are massive celestial bodies, such as black holes or neutron stars, undergoing very large accelerations. The detection of gravitational radiation even from these objects is an extremely difficult task. It is so difficult, in fact, that despite vast amounts of effort that have been put into the problem over several decades, they have still not (as of the time of writing) been detected directly. (Although they have been shown to exist by an indirect experimental method.) In the early 1960s, a group led by J. Weber set up apparatus that they hoped would allow them to detect gravitational radiation from extraterrestrial sources. The design of the apparatus was their own, and a novel one. Its construction and debugging were a very major undertaking, which extended over five years. Many difficulties were encountered – partly because a gravitational wave detector had never been made before, and numerous technical problems had to be solved. Several years after its construction (and after building more detectors), Weber reported that he had succeeded in detecting a large number of events over a period of seven months [17]. The data, along with auxiliary supporting evidence, seemed to indicate that gravitational radiation from an extraterrestrial source had been detected. In his article, Weber took this point of view. For many, this was a surprising result. Furthermore, subsequent calculations by other workers about the expected gravitational radiation flux at the earth, resulting from very energetic astrophysical events, suggested that no such events could possibly have been large enough and close enough to account for what Weber was seeing [16]. Nevertheless, Weber's report, and others that followed, sparked the interest of physicists around the world.
The detection of gravitational radiation would have been a major discovery, and a surprising number of people were willing to believe that this had actually happened. Several experimental research groups started building their own gravitational-radiation detectors, to see if Weber's findings could be replicated. Yet, despite having apparatus in their possession with sensitivities that equaled, or even exceeded, the sensitivities of the ones used by Weber, these groups were unable to detect anything that could be reasonably attributed to gravitational radiation. After several years of unsuccessful efforts by these groups to verify Weber's results, and much controversy, certain flaws in the latter's techniques began to emerge. These facts, along with the presentation of a particularly careful analysis of their own and Weber's gravitational-radiation detectors by another research team ([18,19]), more or less convinced the gravitational-radiation research community that Weber had not actually detected any genuine signals. Afterwards, Physical Review Letters (the primary organ for Weber's results) no longer accepted his papers for publication. Since that time, similar detectors have been constructed by other research groups, which provide over 4000 times greater sensitivity than the ones built by Weber in those years [16]. Nevertheless, these devices have still not been able to sense gravitational waves.
14.11.3 Origins of the problems

Weber's erroneous claims appear to have had a number of causes. Of relevance to the present discussion are the following (see Ref. [16]).
(a) Weber did not seem to fully understand, in a quantitative way, his experimental system. Moreover, the apparatus, and indeed the whole experimental technique, was configured in such a way as to make it unnecessarily difficult to understand precisely what was going on. The system included a 1.5-ton aluminium bar, which was supposed to be set into vibrational motion if it was exposed to gravitational radiation. The bar was suspended in a vacuum, and isolated from ground vibrations and other disturbances. The system also contained piezoelectric transducers that were attached to the bar (to convert the motion of the bar into electrical signals), a low-noise amplifier, and some analog filters and detection electronics. The latter was basically a nonlinear (square-law) electronic device that rectified the signals. At least initially, the results were monitored on a strip-chart recorder, so that they could be analyzed visually. Weber's use of highly subjective and ad-hoc visual data analysis methods was itself a major shortcoming. Such an approach may be defensible in the case of experiments that have demonstrably high signal-to-noise ratios. However, in the present situation, in which any signals will almost certainly be competing directly with noise, it invites the entry of subconscious biases and other misleading effects into the analysis. Subconscious biases in particular can be a very insidious and dangerous cause of errors in research (see Section 14.6). (The correct approach is to employ a computer for this purpose, and apply the data-analysis technique consistently to all the data.10) Weber did not seem to appreciate how his combination of aluminum bar (which acted as a narrowband mechanical filter), electrical filters and nonlinear detector could take random thermal noise in the system and convert it into correlated peaks on the recorder paper. These could look as if they were genuine gravity-wave signals (see Fig. 14.3).
The original thermal noise would have consisted of featureless incoherent voltages, which look somewhat like grass when displayed on an oscilloscope. It is not likely that these would be mistaken for real signals. Moreover, some isolated peaks caused by thermal noise would occasionally be large enough to stand far above the background, further cementing the impression that something was there. That is, chance occurrences arising from statistical effects could produce what would appear to be a very convincing “event.” (In this case, the long-term statistics are those of the Boltzmann distribution governing the thermal energy in the system.) The disregarding of the possibility of chance occurrences is a common mistake in scientific research (see page 543).
10 It must be emphasized that there is nothing wrong with examining raw data visually. In fact, this is often desirable, and can be a very useful means of identifying false anomalies (e.g. outliers), and other flaws in an otherwise good data set. Such anomalies can cause major errors if fed blindly into a mechanical data analysis algorithm. Alternatively, visual examination of raw data may reveal interesting results that may otherwise be obscured (e.g. smoothed over) by a mechanical data-analysis technique. (It has been remarked that pulsars would not have been discovered if the radio astronomers who were responsible had turned the task of analyzing their data over to a computer [11]. The discovery was actually made by visually inspecting chart-recordings.) But the final analysis should always be done objectively and consistently, and this is normally best achieved by using an automated arrangement.
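The way a narrowband filter followed by a square-law detector can dress random noise up as slow, signal-like excursions is easy to demonstrate numerically. The following toy model (it is not a simulation of Weber's actual electronics) passes white noise through a narrow spectral band, squares it, and smooths it as a chart recorder would; the output stays correlated over long stretches, so isolated peaks stand well above the background and look convincingly like events.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1 << 16
white = rng.normal(size=n)                 # broadband "thermal" noise

# Narrowband filter (the role played by the resonant bar): keep only a
# thin slice of the spectrum around one frequency.
spec = np.fft.rfft(white)
f = np.arange(spec.size)
band = (f > 2000) & (f < 2020)             # toy passband
narrow = np.fft.irfft(np.where(band, spec, 0), n)

# Square-law ("energy") detection, then smoothing as in a chart recorder.
energy = np.convolve(narrow ** 2, np.ones(256) / 256, mode="same")

# The raw white noise decorrelates immediately; the detected narrowband
# noise stays correlated over many samples, producing slow excursions.
def ac(x, lag):
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

print(f"white-noise autocorrelation at lag 100:    {ac(white, 100):+.3f}")
print(f"detected-noise autocorrelation at lag 100: {ac(energy, 100):+.3f}")
```

Plotting `energy` shows exactly the kind of isolated, slowly varying peaks described in the text, even though the input is pure noise.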
Fig. 14.3 Strip-chart recording of gravitational-wave detector system noise. The voltages at times of less than 6 h are caused by a combination of thermal noise in the aluminium bar and amplifier noise. The ones at times greater than 6 h were caused by amplifier noise. Adapted with permission from Fig. 2 of: J. Weber, Phys. Rev. Lett. 17, 1228 (1966). Copyright (1966) by the American Physical Society.
Furthermore, a nonlinear element in the signal path is liable to introduce excess noise owing to the mixing of thermal noise and interference voltages at frequencies other than the one used by the apparatus. As well as reducing the sensitivity of the apparatus, this arrangement effectively made it impossible to determine the strength of any gravitational radiation signals. That is, it was only possible to know whether or not a signal was present, but not how large it was. (Effectively, useful information was being thrown away.) Of course, this greatly reduced the ability of Weber, and other researchers, to make sense of his results. The experimental systems used by other researchers employed linear detectors, which would not have presented these problems. A mathematical model of Weber's experiment was eventually developed by another group, using scraps of information that he had provided in his publications [19].

(b) Very significantly, Weber initially made no provisions to allow his apparatus to be routinely calibrated and validated [2,13]. Other researchers employed an arrangement that permitted the aluminum bar in their apparatus to be set into vibrational motion in a highly controlled way. This was done by creating a brief force on the bar by applying a voltage pulse between it and a nearby metal plate of known area. The resulting electrostatic interaction between the bar and the plate could be easily calculated. This arrangement made it possible to test the entire system (including the bar, transducer, electronics, and data-analysis software), in order to make sure everything was working according to expectations, and to determine the sensitivity. The sensitivity of a gravitational wave detector is, of course, a parameter of fundamental importance.
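The force in an electrostatic calibrator of this type is just the attraction between the plates of a parallel-plate capacitor, F = ε0 A V^2 / (2 d^2), so the impulse given to the bar by a pulse of known amplitude and duration can be computed directly. The numbers below are purely illustrative; they are not the parameters used in Refs. [18,19].

```python
EPS0 = 8.854e-12  # permittivity of free space, in F/m

def electrostatic_force(voltage, plate_area, gap):
    """Attractive force between parallel plates: F = eps0 * A * V^2 / (2 d^2)."""
    return EPS0 * plate_area * voltage**2 / (2.0 * gap**2)

# Purely illustrative values: a 1 kV pulse, a 100 cm^2 plate, a 1 mm gap.
force = electrostatic_force(1000.0, 0.01, 1.0e-3)   # newtons
impulse = force * 0.01                              # 10 ms pulse, in N*s
velocity = impulse / 1500.0                         # imparted to a 1.5-ton bar

print(f"force = {force:.3g} N, bar velocity = {velocity:.3g} m/s")
```

Even a kilovolt-scale pulse imparts only a minuscule velocity to such a massive bar, which is why this scheme makes a well-controlled stand-in for the extremely weak excitations being sought.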
In forgoing the use of this convenient and easy-to-interpret calibration arrangement, not only did Weber give up the possibility of knowing this, but he also relinquished the ability to compare his apparatus with those of other groups. Furthermore, as discussed below, a computer-based data-analysis scheme that he
eventually implemented turned out to be flawed. Since he was not able to test his system in this way, a method that would have allowed the computer problem to be easily detected at an early stage was not exploited. Weber eventually did have one of his students calibrate his apparatus by using the electrostatic technique. He also allowed researchers from another group to come to his lab and perform a calibration. This enabled them to confirm that their equipment was at least as sensitive as Weber's own, and therefore fully capable of detecting any gravity waves that might have been present.

(c) When a computer-based data-analysis scheme was eventually implemented, the data-analysis program that was written for this task contained an error [16]. Furthermore, the move to a computer method failed to eliminate subjective biases. In order to eliminate the possibility that spurious peaks due to chance events would be identified as gravitational radiation, a second aluminum-bar detection system was constructed at a considerable distance (1000 km) from the first. Initially, the data from the two detectors were recorded on strip-charts and examined visually for near-simultaneous events. These would strongly suggest that gravitational radiation had actually passed both detectors. In principle, the implementation of this arrangement was an important step forward. Numerous coincident events were subsequently identified. There existed a remarkable correlation between these events and sidereal time, which supported the notion that they were caused by gravity waves of extraterrestrial origin. These results were probably largely responsible for convincing other researchers to build their own detectors. Later on, Weber arranged for the data from the two systems to be digitized, stored on magnetic tapes, and analyzed by a computer. This was possibly done in response to frequent complaints from other investigators about the intrinsic unreliability of his visual analysis methods.
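A two-detector coincidence search of the sort just described, together with the expected rate of purely accidental coincidences (for two uncorrelated event streams with rates r1 and r2 and a coincidence window of ±τ, this is approximately 2 r1 r2 τ), can be sketched as follows. The code is an illustrative sketch, not Weber's algorithm.

```python
import bisect

def count_coincidences(times_a, times_b, window):
    """Count events in stream A that have at least one event in stream B
    within +/- window seconds. Both lists must be sorted."""
    count = 0
    for t in times_a:
        i = bisect.bisect_left(times_b, t - window)
        if i < len(times_b) and times_b[i] <= t + window:
            count += 1
    return count

def accidental_rate(r_a, r_b, window):
    """Expected rate of chance coincidences between two uncorrelated
    Poisson event streams: approximately 2 * r_a * r_b * window."""
    return 2.0 * r_a * r_b * window

# Illustrative event times (seconds); two pairs fall within 0.5 s.
a = [10.0, 55.2, 120.7, 300.0]
b = [10.3, 80.0, 120.9]
print(count_coincidences(a, b, 0.5))
print(f"{accidental_rate(0.01, 0.01, 0.5):.1e} chance coincidences per second")
```

Comparing the observed coincidence count against the accidental rate is precisely the safeguard against the chance-occurrence mistake discussed in the text; widening the window or adjusting the event threshold after the fact inflates both.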
Many coincident events were subsequently detected using this new arrangement (many more than with the chart-recorder scheme), and a later one that also used a computer technique. However, sidereal correlations were no longer reported. Unfortunately, the program that was used to analyze the data had an error, which was associated with the way that the information was stored on magnetic tapes. This was responsible for identifying at least some, and perhaps all, of the excessive number of coincident events. Weber had ignored clues (external to the program itself) that something was amiss. Hence, along with the detection system itself, the data-analysis program (which was developed by or for Weber) was not properly understood. The error was finally discovered by a researcher from another laboratory, who had been able to obtain, and analyze for himself, Weber's magnetic tapes. After the bug was reported to Weber, the program was corrected, and the very large number of coincident events no longer occurred. Yet, there were still problems. When Weber switched over from his chart-recorder scheme to a computer-based arrangement, he passed up the opportunity to use a completely objective and consistent data-analysis method. Weber tinkered with the criteria employed by the computer algorithm to identify coincident events, using the same
magnetic tape, in order to find those that he found most useful. This increased the probability that an event would be found that was merely due to chance. Effectively, he transferred his personal biases into his data-analysis program. This resulted in the identification of spurious coincident events. In one notorious case, these were found under circumstances in which (as it later became obvious) gravitational waves could not possibly have been the cause [2,16]. Weber's conclusions appear to have been, to a large degree, the result of prejudice – an almost unshakable a priori belief that his apparatus was capable of detecting gravity waves. There is little evidence that this belief derived from well-founded calculations of the properties of his apparatus and the likely strength of gravity waves from plausible sources. His chosen experimental design, methods of analyzing data, and his errors, made it relatively easy for him to sustain this belief. It probably did not help that Weber had a highly (probably excessively) individualistic, and also somewhat secretive, way of doing things. He did not like to share his raw data, or specifics of his statistics and experiments, with other investigators [2]. Detailed descriptions of his experimental methods and data-analysis technique were sparse, and sometimes presented unclearly in his publications [16]. Nor was he very receptive to the (very sensible) suggestions from other investigators for improving such arrangements (e.g. changing his square-law detection system to a linear one) [13].
14.11.4 Conclusions Hence, after the expenditure of considerable money, effort, and time, the efforts of other researchers to reproduce Weber’s results resulted in disappointment. (These independent investigations lasted several years, and the construction of the detectors was expensive.) It was not entirely Weber’s fault. Other researchers apparently did not look closely at Weber’s publications, and notice the shaky foundations of his claims [16]. These publications contained very little information on the experimental apparatus or the data-analysis methods. In light of the low probability that Weber’s apparatus would be sensitive enough to detect gravity waves from plausible astrophysical events, it would have been sensible for other researchers to demand to see more details. (Extreme claims should be supported by extremely strong evidence – see pages 29–30.) The drive by other investigators to build detectors and get results was similar in some ways to the speculation that occurs during a stock-market bubble. They were overly keen to observe gravitational radiation for themselves, and did not make the effort to carry out a thoughtful preliminary evaluation of the information that had been presented to them. One should not be too hard on Weber, since his mistakes have been amply discussed in the literature. He appears to have been an ingenious scientist in many respects. For instance, he introduced the use of widely separated multiple-detectors as a way of determining whether an event was caused by spurious phenomena. This technique is still used in the latest detection systems. However, there is little doubt that in tackling (virtually alone) the problem of detecting gravitational radiation, he was out of his depth. There was another
Experimental method

558

Fig. 14.4  Pressure and temperature as a function of time in a low-temperature cell containing liquid He3 and solid He3 ice. The small kinks in the curve at A, B, A′ and B′ correspond to transitions between normal and superfluid phases in the liquid helium. Figure reprinted with permission from Fig. 2 of: D. D. Osheroff, R. C. Richardson, and D. M. Lee, Phys. Rev. Lett. 28, 885 (1972). Copyright (1972) by the American Physical Society. [Figure not reproduced here: it plots Pmax − P(t) (in units of 10^-2 atm, 0–15) and approximate temperature (roughly 2.65–5.0 mK) against time (0–70 minutes), with the features A, A′, B, B′, C, and D marked on the curve.]
major positive outcome of his work. He was largely responsible for inspiring the worldwide development of gravity-wave detectors, which remains an active area of research. A sympathetic account of Weber and his work can be found in Ref. [2].
14.12 Understanding one's apparatus and bringing it under control: the example of the discovery of superfluidity in He3

The pursuit of quality in experimental technique, even if done almost for its own sake, can yield large dividends. Efforts to improve the behavior of apparatus and standards of measurements are seldom wasted. Through these, one may be able to avoid prematurely rejecting or accepting hypotheses. In some situations, they may pave the way for important discoveries. One example of such a case comes from the field of low-temperature physics. This involved work on the properties of a rare isotope of helium (He3) in the low millikelvin temperature range. The experiments were carried out by D. D. Osheroff, R. C. Richardson, and D. M. Lee at Cornell University in the early 1970s [20]. During measurements of pressure versus time in a sample of He3 in the low millikelvin temperature range, one of the team (Osheroff) noticed very small and unexpected kinks in the curve (see Fig. 14.4) [21]. Such kinks are an excellent example of the type of thing
that would, in many types of experiment, have indicated that the apparatus was improperly configured, or faulty, or perhaps had been disturbed by some kind of noise. Most of the time, unexplained behavior of this type does indeed turn out to have such origins. The kinks in the He3 pressure versus time curve could easily have been dismissed as inexplicable problems with the equipment. For example, it was initially thought that they might have been caused by plastic deformation of solid He3 by the parts of the apparatus that applied pressure to the sample. (Although the apparatus had been cleverly designed in order to reduce such effects.) However, the pressure gauge was of a recently developed type [22] that permitted the pressure to be measured with great precision. The resolution achievable with this new design was far higher than what had previously been possible. It was found that the pressures at which the kinks occurred were reproducible to within one part in 50 000, despite very different starting conditions. This seemed highly unlikely to be the result of plastic deformation, and the experimenters began to regard the kinks as genuine effects. They also considered whether the kinks might have been artifacts of the pressure transducer. However, the application of a magnetic field to the measurement setup ruled this out as a possible source of the phenomena: the pressure transducer was not expected to be affected by magnetic fields, whereas the pressure at which the kinks occurred did depend on the field. The detailed effects of the strength of the magnetic field on the kinks provided further evidence that they were caused by phase transitions in the helium. Subsequent investigations ultimately showed that this notion was correct. The kinks resulted from phase transitions in the liquid helium. These corresponded to the appearance (at A and B in Fig. 14.4) and disappearance (at B′ and A′ in Fig. 14.4) of previously unknown types of superfluid.
(A superfluid is a non-classical liquid that has no viscosity, and has other unusual properties.) In fact, the A and B transitions correspond to the formation of two distinct types of superfluid, which are now known as the "A phase" and the "B phase." Later on, a third superfluid phase was found. The discovery of these new states of matter resulted in the award of the Nobel Prize to its discoverers in 1996.
Further reading

Much useful information about pitfalls in experimental work, and other important aspects of scientific research, can be found in Ref. [1]. Every scientist should read this book at the beginning of their career, and even experienced investigators would do well to reread it from time to time. A general discussion of errors in measurement is provided in Ref. [23]. In particular, it contains an unusually good guide to recognizing and eliminating systematic errors. Useful introductions to error analysis, probability distributions, and the modeling of data (i.e. fitting a function to a set of data points) are presented in Refs. [1] and [24]. A more advanced discussion of data modeling can be found in Ref. [5]. Some pitfalls in data modeling are discussed briefly on page 41.
Summary of some important points

14.2 Knowing apparatus and software

(a) Human errors that might occur during the use of apparatus or software are easier to avoid if the experimenter understands how they work, at least at a conceptual level.
(b) Having a conceptual understanding of apparatus and software also makes it possible to recognize incidental non-ideal behavior of such, which may not be discussed in manuals or other documents.
(c) It is generally desirable to have a quantitative understanding (at least in an order-of-magnitude sense) of what happens to experimental information (signals and data) and noise, as it passes along the instrumental chain from the sensor to the data-acquisition software.
(d) Having this understanding is especially important when the experimental system is being used at the limits of its capabilities.
14.3 Calibration and validation of apparatus

(a) Regular calibration of instruments is an important, but often neglected, aspect of experimental work.
(b) The need to calibrate can be reduced by devising experiments so that only comparative, rather than absolute, measurements are needed.
(c) In general, calibration can also serve to validate an experimental system – i.e. to ensure that unwanted behavior (such as nonlinearities or software errors) is absent, or at least understood and accounted for.
(d) Calibration and validation should be carried out if (A) the apparatus is new or has recently been repaired or modified, (B) it has been subjected to some extreme conditions that could affect its behavior, or (C) it has been used by others.
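Point (b) above can be illustrated with a toy model: if an instrument has an unknown (or drifting) gain, an absolute reading is biased by that gain, whereas a comparative reading against a known reference cancels it. A minimal sketch, with made-up numbers for the gain, sample, and reference:

```python
import numpy as np

rng = np.random.default_rng(1)
true_gain = 0.93       # unknown, possibly drifting instrument gain (illustrative)
x_sample = 4.20        # quantity we actually want (illustrative units)
x_reference = 2.00     # reference standard, assumed accurately known

def reading(x, frac_noise=0.002):
    """Instrument reading: unknown gain times true value, plus small noise."""
    return true_gain * x * (1.0 + frac_noise * rng.standard_normal())

# Absolute measurement: systematically wrong by the unknown gain (~7% low here).
absolute = reading(x_sample)

# Comparative measurement: the unknown gain cancels in the ratio.
comparative = reading(x_sample) / reading(x_reference) * x_reference

print(absolute, comparative)   # roughly 3.9 vs 4.2
```

The comparative result is limited only by noise and by knowledge of the reference, not by the instrument's absolute calibration.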
14.4 Control experiments

(a) One common source of errors in scientific research is that variables other than the ones under study may influence the outcome of an experiment.
(b) Control experiments can be used to clarify matters when an experimental result can have alternative explanations, and are also helpful in revealing variable systematic errors.
14.5 Failure of auxiliary hypotheses as a cause of failure of experiments

(a) Experiments more often give negative results, not because of failure of the main hypothesis, but because one or more auxiliary hypotheses fail.
(b) It is very desirable to explicitly list the auxiliary hypotheses that are involved in a proposed experiment, and how the accuracy of these could affect its outcome, before proceeding.
14.6 Subconscious biases as a source of error

(a) Prejudice is a very common cause of incorrect experimental work.
(b) It is always necessary to make a conscious effort to ensure that one's personal biases cannot affect the outcome of an experiment.
(c) One way of reducing the effects of prejudice is to explicitly include it as an alternative hypothesis when analyzing experimental results.
(d) Data analysis is particularly vulnerable to subjective influences – procedures for analyzing data should be established before the data is collected.
(e) Prejudices resulting from social interactions within a scientific community can also lead to erroneous conclusions (thereby reducing the effectiveness of supposedly "independent" verifications of results).
14.7 Chance occurrences as a source of error

Incorrect deductions are often made because insufficient consideration has been given to the possibility that an experimental event has occurred by chance.
14.8 Problems involving material samples

(a) In branches of science that involve the study of material samples, erroneous conclusions are often drawn because the samples are not what the experimenters think they are.
(b) For example, samples may be contaminated, have a different chemical composition or structure from what was intended, or have unintended inhomogeneities.
(c) It is essential for experimentalists to find out about sample preparation issues, the things that can happen to samples after they have been made, and what precautions to take.
(d) It may be worth compiling a list of known sample pathologies, their origins and consequences, and techniques for detecting their presence.
14.9 Reproducibility of experimental measurements and techniques

(a) Sometimes, very subtle and non-obvious differences in experimental procedures, instrument design, or the experimental environment may be responsible for an inability to replicate valid experiments, experimental techniques, or apparatus (which have been successfully done, practiced, or created in other laboratories).
(b) Adequate descriptions of experimental procedures or apparatus (even if they are nonstandard) are often omitted from scientific publications and other accessible information sources.
(c) Tacit knowledge, which is generally transferred through personal contact, is often an essential element in scientific research.
(d) The most effective way of transferring tacit knowledge, and explicit knowledge that has not been communicated, is to organize an exchange of scientists between laboratories.
14.10 Low signal-to-noise ratios and statistical signal processing

(a) Although the use of statistical signal-processing methods may appear to be an easy and inexpensive way of solving problems caused by poor signal-to-noise ratios, the use of such methods generally involves making assumptions, which can easily turn out to be incorrect.
(b) The use of such methods also opens the possibility for the introduction of subconscious biases into the data analysis.
(c) It is usually best to try increasing signal-to-noise ratios by improving the experiment (e.g. removing sources of interference, using quieter amplifiers, etc.) before turning to the use of statistical techniques.
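As a concrete illustration of point (a): signal averaging, perhaps the simplest statistical technique, assumes that the noise in successive records is independent, so that averaging N records reduces it by √N. Coherent interference (e.g. hum synchronized with the trigger) violates that assumption and is not reduced at all. A small sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 1000, 256                            # records to average, samples per record
t = np.arange(L)
signal = np.sin(2 * np.pi * 5 * t / L)      # repetitive signal, amplitude 1

# Case 1: independent ("white") noise of amplitude 5 in every record.
# The signal is buried, but averaging N records shrinks the noise by sqrt(N).
white = 5.0 * rng.standard_normal((N, L))
avg_white = (signal + white).mean(axis=0)

# Case 2: the same coherent interference appears in every record.
# Averaging removes nothing, no matter how many records are taken.
hum = 5.0 * np.sin(2 * np.pi * 3 * t / L)
records_hum = signal + hum + np.zeros((N, L))
avg_hum = records_hum.mean(axis=0)

resid_white = np.std(avg_white - signal)    # ~ 5/sqrt(1000) ~ 0.16
resid_hum = np.std(avg_hum - signal)        # = 5/sqrt(2) ~ 3.5, independent of N
print(resid_white, resid_hum)
```

The independence assumption is exactly the kind of thing that "can easily turn out to be incorrect" in a real experiment, which is why removing the interference at its source is usually the better first step.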
References

1. E. Bright Wilson, Jr., An Introduction to Scientific Research, Dover, 1990. (Except for some minor modifications, this is a republication of a work that was originally published by McGraw-Hill in 1952. Hence some of the material is dated (particularly with regard to certain details about experimental apparatus and computing), but most is fundamental and therefore still relevant.)
2. H. Collins, Gravity's Shadow: the Search for Gravitational Waves, University of Chicago Press, 2004.
3. M. Jeng, Am. J. Phys. 74, 578 (2006).
4. J. Reason, Human Error, Cambridge University Press, 1990.
5. W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes: The Art of Scientific Computing (Fortran version), Cambridge University Press, 1989.
6. F. Franks, Polywater, MIT Press, 1981.
7. E. R. Cohen and J. W. M. DuMond, Rev. Mod. Phys. 37, 537 (1965).
8. J. Ziman, Reliable Knowledge: an Exploration of the Grounds for Belief in Science, Cambridge University Press, 1978.
9. L. C. Allen, New Scientist, p. 376, 16 August 1973.
10. J. Giles, Nature 442, 344 (2006).
11. R. V. Jones, Instruments and Experiences: Papers on Measurement and Instrument Design, John Wiley & Sons, 1988.
12. For example, the American Institute of Physics operates an electronic depository for information (including video and audio files, data tables, lengthy printed matter, etc.) which is supplementary to that provided in its published articles. See Electronic Physics Auxiliary Publication Service (EPAPS). www.aip.org/pubservs/epaps.html
13. H. M. Collins, Changing Order: Replication and Induction in Scientific Practice, University of Chicago Press, 1992.
14. The example is from: H. M. Collins, Social Studies of Science 31, 71 (2001).
15. V. B. Braginsky, V. P. Mitrofanov, and V. I. Panov, Systems With Small Dissipation, University of Chicago Press, 1985.
16. J. L. Levine, Phys. Perspect. 6, 42 (2004).
17. J. Weber, Phys. Rev. Lett. 25, 180 (1970).
18. J. L. Levine and R. L. Garwin, Phys. Rev. Lett. 31, 173 (1973).
19. R. L. Garwin and J. L. Levine, Phys. Rev. Lett. 31, 176 (1973).
20. D. D. Osheroff, R. C. Richardson, and D. M. Lee, Phys. Rev. Lett. 28, 885 (1972).
21. D. D. Osheroff, Nobel Lecture, December 7, 1996. http://nobelprize.org/nobel_prizes/physics/laureates/1996/osheroff-lecture.pdf
22. G. C. Straty and E. D. Adams, Rev. Sci. Instrum. 40, 1393 (1969).
23. S. Rabinovich, Measurement Errors: Theory and Practice, AIP Press, 1995.
24. P. R. Bevington and D. K. Robinson, Data Reduction and Error Analysis for the Physical Sciences, 3rd edn, McGraw-Hill, 2002.
Index
1 K pots see cryogenic devices: 1 K pots a.c. line filters, 62, 85, 86, 87, 356, 374, 376, 495 a.c. power adapters, 88, 380, 396 accelerometer, 74, 75 acceptance trials, 124 acoustic enclosure see acoustic isolation chamber acoustic isolation chamber, 82 acoustical sealant, 82 activated alumina, 191 air conditioners, 64, 67, 83, 86 for equipment enclosures, 65 air conditioning, 64, 65, 68, 311, 333 and human error, 16–17 air scrubbing, 325 air turbulence, 310, 311, 324 air-duster, 328 amplifiers differential, 362, 363, 364, 365, 368 isolation, 366 capacitively-coupled, 366, 367 transformer-coupled, 366, 367 lock-in, modular, 130 power see power amplifiers pre- see preamplifiers amplitude spectral density, 74 amplitude spectrum, 74 analytical balances, 71 anti-fog coatings, 326 APDs see light detectors: avalanche photodiodes apparatus, 1, 2 calibration and validation of, 537–38, see also calibration and gravity-wave detection (historical example) appropriate time for, 538 design and construction of, 127–37 aesthetic appearance of, 136 computer programs for modeling, 134 designing for diagnosis and maintainability, 134–35 ergonomic design of, 135–36 fail-safe design of see fail-safe design of apparatus modularity, use of, 130–31 quantitative analysis of, during design, 133–34 time needed to develop, 128 purchase of commercial, 116–26
acceptance trials, 124 conservative technology, use of see conservative technology, use of contracts, reliability incentive, 122 documentation for see documentation and manuals experience lists, 119 place of origin, 119 price and quality, relationship between, 118–19 specifications for see specifications standards, importance of, 117 testing see testing tagging of defective, 25 transport of see transport unauthorized removal of, 23 understanding, importance of, 536–37, see also gravity-wave detection (historical example) and superfluidity in He3 , discovery of (historical example) arcing, 226, 371, 397, 398, 447, see also high-voltage: electric discharges: arcing audio rectification see RF interference: in low-frequency devices auto-calibration, 25, 173 automation, and reduction of human error, 13–14 avalanche photodiodes see light detectors:avalanche photodiodes backlash in mechanisms see mechanisms: backlash in backstreaming, of vacuum pump oil, 172 bakeout, 142, 171, 181, 227, 228, 232, 235, 240 selection of temperatures to avoid damage, 211–12 balanced circuits see cables: electromagnetic interference issues involving: balanced circuits barometric pressure, 61 batteries, 63, 87, 88, 89, 90, 203, 401–03 failure of, 88, 89 fire and explosion issues, 402, 403 nonrechargeable, 401 alkaline manganese, 402 lithium iron-disulphide, 402 lithium-manganese, 402 shelf life, 402 zinc-carbon, 402 rechargeable, 361 gel-type lead-acid, 361, 402
lifespan, 89, 402 nickel-cadmium, 402 sealed lead-acid see batteries: rechargeable: gel-type lead-acid unexpectedly-rapid discharge problems, 402 bearings, 69, 70, 71, 94, 220, 221, 224–27 antifriction see bearings: rolling element ball, 66, 69, 78, 80, 81, 199, 200, 225, 231, 315, 400 ceramic, 199 for vacuum environments, 225, 226, 233 magnetic levitation, 80, 199, 200, 231 pivot, 78 plain, 78, 224–25, 228, 233, 400 robustness of, 225 roller, 223, 225 rolling element, 66, 131, 219, 225–27, 231, 232, 233 brinelling of, 91, 223, 225 ceramic, 226 degradation caused by electric currents, 226 precision, 226 sensitivity to particulates, 226 side shields for, 226 sleeve see bearings: plain Belleville washers, 299 bellows, 69, 71, 78, 91, 104, 133, 141, 164, 166, 168, 169, 170, 175, 191, 201, 288, 291, 292 as leak sources in vacuum systems, 167–70, 287 cycle-test of, 169 edge-welded, 141, 148, 167, 168, 170, 179, 181, 247 fatigue failure of, 102, 103, 104, 265, 287 for mechanical feedthroughs, 247 joining of, 153 rolled, 167, 168 minimum bend radius of, 170 stainless steel, 144, 147 stress corrosion cracking and corrosion fatigue of, 101 bend reliefs, 301 biases, of information sources, 29 biases, subconscious calculations, as a potential source of error in, 38 confirmation bias, 541 experimental work, as a source of error in, 22, 540–43 biases caused by social interactions, 542–43, 546 biases in data analysis, 541–42, see also gravity-wave detection (historical example) general points (introduction), 540–41 types of experiment prone to bias problems, 541 binary searching, to locate faults, 27, 177 blackouts, 83, 84, 85, 87, 88, 89, 90, 197, 198, 200, 202, 203, 205, 206 bonding of materials benefits of surface coatings, 152
cleanliness of surfaces, 152 joints exposed to thermal stresses, 153 books, laboratory collection of, 19 brazing, for vacuum systems, 146, 147, 151, 152, 154, 157–58, 163, 177 advantages of eutectic braze alloys, 153 flux, 141, 157 flux-less joints on copper, 157 important considerations for, 157–58 joints with superleaks, 140 mechanical support, 158 repair of leaky joints, 180 torch-brazing, 157, 158 vacuum-brazing, 157 worker qualifications, 150 breakdown voltages, 59, 175, 384 breakdown voltages, at low air pressures, 384 brownouts, 83, 84, 87, 88 bus, computer interface, 86 IEEE-488 see computer interfaces: GP-IB C (programming language) see software, writing of: C (language) C# (programming language) see software, writing of: C# (language) C++ (programming language) see software, writing of: C++ (language) cables, 23, 60, 61, 62, 69, 118, 124, 287, 451–65, 467–69, see also connectors and wires bend radii, minimum allowable, 453, 465 coaxial, 377, 463, 465, see also cables: shielded RG-58, 364 static charges on, 394 cryogenic ribbon types, 468, 469, see also wires: cryogenic systems, wiring for damage and degradation of, 452–54 deterioration and ageing, 453–54 due to handling and abuse, 453 vulnerable cable types, 452 destruction of faulty, intentional, 465 diagnostic methods, 471–75 cable testers, use of, 473 fault detection and locating, 474–75 insulation testing, 474 intermittent fault troubleshooting, 472–73 resistance measurement method, 471–72 shield integrity, determination of, 475 electrical noise due to flexure, 71, 388, 389 electromagnetic interference issues involving, 381–82, 456–62, see also cables: shielded and cables: modes of failure balanced circuits, 375, 381, 461, 462 cable raceways for EMI and crosstalk reduction, use of, 464 crosstalk see crosstalk digital signals, 381
cables (cont.) routing cables to avoid interference, 464 twisted wire pairs, use of, 445, 461–62 fatigue failure of, 102, 104, 453, 456, 464 fiber-optic, 337 GP-IB, 462–63 ground loops involving, 367, see also ground loops high-impedance circuits, in microphonics, 388, 389 triboelectric effects, 388 high-voltage, 202, 203, 205, 384, 446, 452, 456, 463, 464, 474, 475 inspection and replacement of, 465 installation of, 463–65 labeling of, 19, 455, 465 modes of failure, 413, 451–52 balanced circuits, some causes of unbalance in, 452 coaxial cable characteristic-impedance shifts, 454 corona in high-voltage cable, 452 ground faults, 451 manifestations of short- or open-circuits in coaxial cable at RF, 451 problems resulting from cable deformation, 452 shield end-connection problems, 452 shielded cables, slow deterioration of shielding ability, 454 multi-conductor, 462 patch cords, 414, 454, 455, 459, 465 propagation of vibrations along, 79 ribbon-, 463 selection of, 454–56 commercial vs. homemade types, 455 for flexure and vibration conditions, 455–56 provenance, 454 small cable conductors, robustness of, 456 stress derating of conductors, 456 shielded, 367, see also cables: coaxial, cables: electromagnetic interference issues involving, and cables: modes of failure choice of shield coverage, 457–59 circumstances requiring use of, 374 double-braid shields, 360, 458 foil-shields, 367, 459 grounding of shields, 359, 360, 373, 376, 456–57, 459–60 pigtail problems, RF, 360, 458, 459–60 rapid fixes for inadequately-shielded, 460 transfer impedance of, 457, 458, 459, 460 wrapped-wire shields, 459 storage of, 465 unreliability of, 413 CAD see computer aided design calculations, 1, 3, 13, 29, 36–57 biases as a potential source of error during, 38
computer algebra systems, 38, 45, 46, 511 branch cuts, simplification of expressions containing, 40 checking calculations by, 48, 52 effective use of, 46 integration, definite, 39, 40, 46 limits, evaluation of, 40 mistakes, most common cause of, 39, 46 problems arising from use of, 39–40 conceptual difficulties, avoiding, 42 diagrams, helpfulness and creation of, 42–43 incorrect technique, errors due to, 36–38 asymptotic series, mistakes in using, 37 discontinuities in functions, problems caused by, 37 equations, spurious solutions of, 37 integral tables, errors in, 38–39 notation, choice of, 43 numerical, 3, 40–42, 514 brute-force methods, 41 commercial software for, 134, 510 differentiation, 41, 514 integration, 41 most frequent error sources, 40 random number generators, choice of, 42 outsourcing of, 44–45 testing of check calculations, 49–51 computer algebra systems, calculations by see calculations: computer algebra systems: checking calculations by conjugate variables, use of, 51 dimensional analysis, 48 graphical methods, 51 internal consistency, some checks involving, 48 numerical testing-methods, 51 order-of-magnitude estimates, 50 special cases, looking at, 47 symmetry properties, 48 time required for testing calculations, 47 unit conversion, errors resulting from, 37 calculators, handheld, 42 calibration, 10, 25, 91, 301, 302, 303, see also apparatus: calibration and validation of automatic see auto-calibration calibration information, incorrect transcription of, 20, 303 CAMAC (modular electronics) system, 131 capacitors ceramic, 370, 376 electrolytic, 25, 61, 63, 376, 401 burst noise in, 401 drift in, 401 explosion of, 401 lifespan reduction by elevated temperatures, 401
replacement of, 396 stress derating of, 401 feedthrough, 375 high-frequency behavior, 375 mylar, 375 polypropylene, 401 tantalum, 370, 401 capillary tubes, 292 blocking of, 260, 261, 262, 291 cutting, 261 cavitation, 105 CDs see computer (digital) information, long-term storage of: CDs and DVDs centaurs, allegory of, 29–30 centile spectra, 74 R CF seals see seals, static demountable:CF (ConFlat ) checklists, 14, 24 chemical laboratories, corrosive atmospheres in, 96 circuit breakers, 129 tripping of, 63 class (computer program data type), 513 clean hood, 324 cleaning megasonic, 105 of connectors see connectors: inspection and cleaning of of insulators in very high impedance circuits, 389 of optical components, 327–33 brushing and wiping of optics, 329 cleaning agents to avoid, 330–31 CO2 snow method, 332 compressed gas techniques, 328–29 general procedures, 328–30 immersion techniques, 330 items prone to damage, 327 plastic film peeling technique, 332 reactive-gas methods, 332–33 ultrasonic cleaning, 331 vapor degreasing, 330, 331 of surfaces prior to bonding, 152 of vacuum systems, 148 glow-discharge method, 148, 212 reactive-gas technique, 148, 212 ultrasonic, 104, 105, 144, 331 CM see common-mode voltages, definition of cold traps see cryogenic devices: cold traps comments (computer programming) see software, writing of: detailed program design and construction: documentation of code common mode failures, 3, 28 common-impedance coupling, 355 common-mode choke, 364, 376, 377 common-mode voltages, definition of, 362 common-mode, electrical interference, 87 communal equipment, problems with, 22–23 communication, 20
compatibility, 117 complexity, dealing with, 3 compressed air, 65, 97, 198, 258, 328 supplies, oil and water in, 96–97, 259 compressors, 64, 69, 70, 71, 72, 74, 80, 86, 96, 101, 102, 104 air, failure of, 258 air, oil-less, 96 cylinder-type, 70, 72, 74, 80 for cryopumps, 201 rotary vane, 80 computer (digital) information, long-term storage of, 500–01 CDs and DVDs archive-grade, 501 lifespan of, 501 proper storage and use of, 501 file formats, preferred, 501 hard drives, suitability for, 500 magnetic tape lifespan and proper storage, 304, 501 media conversion services, 501 obsolescence of storage media and file formats, 501 computer aided design, 132 computer algebra systems see calculations: computer algebra systems computer interfaces, 496–99 balanced vs. unbalanced serial types, 497 cable length issues, 498 GP-IB, 131, 440, 462–63, 497–98 RS-232, 356, 362 driver issues, 497 links, data errors on, 496–97 standardization problems, 496 RS-485, 497 USB, 498 USB-to-GP-IB converters, 498 computer security issues, 502–04 firewalls, 504 general points, 502 operating system patches, importance of, 502 Windows-based PCs vs. other systems, 502 network security, 504 virus attacks, preventing, 503–04 antivirus software, 503 antivirus software, continuous monitoring functions of, 503 virus infection, symptoms of, 503 viruses and their effects, 502–03 hardware damage, possibility of, 502 hoax virus reports, 503 routes of virus propagation, 502 software damage, 502 computer systems, failure rate vs. time example, 116 computers, 3 a.c. mains power quality issues, 495, see also electricity supply problems
computers (cont.) backing-up information on, 499–500 frequency of backups, 499 online backup services, 500 some backup techniques and strategies, 499–500 some reasons for data loss, 499 compatibility of hardware and software, 496 crashes and other problems, common causes of, 489–90 background programs, 489 drivers, 489 hardware-related sources of trouble, 490 extended-period data collection, precautions for use in, 509–10 hard drives for see hard drives, computer power supplies for, 89, 494–95 catastrophic failure of, 494 impending failure, some signs of, 494 quality issues, 494 redundant, 2 programmable logic controllers, 13, 130, 210, 491 programming of see software, writing of redundant, 2 ruggedized PCs, 490–91, 492 safety-related applications of, 507, see also fail-safe design of apparatus selection of computers and operating systems, 487–89 consumer-grade PCs, unreliability of, 487 Linux, 118, 488, 489 Mac computers, 488 Mac OS, 488, 489 PCs, 118, 488 UNIX, 488, 489 virtualization software, 489 Windows, 488, 489 software for see software technical support and information for, 490 condensation, 61, 65, 66, 67, 68, 92, 96, 100, 161, 176, 205, 263, 264, 270–71, 288, 320, 328, 335, see also moisture condensation detectors, 270 cone seal see seals, static demountable: conical taper joints R ConFlat seals see seals, static demountable:CF R (ConFlat ) connectors, 19, 60, 62, 69, 91, 117, 118, 135, 372, 431–51, 464, see also cables and electrical contacts bend reliefs for, 438, 453 cable clamps for, 416, 437, 438, 460 coaxial, 434, 437, 441, 450 BNC, 434, 435, 437, 439, 441, 442, 450, 455, 459 SMA, 434, 435, 441, 445, 455 subminiature, 434
threaded vs. bayonet types, 441, 458 TNC, 441, 459 twist-on (in order to attach connectors to cables), unreliability of, 441 type-N, 434, 441, 442, 445, 459 contact wear and corrosion, reduction of, 448–49 connector lubricants, use of, 443, 449 connector savers, use of, 448 crimping vs. soldering of, 422–23, 440 crosstalk between pins see crosstalk: between pins in connectors cryogenic environments, for, 469 diagnostic methods, 471–75 intermittent fault troubleshooting, 472–73 non-ohmic contacts, testing for, 472 D-subminiature see connectors: D-type D-type, 434, 440, 441, 442 D-type vs. circular multi-pin, 440 failure modes of, 413, 431–33 failure, causes of, 433–37 corrosion, 100, 435–37, 443 fretting of contacts, 435, 443 human error, 433–34 particulates and debris, 437 wear-and-tear, 434–35 FischerTM , 437, 441, 444 GP-IB, 440, 462, 497 inspection and cleaning of, 446, 447, 450–51 intermittent problems, commonness of, 431 keying systems for, 440 R LEMO , 437, 441, 444 lifespan of, 435 locking mechanism for, 439 metal vs. plastic housings, 440 particularly troublesome types, 445–48 CAMAC, 448 consumer-grade, 437 high-current, 446–47, 450 high-voltage, 205, 445–46, 450 mains power, 447–48 microwave, 445, 448 physical and visual access to, 448 RF filtered, 442 RF, reflections from, 422 scoop-proof types, 440 securing connector parts, 450 selection of, 437–45 contact materials, 442–43 contact materials, problems caused by dissimilar, 443 contacts, reliability of gold-plated, 442 contacts, silver-plated, 436, 442 contacts, tin-plated, 443 general considerations, 437–42 insulator materials, 444
provision of a ground pin in multi-pin types, 444–45 stress derating, 443–44, 446 shielded, considerations for, 441 stress relaxation, of contacts, 435, 443, 447 temperatures, effects of elevated, 435 triaxial, 434 unreliability of, 413, 431 XLR, 445, 455 conservatism, 1 conservative technology, use of, 4, 116 contact enhancer sprays see electrical contacts: enhancer sprays for contracts, reliability incentive, 122 cooling fans see fans cooling methods for equipment, air-, 65 cooling water, 61, 69, 72, 73, 79, 102, 129, 168, 170, 197, 198, 199, 201, 205, 212, 270, 294, see also water-cooling and water-cooling systems lines, 166, 261 copper gaskets, 166 corona see high-voltage: electric discharges: corona corrosion, 66, 67, 91, 96, 141, 145, 146, 153, 159, 160, 161, 163, 169, 205, 265, 266, 268, 269, 270, 288, 301, 334–35, 435–37, 451 chemical laboratory atmospheres, in, 96 chlorinated solvents, effect of, 101 corrosion fatigue, 101 crevice, 100, 270 electrolytic, 100, 268, 270, 436 galvanic, 66, 99–100, 141, 144, 268, 270, 427, 435, 443 galvanic series, 99, 100 sacrificial anode, 100 mold-induced, 335 rusting of steel, 67 stress corrosion, 101, 144, 158, 167, 169, 170 counterfeit parts, 120, 403 creep deformation of indium in seals, 244 dimensional instability of materials, 315 migration across surfaces, 97, 98, 229, 231 crimping see electrical contacts, permanent or semi-permanent: crimping crosstalk between conductors within a multi-conductor cable, 462, 468 between electrical cables, 382, 387, 454, 464 between pins in connectors, 440, 449–50 fiber optic cables, 337 cryocoolers, 73, 293, 294 Gifford-McMahon, 294 pulse-tube, 294 cryogenic, 149, 159, 160, 161, 162, 164, 173, 201, 253 devices and systems, testing of, 286
environments, use of mechanical devices in see cryogenic devices: mechanisms R grease, Apiezon-N , 297 liquid and gas lines, blockages of, 291–93 liquids see liquid cryogens, liquid helium, and liquid nitrogen refrigerators see cryocoolers refrigerators, dilution see dilution refrigerators systems, 61, 123, 144, 166, 254, 285–305 cryogen-free types, 293–94 wiring for see wires: cryogenic systems, wiring for thermal conductance, 296, 297, 298 thermometry see cryogenic devices: thermometers cryogenic apparatus, 149 abnormal operation of, 171 fatigue of materials in, 103, 104 heat leaks in, 294–95 by RF energy, 295 poor vacuum, 294 Taconis oscillations see Taconis oscillations touches, 171, 286, 294, 304 touches, detector of, 295 vibrations, 295 materials selection for, 142 moisture-related problems in, 288–89 overpressures in, 290–91 thermal contact problems in, 296–300 choice of mechanical contact type, 300 direct metal-to-metal mechanical contacts, 298–300 force between mechanical contacts, 296 testing direct metal-to-metal mechanical contacts, 299 use of grease in mechanical contacts, 297 use of indium in mechanical contacts, 297 welded and brazed contacts, 296 windows in see windows: in cryogenic apparatus cryogenic devices 1 K pots, 72, 291, 292, 300 cold traps, 150, 173, 292 mechanisms, 218, 219, 249, 288, 293 mechanisms, lubrication and wear of, 228 superconducting magnets, 7, 288, 293, 303–05 ferromagnetic objects near, 304 fringing fields, harmful effects of, 304, 380 moisture hazard, 304 quenches, causes of, 304 thermometers, 171, 177, 301–03 bismuth-ruthenate types, 302 calibration shifts, causes of, 302–03 carbon resistor types, 302 carbon-glass types, 303 CernoxTM types, 303 common causes of damage to, 301 diode types, 302, 303
Index
570
cryogenic devices (cont.) germanium types, 303 marking of, 303 measurement errors due to RF heating and interference, 301–02 measurement errors, most common cause of, 301 platinum resistance types, 303 redundant, 303 resistance types, 301, 302 rhodium-iron types, 303 ruthenium-oxide types, 302 soldering of lead wires, 301 with built-in RF filters, 302 threaded fasteners, suitable strong materials for, 299 valves, 255, 257, 288, 300 cryogenic temperature controllers, 302 cryostats, 67, 73, 141, 163, 166, 249, 290, 293, 294, 318 dip-stick, 287, 291, 292 current leakage, 66 curve fitting see data: fitting curves to cyclic stresses, 70 data analysis, subconscious biases in, 541–42 fitting curves to, 41, 542 fitting curves to data correctness of routines for fitting, 41 testing of routines for fitting, 51 data recovery (from failed hard drives) see hard drives, computer: recovery of data from failed data sheets, 121 databases, of journal titles and abstracts, 6 debugging (of equipment and software in general) see troubleshooting debugging computer programs see software, writing of: debugging programs dehumidifiers, 67 derating see stress derating desiccants, 95 details for assessing information reliability, 28 paying attention to, 7–8 recording of, 24 development projects, 128 dew point, 65, 68, 270, 271 dewar, 139, 176, 219, 291, 292 diagrams, use of in mathematics, 42 diamond-anvil cell, 336 dielectric absorption, 389 differential amplifiers see amplifiers: differential differential-mode voltages, definition of, 362 differential-mode, electrical interference, 87
diffraction gratings see optical: components: diffraction gratings diffusion pumps see pumps, vacuum:diffusion digital information, long-term storage of see computer (digital) information, long-term storage of dilution refrigerators, 172, 176, 177, 286, 292 dissertations, 6, 18, 19 division of labor, value of, 21–22 DM see differential-mode voltages, definition of documentation, 124, see also manuals creation and preservation of, 18–19 deficient, 11, 127, 536 of experimental setups, 23 quality of, for commercial products, 121 reducing the labor of producing, 24 requirements for major commercial equipment items, 122 software, of see software, writing of: detailed program design and construction: documentation of code DVDs see computer (digital) information, long-term storage of: CDs and DVDs electric discharges, 98, 165, 170, 202, 207, see also arcing, corona, and tracking electric fields, interference from, 387, 389 electric heaters, 68, 83 electric motors, 83, 86, 98, 212, 249 derating of, 59, 60 for use in ultrahigh vacuum, 219 for vacuum and/or cryogenic environments, 249, 250 stepper types, 222, 250 electrical contacts, 67, 69, see also connectors enhancer sprays for, 397, 400 failure of, 98 in superconducting magnets, 304 lubricants for, 98 electrical contacts, permanent or semi-permanent, 414–31, see also soldering, for electronics and connectors brazing, 423–24 of small copper wires using an oxy-hydrogen torch, 424 resistance brazing, 423 crimping, 299, 422–23 advantages of, 422 disadvantages of, 423 reliability of crimp connections, 422 requirements for reliable, 423 testing connections, 471 upper temperature limit of crimp joints, 422 diagnostic methods, 471–75 high-current contact fault detection, 473 intermittent fault troubleshooting, 472–73
non-ohmic contacts, testing for, 472 resistance measurement method, 471–72 difficult materials, methods of making contacts to, 425–29 aluminum, soldering of, 425, 426, 427 crimping, 428 electroplating and sputter deposition for improved soldering, 428 friction soldering, 426 inert gas soldering, 428 niobium, 426, 428 resistance welding, 428 silver epoxy, use of, 428–29 solder selection, 426, 427 stainless steel, 425, 426, 428 ultrasonic soldering, 425–26 ultrasonic soldering, of delicate items, 105, 426 ground contacts see ground: contacts high-current connections, use of mechanical fasteners in, 424–25 connection hardware requirements, 424 fastener requirements, 425 inspection of, 425 overheating problems, 424 potential problems, list of, 413 thermoelectric EMFs, effects of contacts on see thermoelectric EMFs in low-level d.c. circuits, minimization of welding, 423–24 resistance welding, 423 TIG method, 424 electricity supply problems, 23, 83–90 blackouts see blackouts brownouts see brownouts definitions and causes of, 83–85 drops, 84, 495 power disturbances, investigation of, 85 preventive measures, 86–90 line voltage conditioners, use of, 87 RF electrical noise reduction, 86–87, see also a.c. line filters standby generators, use of, 90 surge suppression, 86, see also surge suppressor and overvoltage protection uninterruptible power supplies, use of see uninterruptible power supplies sags, 83, 88 surges, 84, see also overvoltages swells, 84, 87, 88 electrolytic capacitors see capacitors: electrolytic electromagnetic interference, 27, 65, 73, 80, 353–82, 398, see also cables: electromagnetic interference issues involving crosstalk, in the form of see crosstalk
electric fields, from see electric fields, interference from electrostatic discharges, from, 392 ground loops, from see ground loops low-frequency magnetic fields, from see magnetic fields, interference from low-frequency problems, professional assistance with, 382 radio frequency see RF interference electron beams, 98 electron microscopes, 64, 71, 72, 81, 304 electronic devices in a vacuum, cooling of, 212 electronic modules, 130 electroplating, 152, 163, 233 electrostatic discharge, 68, 94, 341, 342, 390–94 devices vulnerable to damage by, 391–92 electromagnetic interference resulting from, 392 events that may cause component damage, 392 intermittent faults resulting from, 392 latent failure phenomenon, 392 origins, character, and effects of, 390–92, 473 possible voltage levels, 390 preventing problems, 393–94 conductive wrist straps, 393, 394 field meter, 394 soldering precautions, 418 topical antistats, 394 EMI see electromagnetic interference emotional stress impact on human error, 11 enclosure air conditioners, 65 heaters, 68 equipment see apparatus erosion, 168, 265 ESD see electrostatic discharge etalon fringes see optical: etalon fringe problems experience list, 119 experimental measurements and techniques, reproducibility of, 547–51 general points (introduction), 547–48 knowledge explicit, 548, 551 tacit, 548, 551 laboratory visits as a way of acquiring missing expertise, 548 measuring the Q of sapphire (historical example), 549–51 reaching ultralow temperatures (historical example), 547 experiments chance occurrences as a source of error in, 543 control-, 538–40 failure of, due to failure of auxiliary hypotheses, 540 extractor gauges see vacuum gauges: extractor
Fabry–Perot interferometer see optical: interferometers: Fabry–Perot fail-safe design of apparatus, 129, see also computers: safety-related applications of fans, 25, 64, 65, 69, 71, 72, 73, 197, 258, 313, 340, 400–01 Faraday isolators see optical: devices: Faraday isolators Faraday shields see shields: Faraday fasteners see threaded fasteners fatigue, human, 12, 15 fatigue, of materials, 3, 69, 70, 71, 91, 101–04, 141, 154, 158, 160, 166, 167, 168, 177, 191, 195, 207, 254, 265, 296, 301, 415, 416, 432, 451, 453, 456, 464 characteristics and causes, 102–03 due to thermal stresses, 153, 416 endurance limit, 102 examples of, 101–02 general points (introduction), 101 high-cycle, 103, 104, 169, 261 life, 103 low-cycle, 103, 104, 161, 169 preventive measures, 104 feedthroughs, 141 feedthroughs, electrical, 141, 164, 165, 166, 170, see also leaks cryogenic, 286 high current, 206 high-voltage, 202, 203, 205 feedthroughs, mechanical, 141, 164, 166, 246–50, see also leaks electric motors in the sealed environment, use of, 249–50 in cryogenic apparatus, 286 magnetic drive, 168, 248–49, 293 magnetic fluid seals, 247 metal bellows types, 168, 247, 248, 252, 257, 300 sliding seals, 98, 141, 149, 168, 246–47, 300 lubrication of, 247 reliability of, 246 use of guard vacuums in, 247 fiber optics, 337–38 analog links, 367, 382 connectors, vulnerability of, 337 crosstalk and electromagnetic interference immunity, 337 digital communication links, 338, 362, 382 etalon fringes in, 317, 338 harsh environments, resistance to, 337 noise and drift in multimode fibers, 338 optical measurements, use of for, 338 polarization instabilities in, 338 power transmission, for, 362 robustness of, 337 single-mode connectors, cleaning of, 328
single-mode connectors, contamination of see optical: components, contamination of: single-mode fiber optic connectors filaments, electric, 69, 104, 205, 206, 207 derating of, 59 fatigue failure of, 102, 343 filters (gas and liquid), 65, 96, 97, 201, 261, 262, 268 for liquid helium, 292 liquid and gas systems, warning devices for, 262 self-cleaning, 262 filters, a.c. line see a.c. line filters filters, RF see RF interference: prevention of: filters, use of fingerprints, 139, 149, 319, 320, 328, 331 firewalls see computer security issues: firewalls flexible metal hose see bellows flexures, 219–22 advantages and disadvantages, 219–21 backlash in, 220 for oscillatory motion, 226 hysteresis in, 221 methods for operating, 221–22 electrostriction, 222 micrometer drive, 221 piezoelectric, 221 voice coil, 221 vibration susceptibility, 221 flow charts (computer programming) see software, writing of: detailed program design and construction: flow charts, use of flow switches see switches: flow fluorescent lamps and human error, 15 as causes of electric field interference, 389 as causes of magnetic field interference, 380 as sources of RF interference, 16, 377 as sources of transient overvoltages, 86 forms, use of (for laboratory documentation tasks), 24 fracture, 102 fretting, 69, 225, 226 frustration, impact on human error of, 11 fuses, 129 fatigue of, 129 resettable, 129 galvanic corrosion see corrosion: galvanic gas handling systems see liquid and gas handling systems gears, in vacuum environments, 227, 232, 233 generators, standby, 90 gettering, 202 gloves, 324 Glyptal, 161 Google, 6 GP-IB see computer interfaces: GP-IB gravity-wave detection (historical example), 552–58
ground contacts, 60, 62, 353, 358, 429–30 accidental or uncertain types, 430 inspection of, 430 intermittent problems, 429 preferred joining methods, 429 troublesome metal finishes, removal of, 429 faults, 451 map, 358, 368 plane, 382, 389 safety-, 354, 355, 356, 359, 360, 447 semi-floating input, 364 systems, planning of, 357–58 ground loops, 23, 62, 302, 338, 353–69, 377, 456, 497 avoiding, 357–67 causes of, some potential, 356–57, 454 detection of, 368–69 ground-loop tester, use of, 368 multiple-branch problems, 369 oscilloscopes, use of, 368 Rogowski coil method, 369 transformer-based tracing method, 369 digital paths, isolation of, 362 optical fibers, 362 opto-isolators, 362 floating power supplies, use of, 360–62 intermittent noise from, 60, 356, 429 noise voltages from ground loops, character of, 355 power leads, devices for opening ground loops involving inductors, 361 resistors, 361 severe, due to miswired a.c. power plugs and receptacles, 357 shielded twisted-pair cables, involving, 359, 360 signal paths, devices for opening ground loops in analog fiber-optic links, 367 common-mode chokes, 364 differential amplifiers, 362–63 isolation amplifiers, 365–67 transformers, 365 single-point grounding, use of, 359–60 systems affected by, 356 unavoidable, methods of reducing effects of, 367, 459 grounding, see also ground and ground loops arrangements, importance of, 353–54 in electronic systems, use of connector shield contacts for, 444 of cable shields see cables: shielded: grounding of shields RF, 376 guard vacuum, 164, 165, 166, 247 guarding see high impedance circuits: guarding technique, for leakage current reduction
hard drives, computer, 63, 392, 491–94 backing-up information on see computers: backing-up information on failure, risks and causes of, 491–92 failure, symptoms of impending, 493 long-term information storage, suitability for see computer (digital) information, long-term storage of: hard drives, suitability for RAID systems see hard drives, computer: redundant disc systems, use of recovery of data from failed, 493 data recovery services, 493 data recovery software, 493 redundant disc systems, use of, 2, 492–93 solid-state disks as an alternative to, 494 hard soldering see brazing heat pipe, 212 heat sinks, 64 Helicoflex® seals see seals, static demountable: Helicoflex® metal O-ring high impedance circuits, 71, 387–90 definition, 387 difficulties arising in, 387–88 guarding technique, for leakage current reduction, 390 solutions to difficulties, 388–90 very high impedance, 60, 62, 66 avoidance and precautionary measures, 389, 419 definition, 388 insulator material selection, 389 leakage current problems, 388 moisture-repellent coatings for, 390 high-frequency circuits, 60, 66, 419 high-voltage, 59, 179, 202, 265, 266, 270, 326, 362, 366, 371, 382–87, 419 cables see cables: high-voltage circuits, 60, 62, 66 soldering of, 421 connectors see connectors: particularly troublesome types: high-voltage electric discharges arcing, 66, 175, 205, 265, 270, 371, 382, 383, 384 conditions likely to produce, 383–84 corona, 66, 179, 325, 371, 383, 384, 386–87, 452, 464 corona, high-frequency, 383, 446 damage resulting from, 383 malfunctions caused by, 383 phenomena and their effects, 382–83 tracking, 179, 383 electric discharges, detection of, 386–87 AM radio method, 386 ozone odor indication, 383, 386 ultrasound method, 387 UV-camera, use of, 387
high-voltage (cont.) electric discharges, prevention of, 384–86 corona rings, use of, 386 corona shields, use of, 386 high-voltage putty, use of, 386 insulator coatings, use of, 385 Kapton® CR, use of, 386 reducing potential gradients, 385–86 equipment cleaning of, 385 maintenance of, 25 quality considerations, 384 water leaks due to arcing in water-cooled, 265 feedthroughs, electrical see feedthroughs, electrical: high-voltage insulators see insulators: high-voltage power supplies see power supplies: high-voltage safety, 387 transformers see transformers: high-voltage hose clamps see water-cooling systems: hose termination: hose clamps human error, 1, 2, 5, 8–20, 39, 42, 46, 210, 224, 251, 271, 433–34 access restrictions, impact of, 20 automation, 13–14 biases see biases, subconscious and biases, of information sources checklists, use of, 14, 24 designing systems and tasks to minimize human error, 17–18, 521 dominant causes of, 10–11 dominant types of, 9–10 emotional stress, impact of, 11 fatigue, impact of, 12 frequency of problems caused by, 8 frustration, impact of, 11 labeling, 19–20 latent errors, 9 mental processes, vulnerable, 17 mental rehearsals of procedures, 12 mistake proofing, 18 omission errors, 9–10, 14, 42 overconfidence, and planning, 13 physical environment, effect of, 15–17 procedures and documentation, 18–19 strong habit intrusion, 14–15 transcription errors, 20, 36, 46, 303 ways of reducing human error, 11–20 humidifiers, 68 humidity see relative humidity HVAC systems, 71, 72, 74, 75, 82 hygroscopic dust failures, 66 IEEE-488 bus see computer interfaces: GP-IB image intensifiers, 304 improvisation, 8, 22, 132 indium
foil, for cryogenic thermal contacts, 297 seals see seals, static demountable: indium solders, 426, 427 inductors high-frequency behavior, 375 toroidal, 380 inertia block, 78 information clarity of presentation, 28 direct vs. indirect, 29 reliability of, 28–30 sources of, 6 information (digital), long-term storage of see computer (digital) information, long-term storage of insulation, 69, 91 insulators, 66, 206 connectors, materials for, 444 high-voltage, 384, 385 very high impedance circuits, materials for, 389 integrated circuits, power, 212 interlibrary loan, 6 interlock, 129, 130, 192, 271 intermittent arcing, 60, 371 burst noise in electronic components, 61 cables, 17, 451 corona, 60 electrical cables and contacts, troubleshooting of, 472–73, 475 electrical conductors, 60, 62 electrical contacts, 10, 60, 62, 397, 431, 447, 448 electromagnetic interference, 60, 370 electrostatic discharge induced faults, 392 etalon fringe problems in optical systems, 61, 316 failures (in general), 5, 26, 177 causes and characteristics, 60–62 preventing and solving, 62–63 troubleshooting of, 26–27 faults see intermittent: failures (in general) ground loop electrical noise, 356, 429 leaks, 61, 62, 138, 171 mode hopping in diode lasers, 61, 339, 341 noise in measurements, 60 open circuits, 91, 286, 419 power line disturbances, 60, 61 problems in equipment, recording of, 25 short circuits, 60, 286, 474 software faults, 60, 525 solder contacts, 414 vibrational disturbances, 61, 71, 75 Internet, 6, 61 inverter, 88, 89 ion-air gun, 329 ionization gauges see vacuum gauges: ionization ionizer, 324, 329 ISO9001, limitations of, 120
ISO9241–6, 15 isolation amplifiers see amplifiers: isolation iteration (computer programming construct), 519 Jubilee clips see water-cooling systems: hose termination: hose clamps keywords (for database searches), 6 knowledge explicit, 548, 551 tacit, 548, 551 labels, 19, 93, 135 LabView, 507, 508, 509, 518, 523, see also software: data-acquisition, commercial LabWindows/CVI, 509, see also software: data-acquisition, commercial laminar flow station, 324 lamps see light sources: incoherent types Large Hadron Collider, 7 laser-induced breakdown, 321 lasers see light sources: lasers latent conditions, 24 leak detection, see also leak testing, vacuum and leaks gas, 263 ultrasonic method, 179, 263 vacuum, 170–79 at large distances, 178–79 coaxial helium leak detection probe, 173 cryogenic systems, leak testing of, 175–78, 294 dry leak-detectors, 173 dye penetrants, use of, 171 extractor gauges for low-temperature helium leak sensing, 178 high-voltage devices, helium leak testing of, 175, 384 in large vacuum systems, 178 large leak locating using mass-spectrometer leak detector, 175 large leak locating using soap-water method, 171 leak detector cost issue, 173 leak detectors, mass spectrometer, 172, 173, 175, 176 leak telescope, 178 mass spectrometer leak detector method, 172–74 permeation, of O-rings by helium, 174, 175 potential problems during, 174–75 sniffer technique, 172 solvent-based methods, 171 superfluid leaks, 177 ultrasonic method, 178 useful helium leak testing practices, 173–74 visual inspection, for locating leaks, 172 water and other liquids, UV fluorescent method, 263 leak testing vacuum, 20, 140, 149, see also leak detection: vacuum and leaks: vacuum
of 321 stainless steel tubing, 145 of commercially-welded vacuum components, 151 of raw materials, 143–44 leaks, 91, 100, 261, 290, see also leak detection and leak testing, vacuum helium gas, 201 in valves, 250, 251, 252, 253, 254, 255, 257, 259, 260 vacuum, 20, 62, 138–47, 150–81, 191, 205, 285, 294 benefit of, 172 cold, 27, 61, 140, 141, 144 common causes of, 141, 157, 159, 233, 236, 237, 242 cryogenic, 175, 176, 286 dependence on vacuum system condition, 140 guard vacuums, use of, 164–66, 247 in dynamic seals see feedthroughs, mechanical: sliding seals in mechanical pumps, 193 in static demountable seals see seals:static demountable leak paths in stainless steel, 144 leak-prone components, 165–70 molecular, 138 of water into vacuum, 166, 197, 265 permeation leakage, 139 real, 139, 180 repair of, 179–81 size vs. time, 138 spontaneous plugging of, 138 superfluid, 61, 139, 140, 151, 176 undesirable blocking of, 148, 149, 171, 172 virtual, 139, 170, 176, 180 water see water-cooling systems: leaks least squares procedure (curve fitting), 41 lenses see optical: components: lenses light detectors, 67, 344 avalanche photodiodes, 344 photodiodes, 344 photomultipliers, 304, 344 light sources, 338–43 incoherent types, 343 arc lamps, 321, 338, 343 fluorescent lamps see fluorescent lamps incandescent lamps, 343 light-emitting diodes, 343 tungsten-halogen lamps, 321, 343 lasers, 61, 269, 311, 317, 318, 324, 338, 341–43, 345 argon-ion, 339, 342 CO2 , 372, 379 diode, 61, 131, 318, 322, 339, 341–42 frequency shifts of, 63 gas lasers, 322, 338, 342, 343 helium–cadmium, 342 helium–neon, 322, 339, 340, 342
light sources (cont.) high-power, 320, 321, 326, 327, 336, 337, 342, 345 Nd-YAG, diode laser pumped, 343 pulsed, 372 solid-state, 343 water-cooled, 339 noise and drift in, 338–41, 342, 343 active compensation methods for reducing, 340–41 due to dust inside laser cavities, 322 frequency noise in lasers, 71 He-Ne lasers, drifting polarization in, 339 laser beam intensity stabilization, 340 laser beam pointing stabilization, 340 laser noise cancellation, 340 microphonics in lasers, 339 mode-hopping in lasers, 61, 339, 341, 342 temperature-induced drift, 339 light, ultraviolet, 322, 323 photopolymerization by, 323 lightning, 83, 84, 371 line voltage conditioners, 62, 87, 88 Linux (computer operating system) see computers: selection of computers and operating systems: Linux liquid and gas handling systems, 246, 260–63 configuration of pipe networks, 260 construction issues, 261–62 soldering, 261 welding and brazing, 261 filter issues, 262 leak detecting and locating, 263 materials selection, 260–61 pipe thread fittings, 262 PTFE tape, problems caused by, 262 liquid cryogens, 67, 129, 285, 292, 293 liquid helium, 73, 139, 166, 176, 178, 219, 250, 285, 290, 291, 292, 293, 295 filters for see filters: for liquid helium liquefiers for producing, 293 storage dewars, 287, 289, 290, 291, 292 transfer problems, 289–90 transfer tubes, 168, 288, 290 blockage of, 289 damage to, 287–88, 289 poor vacuum in, 289 touches, 289 liquid nitrogen, 72, 73, 148, 150, 173, 176, 177, 192, 195, 196, 198, 205, 206, 243, 290 liquid oxygen, 293 literature searches, 5 lockups, 84, 392, 495 logbooks, 18, 24, 26, 171 low temperature see cryogenic lubricants
dry see lubrication and wear under extreme conditions: dry lubricants for connectors see connectors: contact wear and corrosion, reduction of: connector lubricants, use of solid see lubrication and wear under extreme conditions: dry lubricants lubrication and wear under extreme conditions, 227–33 dry lubricants, 226, 231–33 gold, 231, 232 MoS2 , 231, 232 PTFE, 231, 232 silver, 231, 232 uses for, 231 WS2 , 232 hydrodynamic and boundary lubrication, 229 liquid lubricants, 229–31 multiply-alkylated cyclopentanes, 229, 320 perfluoropolyether, 229, 231 silicone, 229 materials selection for sliding contact, 228–29 self-lubricating solids, 233 Mac OS (computer operating system) see computers: selection of computers and operating systems: Mac OS machinery, quality considerations during purchase of, 118 magnetic fields, interference from low-frequency, 379–81 affected items, 379, 403 detection of fields, 381 prevention of, 380–81, 461 sources of fields, 355, 380 magnetic shield see shields: magnetic magnetic tape see computer (digital) information, long-term storage of: magnetic tape lifespan and proper storage mains power problems see electricity supply problems maintenance, 10, 20, 25, 64, 134, 135, 201 record, 171 maintenance work, supervision of, 23 management of projects, 22 manuals, 18, 19, 25, 26, 121, 123, 124–25, 536, see also documentation auxiliary information about products, 125 errors in, 125 margins of safety, 2, 3, 58, 93 Mariner 1 space probe, 7 mass flow controller, 256 mass spectrometer, 172, 179 material samples, problems involving, 105, 543–47 occurrence and examples of, 543–44 polywater, historical example of, 542, 544–46 some measures for preventing, 546–47
measurement frequency, recommended (for low-level a.c. measurements), 367 measurements absolute, 25 errors, 20, 63 interference with, 83 precision optical, 324 recording of, 14 measuring devices see sensors mechanical devices see mechanisms mechanical drawings, 19, 133 mechanisms, 218–60 backlash in, 219, 222, 249 bearings see bearings direct versus indirect drive, 222–23 erratic motion of, 219, 222 flexural see flexures hydrodynamic and boundary lubrication of, 229 in cryogenic environments see cryogenic devices: mechanisms in vacuum environments see vacuum: mechanical devices for use in jamming of, 219 lubrication and wear of, under extreme conditions see lubrication and wear under extreme conditions materials selection for sliding contact, 228–29 modular, 130, 131 motor-driven, 129, see also electric motors precision positioning devices in optical systems, 223–24, 315 precision positioning devices, accuracy of, 225 prevention of damage due to exceeding mechanical limits, 224 limit switches, 224 shear pins, 129, 224 torque limiters, 129, 224, 251 quality considerations, 118 transferring complexity to electronics, 223 translation stages, 223, 225 Michelson interferometer see optical: interferometers: Michelson microphonic, 71, 339, 344, 388, 389, 403, 413, 451, 452, 459, 469 microscopes, optical, 71, 326, 327, 329 microwave connectors, 445, 448 devices, 392 equipment, 394, 448 instruments, 392, 394 resonators, 71 miracles, grounds for believing in, 30 mirrors see optical: components: mirrors misprints in instruction manuals, 125 in published mathematical work, 44 mistake proofing, 18
modularity, 1, 3–4, 44 in cryogenic apparatus construction, 177 in vacuum system construction, 141, 154 use of in apparatus design, 130–31 modules, 27, 179 commercial, 130, 131 moisture, 66, 100, 285, 335, see also condensation harmful effects of, 66 in cryogenic apparatus, difficulties caused by, 288–89 IR and UV optical materials, damage to, 333–34 locations and conditions, 67 problems, avoiding, 67–68 mold, 66, 328, 335 molecular sieve, 148, 150, 195 contamination of vacuum valves by, 251 MOSFETs, 391, 393, 399 motion feedthroughs see feedthroughs, mechanical MOVs see overvoltage protection:metal oxide varistors mumetal, 380 NIM (modular electronics) system, 130 nitrogen gas, for vacuum pump oil back-migration prevention, 192 noise, acoustic, 64, 65, 82, 400 effect on human error, 16 noise, electrical, 87, 98 1/f, 66, 413, 431 burst, 61, 401, 431 due to poor contacts, 397 electromagnetic interference, in the form of see electromagnetic interference intermodulation, 432 microphonic see microphonic of electrochemical origin, 66 on a.c. power line, 83, 89 popcorn see noise, electrical: burst RF, on a.c. power line, 85 reduction of, 86–87 noise, optical due to dust near spatial filters, 344 due to etalon fringe effects, 316 due to light from high-voltage electric discharges, 383 from light sources see light sources: noise and drift in in interferometric devices, 312 noise, vibrational, see also vibrations cancellation of effects due to, 82 coherent, 74 random, 74 notation, mathematical, 43 notebooks, 19, 24, 26 notes, backing up of, 24 numerical calculations see calculations: numerical
OEMs, 130 omission errors, 9–10, 14, 42 omitted checks, 14 op amps, 61 optical air path effects, 311 apparatus, structural materials for aluminum, 314, 315, 321 brass, 315, 321 fused silica, 314 Invar, 314, 315 plastics, 314 stainless steel, 314 steel, 321 Zerodur®, 314 assembly, shift in focus of, 312 athermalization techniques, 313 coatings, 334 components, 66, 81, 105 beamsplitters, 316, 318 cemented optics, 330 diffraction gratings, 91, 322, 325, 326, 327, 328, 330, 331, 336 filters, interference, 313, 335 in laser cavities, 327 infrared, 327, 328 lenses, 318, 335, 336 mirrors, 318, 327 pellicles, 316, 325, 327 ultraviolet, 327 windows, 316, 318, 331, 334, 335, 336, see also windows components, cleaning of see cleaning: of optical components components, contamination of, 98, 231, 318–33, 383 common contaminants, 319 diffraction gratings, 322, 326 high-power light systems, 320, 321, 325, 326 inspection, 326 intra-cavity laser optics, 322 protective measures, 323–26 single-mode fiber optic connectors, 323 transport cushioning-materials, by, 94 ultraviolet optics, 323 vulnerable devices and systems, 318 components, distortions of, 312 detectors see light detectors devices, 91 Faraday isolators, 317, 338, 341 Pockels cells, 372 spatial filters, 344 element and structure materials, stability of, 314–15 elements and support structures, temperature changes in, 312–13
etalon fringe problems, 62, 315–18, 339 fibers see fiber optics interferometers common-path, 82, 311 Fabry-Perot, 315 Michelson, 311, 549 interferometric devices, a.c. phase noise in, 312 interferometry, 61, 71, 82, 313 materials, degradation of, 333–37 corrosion, by, 334–35 IR and UV materials, of, 333–34 mold, by, 335 thermal shocks, by, 334 UV light, by, 334 materials, exceptionally robust types diamond, 336 fused silica, 336 rhodium mirror coatings, 336 sapphire, 335–36 silicon carbide, 336 modules, 130, 131 mounts, 315 adjustable, 223 rigidity and stability of, 81 paths, temperature variations in, 310–12 scattering, 318 sources see light sources structures, bending of, 313 systems, 61, 62, 71 alignment of, 345 coatings for mechanical hardware in, 321 enclosures for, 311, 313, 324 precision positioning devices in, 223–24, 315 tables, 79, 81, 314, 324 optics, 67, 310–45 laser, high power, 67, 320, 321, 325, 326, 327, 336, 345 O-rings see seals, static demountable:O-ring outgassing, 139 outlier points (“outliers”), 41, 541, 542 outsourcing, 44, 128 overconfidence, and planning, 13 overheating as a cause of intermittent equipment problems, 60 during UHV bakeout process, 211 of equipment, measures for preventing, 64–65 overkill, use of, 28 overvoltage protection, 2, 394–95, see also electricity supply problems: preventive measures: surge suppression causes of overvoltages, 394 crowbars, 395 low leakage diodes, 395 metal oxide varistors, 86, 395
overvoltages, 83, 84, 492, see also electricity supply problems: surges transient, 84, 85, 86, 87, 495 ozone, 237, 239, 266, 326, 331, 332, 333, 383, 386 patch cords see cables: patch cords PCs see computers Penning gauges see vacuum gauges: Penning percentile spectra see centile spectra perfluoropolyether, 98, 320 persuading and motivating people, 21 PFPE see perfluoropolyether photodiodes see light detectors: photodiodes photolysis see photopolymerization photomultiplier tubes see light detectors: photomultipliers photopolymerization, 96, 323 pigtails, RF see cables: shielded: pigtail problems, RF pipes, 104, 260, 261 Pirani gauges see vacuum gauges: Pirani and vacuum gauges: Pirani and thermocouple planning, 12, 132–33 PLC see programmable logic controller PMT see light detectors: photomultipliers Pockels cells see optical: devices: Pockels cells pointers (computer programming), problems caused by, 522, 525 polyorganosiloxanes see silicone polywater see material samples, problems involving: polywater, historical example of porosity, 140, 141, 142, 143, 145, 146, 147, 149, 155, 156, 157, 159, 167, 294 potentiometers, 60, 400 power amplifiers, 58, 121, 128, 396, see also power electronics audio, for driving electromagnets, 59 modular, 130 power electronics, 395–97 cooling of, 396 over-temperature shutdown, 396 overvoltage protection, 396 soft-start protection, 396 power line EMI filters see a.c. line filters power problems, ac see electricity supply problems power supplies, 2, 58, 62, 63, 64, 87, 128, 131, 380, 396, see also power electronics a.c. power adapter see a.c. power adapters computer see computers: power supplies for d.c., 85, 396 excessive voltage ripple, 396 derating of, 59 for diode lasers, 341 for ion pumps, 203, 205, 385 high-voltage, 130, 205, 385 hybrid linear/switching, for superconducting magnets, 302, 377
interlock circuits in, 396 intermittent, 60 linear, 83, 87, 302, 377, 396 modular, 130 quality considerations, 118 switching, 83, 85, 87, 302, 371, 377, 396, 536 uninterruptible see uninterruptible power supplies voltage regulation, 63 power-line filters see a.c. line filters power-line monitors, 85 preamplifiers, 87, 361 preamplifiers, differential, 363 prejudices, collective scientific see biases, subconscious: experimental work, as a source of error in: biases caused by social interactions pressure gauge, 70, 102 pressure regulators see valves: pressure regulators pressure relief devices, 2, 129, 252–54, 288, 290 pressure relief valves see valves: pressure relief printed circuit boards, 66 problem area, coordinator for a, 22 procedures and checklists, 14 conflicts with strong habits, 15 deficient, 11 failure to follow, 21 mandatory, 21 mental rehearsals of, 12 programmable logic controllers see computers: programmable logic controllers programming, computer see software, writing of pseudocode (computer programming) see software, writing of: detailed program design and construction: pseudocode, use of PTFE tape, 246, 262, 270 PTFE, porosity of, 294 pumping, differential, 164 pumps, 69, 71, 72, 79, 80, 86 pumps, vacuum, 73, 80, 190–207 cryo-, 80, 175, 201–02, 203 leaky pressure relief valves in, 252 maintenance of, 201 regeneration of, 201, 202 diffusion, 64, 150, 191, 192, 193, 195–98, 199, 210 automatic protection devices for, 197–98 backstreaming of oil from, 196 causes of harm to, 197 contamination of the vacuum system by, 196–97 critical backing pressure, 196 fluids (oils) for, 97, 98, 148, 197, 207 Herrick effect, 196 in UHV systems, 198 ion, 80, 148, 150, 175, 202–05, 251 advantages of, 202–03 contamination, 204 deterioration and failure modes, 204–05
Index
pumps, vacuum (cont.) for pumping noble gases, 204 hi-potting of, 204 limitations of, 203–04 memory effect in, 203 power supplies for, 205 power supply for, battery-operated, 203 pumping speeds, 203 liquid nitrogen traps for, 192, 196 mechanical, 191 mechanical primary, 190, 191, 192 diaphragm, 191, 194, 195 dry positive-displacement pumps, 194–95 foreline traps for, 191, 192 leaks in, 193 long service-life oils for, 193 low vapor pressure oils for, 192 oil-mist filters for oil-sealed pumps, 193 oil-sealed, 150, 191–94 preventing vacuum system contamination by pump oil, 191–93 pump problems due to oil degradation, 193 reliability of oil-sealed types, 194 Roots+claws, 194 rotary-piston, 74, 191 rotary-vane, 81, 191, 195, 288 rotary-vane, for circulating helium in cryogenic equipment, 293 scroll, 81, 191, 194, 195 non-evaporable getter, 157, 206–07, 251 activation of, 206 poisoning of, 207 oil-free, 148 sorption, 148, 150, 195 valve contamination caused by, 251 titanium sublimation, 148, 205–06, 207, 251 contamination, 205 turbomolecular, 64, 80, 81, 198–201, 203 consequences of improper venting, 200–01 magnetic-bearing types, 199–200 possible contamination from ball-bearing types, 200 turbo-drag pumps, 199 vulnerability to damage, 199 purging, 261, 288, 289, 291, 292, 300, 325 PVC, 261, 266 quality of components, 135 rack, electronics, 463 temperature within, 65 radiation environments, high, 228, 231 radiators, for room temperature regulation, 64 radiofrequency see RF RAID see hard drives, computer: redundant disc systems, use of
record keeping, 24–25, 27 rediscoveries, 5 redundancy, 2–3, 4, 165, 265, 290, 303, 469, 492 regression analysis see data: fitting curves to data relative humidity, 27, 62, 66, 67, 68, 205, 270, 333, 384, 388, 389, 401 definition, 65 relay, see also switches contacts, 59, 60, 98, 398 reed-, 399 solid-state-, 399, 400 repair of commercial equipment, 25 reproducibility of experimental measurements and techniques see experimental measurements and techniques, reproducibility of residual gas analyzer, 171, 179 resonance, 70, 73, 75, 80, 81, 92, 93, 104, 105, 170, 200 frequencies, 76, 221 RF heating of cryogenic devices, 295, 301, 302, see also RF interference RF induction heating equipment, 372 RF interference, 302, 365, 370–79, 457, 462 a.c. power line, from, 85, 376 cable deficiencies, due to, 457, see also cables: electromagnetic interference issues involving detecting and locating RF sources, 378–79 AM radio method, 378 H-field probe, 378 spectrum analyzer method, 379 ground loops as a cause of, 377 in low-frequency devices, 372 intermittent, 60 manifestations of, 370 prevention of, 372–78, 379 filters, use of, 62, 302, 374–76, 442, see also a.c. line filters power supply selection, 302, 377 RF grounding methods, 376–77 shielded rooms, use of, 377–78 shields, all-metal cryostats as, 302 shields, effect of slots in, 373 shields, use of, 373–74 sources of, 370–72 arc welders, 85, 371, 372 broadcast transmitters, 370, 371, 378, 379 computer equipment, 371, 491 laboratory equipment, 372 switching power supplies, 85, 371 thermostats, 27, 371 RGA see residual gas analyzer RH see relative humidity roundoff errors, 3 RS-232 see computer interfaces: RS-232 rules-of-thumb, 133 rupture discs see valves: rupture discs
salt sea-spray, 67, 92, 334, 384 samples see material samples, problems involving schematic diagrams, 19, 133 scientific instrumentation journals, 6 scientific papers, improvement of, 13 screws, tamper-resistant, 23 seals, see also leaks ceramic-to-metal, 165, 166 cryogenic, 166, 177 glass-to-metal, 165, 166 insulator-to-metal, 140, 164 seals, dynamic see feedthroughs, mechanical seals, static demountable, 91, 140, 141, 164, 233–46, see also leaks CF (ConFlat®), 235, 239–40, 241, 242 care of, 240 disadvantages of, 239 reliability of, 239 conical taper joints, 149, 245 copper gasket, 239, 240 cryogenic, 242–45 damage to, 233–34 flanges leaks caused by distortion during welding, 236 repair of scratched, 234 tightening of threaded fasteners on, 235–36 Helicoflex® metal O-ring, 241 advantages, 241 reliability of, 241 indium, 141, 161, 177, 242–45 creep of indium, 244 flange separation method, 243 high-reliability flange designs, 243 indium purity considerations, 243 preferred flange and screw materials, 242 relative reliability of, 242 leaks due to contaminants on, 234 metal, 152, 175, 235 metal-gasket face-sealed fittings, 240 O-ring, 141, 149, 152, 171, 174, 175, 234, 236–39, 241, 242, 246 compression set, 237 damage and deterioration of, 237, 253 diffusion of helium through, 139 hardening when cold, 290 inspection of, 234, 239 installation and removal of, 238–39 Kalrez®, 237, 238 materials properties and selection, 237–38 nitrile, 237, 238 nitrile, use near high-voltage equipment, 237 preferred material for fittings, 238 PTFE, 238 Viton, 237, 238 pipe thread fittings, 246, 270
protection of, 235 weld lip connections, 245 selection (computer programming construct), 519 sensors, 91, 537 calibration of, 25 redundant, 2 transcription errors in calibration data, 20 sequence (computer programming construct), 519 shear pins see mechanisms: prevention of damage due to exceeding mechanical limits: shear pins shields, 62 Faraday, 361, 365, 388, 392 magnetic, 365, 380, 381, 403 RF see RF interference: prevention of: shields, use of shipping see transport shock absorbing casters, 96 shocks, 91, 92, 93 short circuits, 61, 270, 288, 451 shutter, 325 signal processing, statistical, 551–52 Wiener filter (example), 551 silicone, 97 contamination, 97–98, 150, 152, 197, 207, 229, 245, 320 diffusion pump fluids (oils), 97, 148, 197, 207 grease for high-voltage applications, 386 high vacuum grease, 97, 98, 149, 171, 176, 229, 239, 243, 245, 247 oil, 98, 239 resin, 181 rubbers, 97, 98, 239 corrosive properties of, 301 for high-voltage applications, 385, 386 tapes, for high-voltage use, 386 simplicity (for enhancing reliability), 1–2 SIS junctions, 391 sleep deprivation, 12 small incremental improvements, 4 sneaks, 4 snubber network, 399 software, 1, 4, 13, 117, 224, see also computers commercial and open-source, 504–06 early releases and beta software, avoiding, 504 open-source software, 45, 118 open-source vs. closed-source software, 506 pirated software, 506 questions for software suppliers, 505–06 software failure-rate vs. product age example, 504 data-acquisition, commercial, 132, 507–09 choosing a data-acquisition application, 509 examples and properties of, 507 graphical programming languages, 507–08 graphical programming, some concerns with, 508–09
software (cont.) for experimental measurements, importance of understanding, 536–37 old laboratory software, using, 526–27 repair of corrupted, 27 software, writing of, 510–26, see also software: data-acquisition, commercial C (language), 509, 513 C# (language), 513 C++ (language), 513 changes to software, consequences of, 526 debugging programs, 523, 524–26, see also software, writing of: testing programs and troubleshooting bug-ridden code, rewriting of, 526 changing erroneous code, precautions in, 526 difficulty of debugging, 510 general approach to debugging, 524 intermittent faults, causes of, 525 stabilizing and locating faults, 525 static code analyzers, use of, 525 symbolic debuggers, use of, 525 syntax and semantics inspection utilities see software, writing of: debugging programs: static code analyzers, use of tools for debugging, 525–26 detailed program design and construction, 514–23 documentation of code, 508, 520–21 errors in programming, some common, 522–23 excessively clever code constructs, 517 flow charts, use of, 514 global variables, problems with, 520 input errors, testing for, 521 inspection of completed code, manual, 523 LabView, programming style for, 518 naming of variables and routines, 519–20 pair programming, 516–17 pseudocode, use of, 514–16, 521 structured programming, 518–19 style, general programming, 517–18 style, graphical programming, 518 general programming considerations (introduction), 510–11 pre-existing programs, availability of, 510 program failures, main cause of, 510 object-oriented programming, 513 procedural programming, 512, 513 requirement establishment and initial design of code, 511–14 preliminary steps, need for, 511 requirements, determination of, 511 re-use of existing code and algorithms, 514 routines, desirable properties of, 513–14 subdivision of programs, 512 testing programs, 523, 524, see also software, writing of: debugging programs
choice of test data, 524 psychology, importance of, 524 symbolic debuggers, use of for testing, 524 solar heating, reduction of, 63 solder, 100 flux, 66 joints, 102 soldering, for electronics, 97, 414–22, see also electrical contacts, permanent or semi-permanent cleanliness, importance of, 415 dissolution of thin conductors during, 419–20 electrostatic discharge issues, 418 flux removal of, 419 selection of, 416, 418, 470–71 high-current circuits, use of solder in, 421–22 high-temperature environments, use of solder in, 421–22 high-voltage circuits, creation of solder contacts in, 421 low-level d.c. circuits, in, 431 of cryogenic thermometer leads, 301 of gold, 301, 419, 420 of silver, 419, 420 of small enameled wires, 470–71 omission errors during, 10, 414 selection of solder, 416–18 cryogenic use, solders for, 417 eutectic vs. non-eutectic types, 417 high melting point solders, 421 high-tin alloys, 417 indium types, 419 lead-free solders, 417 Sn-Pb alloys, 416, 418 solder joints, 7, 62 degradation of, 416 fatigue failure of, 69, 102, 103, 415, 416 fatigue problems caused by mixing different alloys, 418 intermittent, 60, 414 modes of failure, 414 non-ohmic, 372, 414 unreliability of, 413 weakness of, 415–16 training requirements, 415 soldering, for vacuum systems, 97, 144, 146, 147, 151, 152, 158–64, 165, 243 cryogenic apparatus, use of ultrasonic soldering on, 163 disadvantage in cryogenic systems, 160, 285 flux active, 146 corrosive, 151, 158, 159 inorganic acid and salt, 159 organic acid, 159, 163
problems due to, 141, 144, 159 removal of, 163 rosin, 159 joint design and soldering process, 162–63 sleeve joints, 162 problems caused by mixing solders, 162 selection of solder, 159–62 advantages of eutectic solder alloys, 153 indium alloys, 160, 161 low temperature applications, 160 non-eutectic solder alloys, problem caused by, 159 purity, 159 room temperature applications, 160 tin-lead alloys, 146, 161 tin-silver alloys, 160, 161 solder joints, 140, 141, 149, 153, 177, 191 as demountable seals, 152 fatigue failure of, 69, 158 repair of leaky, 180 slow deterioration of, 151, 158 soldering difficult materials, 163–64 inert gas method, 164 ultrasonic method, 151, 160, 163 worker qualifications, 150 solid air, 219, 287, 289, 291, 292, 293, 300 solid nitrogen, 290 solid oxygen, 293 soundproofing, 16, 82 spatial filters see optical: devices: spatial filters specifications for leak tightness of welded vacuum components, 151 for wiring in commercially-made cryostats, 470 preparing, for custom-made apparatus, 122 true meaning of, 120–21 spectrometers, 223, 320 spectroscopy, 322 monochromators used in, 222 optical detector arrays, use of, 223 spectrum analyzers, 74 sputtering, 428 sputtering systems, 372, 379 standard, for calibration of apparatus, 538 standards, importance of, 117 static eliminator, 324 strain-relief, 104, 169, 439 stress concentrations, sources of mechanical, 104 stress derating, 3, 58–60, 118, 121, 166, 206 stress relaxation, 219 stresses, cyclic or fluctuating tensile-, 101 superconducting magnets see cryogenic devices: superconducting magnets superfluidity in ³He, discovery of (historical example), 558–59 superleaks see leaks: vacuum: superfluid
surge suppressor, 62, 85, 86, 87, 90, see also overvoltage protection: metal oxide varistors surges see electricity supply problems: surges switches, 60, 62, 135, 397–400 alternatives to mechanical switches for improved reliability, 399–400 contacts, 98 gold alloy, 397 silver, 398 derating of, 59, 60, 399 flow-, 129 for low and high current and voltage levels, 397–98 Hall-effect proximity-, 399 inductive loads, use of with, 398–99 labeling of, 19 limit-, 129, 224, 399 magnetic reed, 399 thermal, 198 Taconis oscillations, 72, 295 technical support, 125 temperature air, reduction of gradients in, 313 room, problems caused by high, 63 room, reduction and regulation of, 63–64 test equipment, rental of, 475 testing, 4, 5, 10, 121, 123, 124, 134 of software, 524 thermal contraction, 176, 242, 261, 469 data, 299 thermal cutouts, 129 thermal distortion parameter see thermal distortion, relative thermal distortion, relative, 314 thermal expansion coefficients, 104, 219, 242, 285, 314 thermal lensing, 321 thermoacoustic oscillations see Taconis oscillations thermoelectric EMFs in low-level d.c. circuits, minimization of, 430–31 connectors, avoidance of, 431 preferred joining methods for copper, 431 Seebeck coefficients, 430, 431 solder contacts, avoidance of, 431 thermometers cryogenic see cryogenic devices: thermometers infrared, 473 thermostats, 371, 397, see also switches electronic-, 400 threaded fasteners, 10, 69, 91, 228, 232, 235, 299, 450 thread-locking compound, 450 time domain reflectometer, use of see cables: diagnostic methods: fault detection and locating
torque limiters see mechanisms: prevention of damage due to exceeding mechanical limits: torque limiters tracking see high-voltage: electric discharges: tracking transcription errors see human error: transcription errors transformers audio, 365 heavy power, vibrations due to, 72 high-voltage, 384 low-frequency signal, 403 EI core types, 403 magnetization of, 403 microphonics in, 403 toroidal types, 380, 403 parasitic, 355 power, 356, 380 power isolation, 361 power supply, 380, 399 power, toroidal, 380 saturation by stray fields, 305 transient voltage surge suppressor see surge suppressor transients see overvoltages: transient transistors, power, 212 translation stages see mechanisms: translation stages transport, 91–96 air pressures during, 92 companies for packaging and transporting delicate equipment, 95 conditions encountered during, 91–93 containers, 94 handling, 92, 94, 95 harm caused by, 91, 124, 225, 245 insurance, 93, 95–96 labels, 93 local, of delicate items, 96 packaging, 91, 92, 93–95, 124 “floating” design, 94 cushioning materials, 94 moisture barriers, 95 of bellows, 170 of vacuum system components, 150 relative humidities during, 92 sea- vs. air-freight, 92 shipping cases, 95 shock and vibration levels, allowable, 93 temperatures during, 92 water-cooled equipment, of, 91 triboelectric effects, 388 trimmers (potentiometers), 60, 400 troubleshooting, 2, 25–28, 118, 135, 285 overkill, use of in, 28 tubes, thin-walled, 141, 143, 145, 146, 155, 159, 162, 163
turbopumps see pumps, vacuum: turbomolecular TVSS see surge suppressor UHV see ultrahigh vacuum ultrahigh vacuum, 139, 140, 142, 143, 146, 147, 149, 150, 164, 171, 173, 181, 228, 232, 233, 240, 241, 247, 249, see also vacuum bakeout see bakeout environments, friction and wear problems in, 218, 226, 227 equipment, 123, 231 systems, 61, 139, 144, 165, 179, 195, 207, 239, 252 contamination in, some common causes of, 150, see also vacuum system contamination and outgassing seizure of threaded fasteners in, 228, 232 valves, 255 ultrasonic detector, 179, 387 ultrasound, damage caused by, 104–05, 426 uninterruptible power supplies, 62, 85, 87–90, 202, 495 double-conversion, 88, 89, 90 line-interactive, 88, 90 output waveform, 89 passive-standby, 88, 90 selection and use of, 89–90 UNIX (computer operating system) see computers: selection of computers and operating systems: UNIX UPSs see uninterruptible power supplies vacuum, see also ultrahigh vacuum apparatus, materials for use in aluminum, 147, 155, 163 brass, 140, 141, 142, 143, 146, 151, 155, 163 bronze, 141 cast metals, 142 ceramics, 143 copper, 140, 146, 155, 157, 163 copper-nickel, 144, 146, 155, 163 forged metals, 142 Hastelloy®, 144, 169 Inconel®, 169 iron, 142 phosphor bronze, 146 PTFE, 294 rolled metals, 142, 146 sintered metals, 143 stainless steel, 141, 143, 144–45, 152, 155, 158, 163, 169 chambers, 97, 128, 166 components, monolithic construction of, 154 electronic devices in a vacuum, cooling of, 212–13 fittings, 69, 238 flanges, 141, 142 gauges see vacuum gauges
greases, 97, 98, 149, 171, 239, 243, 245, 247 hoses metal bellows see bellows rubber and plastic, 174 joints, 93, 102, 103 fatigue failure of, 102 leaks in vacuum systems see leaks: vacuum lubrication in see lubrication and wear under extreme conditions mechanical devices for use in bearings see bearings: for vacuum environments gears see gears, in vacuum environments pumps see pumps, vacuum sealing compounds, for leak repair, 180 seals see seals and seals, static demountable and feedthroughs, mechanical systems, 133, 149, 201, 246, 254, 322 base pressure, 170, 171 cleaning procedures for, 148 theory, 133, 173 valves see valves wear in see lubrication and wear under extreme conditions vacuum gauges, 69, 166, 170, 178, 204, 207–10 Bayard-Alpert, 207, 208, 209–10 cleaning of, 210 contamination sensitivity of, 209 filament problems, 209 capacitance, 208 insensitivity to contamination and gas composition, 208 contamination of (in general), 207 extractor, 178 ionization, 98, 104, 141, 166 Penning, 170, 207, 209 contamination sensitivity of, 209 ionizing radiation sources for rapid-starting, 209 robustness, 209 Pirani, 170 Pirani and thermocouple, 208 cleaning of, 208 contamination sensitivity of, 208 gas composition sensitivity of, 208 vacuum system contamination and outgassing diagnostic methods, 178, 179 sources of, 147–50, 158 cleaning agents, 147–48 vacuum greases, 149 vacuum pump fluids and substances, 148–49 valve surfaces, 251 various contaminants, 149–50 valves, 69, 100, 118, 135, 141, 150, 204, 206, 250–60, 261, 270 all-metal, 141, 149, 198, 250, 251, 255, 257 frequent causes of damage, 255 vulnerabilities, 255 ball, 251, 252, 259–60
fluid flow control capability, 259 for water shutoff, 259 reliability of, 259 ballast, 293 check, 250, 252, 290 contamination of, 251 cryogenic see cryogenic devices: valves durable actuators for, 252 gate, 250 gate, poppet, and load lock vacuum valves, 257–58 human error in the operation of, 210, 251, 271 inlet isolation, for oil-sealed pumps, 192 labeling of, 19 leaks in see leaks: in valves metering, 250, 255–57 blockages of, 256, 262 capillary-type, 256 erratic operation of, 256 leak valves, 255 mass flow controller devices, 256 needle, 255, 256 needle, drift in flow rate caused by, 256 pulse-width-modulation flow control method, 256 motor-operated, 210 needle valves used for non-metering purposes, 250, 257, 288, 300 over-tightening of, 251, 300 pneumatic, 96, 210, 251, 252, 258 position indicators for, 251, 258 pressure regulators, 260, 262, 264, 265 pressure relief, 2, 3, 129, 141, 201, 250, 252–54 ageing of O-ring material, 253 Bunsen type, 290 clogging of, 253 on cryogenic equipment, 253, 288, 290 provision of redundant devices, 253 testing of, 253 rupture discs, 129, 253, 254 corrosion of, 254 drawbacks of, 254 fatigue failure of, 102, 254 on cryogenic equipment, 290, 291 reliability of, 254 solenoid, 210, 258 unreliable types, 250 with sliding-seal motion feedthroughs, 141, 250, 252 variables (computer programming) global see software, writing of: detailed program design and construction: global variables, problems with naming of see software, writing of: detailed program design and construction: naming of variables and routines uninitialized-, 522, 525
varistors, metal oxide see overvoltage protection: metal oxide varistors vibration consultant, 69 vibration damping, 70, 81, 93 of bearings, 225 of cables, 79, 456 pneumatic, 76 pulsation dampers, 79 sand, 78, 79 surge suppressors, 79 tiles, 70 viscoelastic material, 70 vibration isolation short-circuiting of, 78 vibration isolators, 74 active, 77 air bladder, 76 air mount, 77, 80 air spring, 74, 76, 77, 78 cross-bellows, 78 double-gimbal, 78 passive, 72, 76, 77 pneumatic, 76, 77 with automatic leveling, 76 rubber or composite pads as, 76 vibrations, 27, 64, 65, 68–83, 91, 92, 93, 101, 102, 104, 160, 166, 168, 170, 191, 200, 202, 221, 225, 261, 265, 295, 325, 339, 344, 400, 403 interference with measurements, 71–83 analysis of vibration measurements, 74 controlling vibrations at source, 80–81 difficulties resulting from vibrations, 71 floor vibrations, isolating apparatus from, 76–78 instrument support structures, vibrations in, 81 measuring vibration levels, 74 optical apparatus, some vibration-sensitivity considerations for, 81 pumping line, cable, or pipe vibrations, isolating apparatus from, 78–79 site selection for sensitive apparatus, 73–76 sound waves, vibrations caused by, 82 sources of vibrations, 71–73, 294 large-amplitude vibration issues, 69–71, 104 video adapters, 63 viewports see windows virtual instruments, 131–32 viruses, computer see computer security issues VME (modular electronics) system, 131 volatile methyl siloxanes, 98 vortex cooler, 65 wall warts see a.c. power adapters warranty, 124 water flow indicator, 271 sensor, 271 water hammer, 79, 259, 261
water-cooled equipment, damage during transport of, 91 water-cooling, 206, 311, 339, 343, see also cooling water water-cooling systems, 99, 100, 197, 218, 263–71, 342 backup cooling-water sources, 265 chillers, 71, 264, 265 common failure modes, 263–64 condensation problems, 264, 270–71 cooling towers, 264, 269 corrosion problems see water-cooling systems: water purity requirements and water-cooling systems: system materials selection and corrosion harmful organisms in, 264, 268, 269 hose selection and treatment, 266 EPDM rubber, 266 materials for use near high-voltage equipment, 266 nitrile rubber, 266 hose termination, 266–67 hose clamps, 267 hose clamps, constant torque, 267 JIC fittings, 267 inspection of, 271 leak detection, automatic, 267 leak locating, UV fluorescent method, 263 leaks, 166, 265–67 common causes of, 265 system materials selection and corrosion, 270 aluminum, 270 copper, 268, 270 iron, 268, 270 stainless steel, 270 tap water, use of, 264, 270 water flow and temperature interlocks and indicators, 271 water purity requirements, 268–69 biocorrosion, 268 blockages, common causes of, 268 corrosion, 268 de-ionized water, corrosive properties of, 269 dissolved minerals, 268, 269 dissolved oxygen, 268, 269 filters, 268 pH of water, 269 resistivity of water, 269 stagnant water, corrosive properties of, 269 ultraviolet light disinfection, 269 weld decay, in series 300 stainless steels, 145, 155 welding, for vacuum systems, 140, 144, 146, 147, 151, 154 arc welding, 154–55 orbital-, 155 TIG-, 154 electron-beam-, 156, 157
flange distortion as a cause of leaks, 236 importance of the skill of the welder, 155 of aluminum, 150, 155 of copper, 155 of copper-nickel, 155 of stainless steel, 145, 150, 155 weld joints, 140, 149, 154, 157, 177 hydrogen embrittlement of, 147, 157 repair of leaky, 180 welding of specific materials, 155–56 worker qualifications, 150 Wiedemann–Franz–Lorenz law, 300 wild data-points see outlier points (“outliers”) window blinds, 63 windows, see also optical: components: windows in cryogenic apparatus, 304, 318 ultrahigh vacuum, 166, 235, 236 vacuum, 141, 164, 165, 325 Windows (computer operating system) see computers: selection of computers and operating systems: Windows wires, 60, 466–71, see also cables bakeout tolerance of insulations, 211 cryogenic systems, wiring for, 163, 286, 468–70, see also wires: enameled single-strand types
cryogenic woven looms, 468 harmful wiring practices, 469 protective measures, redundant, 469 reliability problems, 468 workmanship, necessary standards of, 470 diagnostic methods, 471–75 insulation testing, 474 resistance measurement method, 471 enameled single-strand types, 466–71 care and handling of, 470 fine wire, 470, 474 magnet wire, 466 removal of enamel, 468 selection of enamel, 466–68 small copper wires, handling and reliability of, 466 soldering small wires see soldering, for electronics: of small enameled wires ultrafine wire, 470 failure modes of, 413 fatigue failure of, 102 ground faults involving, 451 high-voltage, 456 hookup wires, stranded vs. solid, 466 storage of, 465, 470