Fundamental Principles of Engineering Nanometrology
Professor Richard K. Leach
William Andrew is an imprint of Elsevier
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA

First edition 2010

Copyright © 2010, Richard K. Leach. Published by Elsevier Inc. All rights reserved.

The right of Richard K. Leach to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: [email protected]. Alternatively you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions, and selecting Obtaining permission to use Elsevier material.

Notice
No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress.

ISBN-13: 978-0-08-096454-6

For information on all Elsevier publications visit our web site at books.elsevier.com
Printed and bound in the United States of America
Contents

ACKNOWLEDGEMENTS .... xv
FIGURES .... xvii
TABLES .... xxv

CHAPTER 1 Introduction to metrology for micro- and nanotechnology .... 1
1.1 What is engineering nanometrology? .... 2
1.2 The contents of this book .... 3
1.3 References .... 4
CHAPTER 2 Some basics of measurement .... 5
2.1 Introduction to measurement .... 5
2.2 Units of measurement and the SI .... 6
2.3 Length .... 7
2.4 Mass .... 10
2.5 Force .... 12
2.6 Angle .... 13
2.7 Traceability .... 14
2.8 Accuracy, precision, resolution, error and uncertainty .... 15
2.8.1 Accuracy and precision .... 16
2.8.2 Resolution and error .... 16
2.8.3 Uncertainty in measurement .... 17
2.8.3.1 The propagation of probability distributions .... 18
2.8.3.2 The GUM uncertainty framework .... 19
2.8.3.3 A Monte Carlo method .... 21
2.9 The laser .... 23
2.9.1 Theory of the helium-neon laser .... 23
2.9.2 Single-mode laser wavelength stabilisation schemes .... 25
2.9.3 Laser frequency-stabilisation using saturated absorption .... 25
2.9.3.1 Two-mode stabilisation .... 27
2.9.4 Zeeman-stabilised 633 nm lasers .... 28
2.9.5 Frequency calibration of a (stabilised) 633 nm laser .... 30
2.9.6 Modern and future laser frequency standards .... 31
2.10 References .... 31
CHAPTER 3 Precision measurement instrumentation – some design principles .... 35
3.1 Geometrical considerations .... 36
3.2 Kinematic design .... 36
3.2.1 The Kelvin clamps .... 37
3.2.2 A single degree of freedom motion device .... 38
3.3 Dynamics .... 38
3.4 The Abbe Principle .... 40
3.5 Elastic compression .... 41
3.6 Force loops .... 43
3.6.1 The structural loop .... 43
3.6.2 The thermal loop .... 43
3.6.3 The metrology loop .... 44
3.7 Materials .... 44
3.7.1 Minimising thermal inputs .... 45
3.7.2 Minimising mechanical inputs .... 46
3.8 Symmetry .... 46
3.9 Vibration isolation .... 47
3.9.1 Sources of vibration .... 47
3.9.2 Passive vibration isolation .... 49
3.9.3 Damping .... 50
3.9.4 Internal resonances .... 50
3.9.5 Active vibration isolation .... 51
3.9.6 Acoustic noise .... 51
3.10 References .... 52
CHAPTER 4 Length traceability using interferometry .... 55
4.1 Traceability in length .... 55
4.2 Gauge blocks – both a practical and traceable artefact .... 56
4.3 Introduction to interferometry .... 58
4.3.1 Light as a wave .... 58
4.3.2 Beat measurement when ω1 ≠ ω2 .... 61
4.3.3 Visibility and contrast .... 61
4.3.4 White light interference and coherence length .... 62
4.4 Interferometer designs .... 64
4.4.1 The Michelson and Twyman-Green interferometer .... 64
4.4.1.1 The Twyman-Green modification .... 65
4.4.2 The Fizeau interferometer .... 66
4.4.3 The Jamin and Mach-Zehnder interferometers .... 68
4.4.4 The Fabry-Pérot interferometer .... 70
4.5 Gauge block interferometry .... 72
4.5.1 Gauge blocks and interferometry .... 72
4.5.2 Gauge block interferometry .... 72
4.5.3 Operation of a gauge block interferometer .... 74
4.5.3.1 Fringe fraction measurement – phase stepping .... 74
4.5.3.2 Multiple wavelength interferometry analysis .... 75
4.5.3.3 Vacuum wavelength .... 76
4.5.3.4 Thermal effects .... 76
4.5.3.5 Refractive index measurement .... 77
4.5.3.6 Aperture correction .... 78
4.5.3.7 Surface and phase change effects .... 79
4.5.4 Sources of error in gauge block interferometry .... 80
4.5.4.1 Fringe fraction determination uncertainty .... 80
4.5.4.2 Multi-wavelength interferometry uncertainty .... 80
4.5.4.3 Vacuum wavelength uncertainty .... 80
4.5.4.4 Temperature uncertainty .... 80
4.5.4.5 Refractive index uncertainty .... 81
4.5.4.6 Aperture correction uncertainty .... 81
4.5.4.7 Phase change uncertainty .... 81
4.5.4.8 Cosine error .... 82
4.6 References .... 82
CHAPTER 5 Displacement measurement .... 85
5.1 Introduction to displacement measurement .... 85
5.2 Displacement interferometry .... 86
5.2.1 Basics of displacement interferometry .... 86
5.2.2 Homodyne interferometry .... 86
5.2.3 Heterodyne interferometry .... 87
5.2.4 Fringe counting and sub-division .... 89
5.2.5 Double-pass interferometry .... 89
5.2.6 Differential interferometry .... 90
5.2.7 Swept-frequency absolute distance interferometry .... 91
5.2.8 Sources of error in displacement interferometry .... 92
5.2.8.1 Thermal expansion of the metrology frame .... 92
5.2.8.2 Deadpath length .... 93
5.2.8.3 Cosine error .... 93
5.2.8.4 Non-linearity .... 94
5.2.8.5 Heydemann correction .... 95
5.2.8.6 Random error sources .... 97
5.2.8.7 Other sources of error in displacement interferometers .... 97
5.2.9 Angular interferometers .... 98
5.3 Capacitive displacement sensors .... 99
5.4 Inductive displacement sensors .... 100
5.5 Optical encoders .... 102
5.6 Optical fibre sensors .... 104
5.7 Calibration of displacement sensors .... 106
5.7.1 Calibration using optical interferometry .... 107
5.7.1.1 Calibration using a Fabry-Pérot interferometer .... 107
5.7.1.2 Calibration using a measuring laser .... 107
5.7.2 Calibration using X-ray interferometry .... 108
5.8 References .... 111
CHAPTER 6 Surface topography measurement instrumentation .... 115
6.1 Introduction to surface topography measurement .... 115
6.2 Spatial wavelength ranges .... 116
6.3 Historical background of classical surface texture measuring instrumentation .... 117
6.4 Surface profile measurement .... 120
6.5 Areal surface texture measurement .... 121
6.6 Surface topography measuring instrumentation .... 122
6.6.1 Stylus instruments .... 123
6.7 Optical instruments .... 126
6.7.1 Limitations of optical instruments .... 127
6.7.2 Scanning optical techniques .... 132
6.7.2.1 Triangulation instruments .... 132
6.7.2.2 Confocal instruments .... 134
6.7.2.2.1 Confocal chromatic probe instrument .... 138
6.7.2.3 Point autofocus profiling .... 139
6.7.3 Areal optical techniques .... 142
6.7.3.1 Focus variation instruments .... 142
6.7.3.2 Phase-shifting interferometry .... 144
6.7.3.3 Digital holographic microscopy .... 147
6.7.3.4 Coherence scanning interferometry .... 149
6.7.4 Scattering instruments .... 152
6.8 Capacitive instruments .... 155
6.9 Pneumatic instruments .... 156
6.10 Calibration of surface topography measuring instruments .... 156
6.10.1 Traceability of surface topography measurements .... 156
6.10.2 Calibration of profile measuring instruments .... 157
6.10.3 Calibration of areal surface texture measuring instruments .... 159
6.11 Uncertainties in surface topography measurement .... 162
6.12 Comparisons of surface topography measuring instruments .... 165
6.13 Software measurement standards .... 167
6.14 References .... 168
CHAPTER 7 Scanning probe and particle beam microscopy .... 177
7.1 Scanning probe microscopy .... 178
7.2 Scanning tunnelling microscopy .... 180
7.3 Atomic force microscopy .... 181
7.3.1 Noise sources in atomic force microscopy .... 182
7.3.1.1 Static noise determination .... 183
7.3.1.2 Dynamic noise determination .... 183
7.3.1.3 Scanner xy noise determination .... 183
7.3.2 Some common artefacts in AFM imaging .... 185
7.3.2.1 Tip size and shape .... 185
7.3.2.2 Contaminated tips .... 186
7.3.2.3 Other common artefacts .... 186
7.3.3 Determining the coordinate system of an atomic force microscope .... 186
7.3.4 Traceability of atomic force microscopy .... 187
7.3.4.1 Calibration of AFMs .... 188
7.3.5 Force measurement with AFMs .... 189
7.3.6 AFM cantilever calibration .... 191
7.3.7 Inter- and intra-molecular force measurement using AFM .... 193
7.3.7.1 Tip functionalisation .... 195
7.3.8 Tip sample distance measurement .... 196
7.3.9 Challenges and artefacts in AFM force measurements .... 197
7.4 Scanning probe microscopy of nanoparticles .... 198
7.5 Electron microscopy .... 199
7.5.1 Scanning electron microscopy .... 199
7.5.1.1 Choice of calibration specimen for scanning electron microscopy .... 200
7.5.2 Transmission electron microscopy .... 201
7.5.3 Traceability and calibration of transmission electron microscopes .... 202
7.5.3.1 Choice of calibration specimen .... 203
7.5.3.2 Linear calibration .... 203
7.5.3.3 Localised calibration .... 203
7.5.3.4 Reference graticule .... 204
7.5.4 Electron microscopy of nanoparticles .... 204
7.6 Other particle beam microscopy techniques .... 204
7.7 References .... 207
CHAPTER 8 Surface topography characterisation .... 211
8.1 Introduction to surface topography characterisation .... 211
8.2 Surface profile characterisation .... 212
8.2.1 Evaluation length .... 213
8.2.2 Total traverse length .... 213
8.2.3 Profile filtering .... 213
8.2.3.1 Primary profile .... 215
8.2.3.2 Roughness profile .... 215
8.2.3.3 Waviness profile .... 216
8.2.4 Default values for profile characterisation .... 216
8.2.5 Profile characterisation and parameters .... 216
8.2.5.1 Profile parameter symbols .... 217
8.2.5.2 Profile parameter ambiguities .... 217
8.2.6 Amplitude profile parameters (peak to valley) .... 218
8.2.6.1 Maximum profile peak height, Rp .... 218
8.2.6.2 Maximum profile valley depth, Rv .... 218
8.2.6.3 Maximum height of the profile, Rz .... 218
8.2.6.4 Mean height of the profile elements, Rc .... 219
8.2.6.5 Total height of the surface, Rt .... 219
8.2.7 Amplitude parameters (average of ordinates) .... 219
8.2.7.1 Arithmetical mean deviation of the assessed profile, Ra .... 219
8.2.7.2 The root mean square deviation of the assessed profile, Rq .... 221
8.2.7.3 Skewness of the assessed profile, Rsk .... 222
8.2.7.4 Kurtosis of the assessed profile, Rku .... 223
8.2.8 Spacing parameters .... 224
8.2.8.1 Mean width of the profile elements, RSm .... 224
8.2.9 Curves and related parameters .... 224
8.2.9.1 Material ratio of the profile .... 224
8.2.9.2 Material ratio curve .... 225
8.2.9.3 Profile section height difference, Rdc .... 226
8.2.9.4 Relative material ratio, Rmr .... 226
8.2.9.5 Profile height amplitude curve .... 226
8.2.10 Profile specification standards .... 227
8.3 Areal surface texture characterisation .... 229
8.3.1 Scale-limited surface .... 229
8.3.2 Areal filtering .... 230
8.3.3 Areal specification standards .... 232
8.3.4 Unified coordinate system for surface texture and form .... 234
8.3.5 Areal parameters .... 235
8.3.6 Field parameters .... 235
8.3.6.1 Areal height parameters .... 236
8.3.6.1.1 The root mean square value of the ordinates, Sq .... 236
8.3.6.1.2 The arithmetic mean of the absolute height, Sa .... 236
8.3.6.1.3 Skewness of topography height distribution, Ssk .... 236
8.3.6.1.4 Kurtosis of topography height distribution, Sku .... 236
8.3.6.1.5 The maximum surface peak height, Sp .... 237
8.3.6.1.6 The maximum pit height of the surface, Sv .... 237
8.3.6.1.7 Maximum height of the surface, Sz .... 237
8.3.6.2 Areal spacing parameters .... 237
8.3.6.2.1 The auto-correlation length, Sal .... 237
8.3.6.2.2 Texture aspect ratio of the surface, Str .... 238
8.3.6.3 Areal hybrid parameters .... 238
8.3.6.3.1 Root mean square gradient of the scale-limited surface, Sdq .... 238
8.3.6.3.2 Developed interfacial area ratio of the scale-limited surface, Sdr .... 239
8.3.6.4 Functions and related parameters .... 239
8.3.6.4.1 Areal material ratio of the scale-limited surface .... 239
8.3.6.4.2 Areal material ratio of the scale-limited surface, Smc(c) .... 239
8.3.6.4.3 Inverse areal material ratio of the scale-limited surface, Sdc(mr) .... 239
8.3.6.4.4 Areal parameters for stratified functional surfaces of scale-limited surfaces .... 240
8.3.6.4.5 Void volume, Vv(mr) .... 241
8.3.6.4.6 Material volume, Vm(mr) .... 241
8.3.6.4.7 Peak extreme height, Sxp .... 241
8.3.6.4.8 Gradient density function .... 242
8.3.6.5 Miscellaneous parameters .... 242
8.3.6.5.1 Texture direction of the scale-limited surface, Std .... 242
8.3.7 Feature characterisation .... 243
8.3.7.1 Step 1 – Texture feature selection .... 243
8.3.7.2 Step 2 – Segmentation .... 243
8.3.7.2.1 Change tree .... 245
8.3.7.3 Step 3 – Significant features .... 248
8.3.7.4 Step 4 – Selection of feature attributes .... 248
8.3.7.5 Step 5 – Quantification of feature attribute statistics .... 249
8.3.7.6 Feature parameters .... 249
8.3.7.6.1 Density of peaks, Spd .... 250
8.3.7.6.2 Arithmetic mean peak curvature, Spc .... 250
8.3.7.6.3 Ten point height of surface, S10z .... 250
8.3.7.6.4 Five point peak height, S5p .... 250
8.3.7.6.5 Five point pit height, S5v .... 250
8.3.7.6.6 Closed dale area, Sda(c) .... 250
8.3.7.6.7 Closed hill area, Sha(c) .... 251
8.3.7.6.8 Closed dale volume, Sdc(c) .... 251
8.3.7.6.9 Closed hill volume, Shv(c) .... 251
8.4 Fractal methods .... 251
8.4.1 Linear fractal methods .... 252
8.4.2 Areal fractal analysis .... 255
8.4.2.1 Volume-scale analysis .... 255
8.4.2.2 Area-scale analysis .... 255
8.5 Comparison of profile and areal characterisation .... 257
8.6 References .... 258
CHAPTER 9 Coordinate metrology .... 263
9.1 Introduction to CMMs .... 263
9.1.1 CMM probing systems .... 266
9.1.2 CMM software .... 266
9.1.3 CMM alignment .... 267
9.1.4 CMMs and CAD .... 267
9.1.5 Prismatic against freeform .... 268
9.1.6 Other types of CMM .... 268
9.2 Sources of error on CMMs .... 268
9.3 Traceability, calibration and performance verification of CMMs .... 269
9.3.1 Traceability of CMMs .... 270
9.4 Miniature CMMs .... 272
9.4.1 Stand-alone miniature CMMs .... 273
9.4.1.1 A linescale-based miniature CMM .... 273
9.4.1.2 A laser interferometer-based miniature CMM .... 274
9.5 Miniature CMM probes .... 275
9.6 Calibration of miniature CMMs .... 281
9.6.1 Calibration of laser interferometer-based miniature CMMs .... 283
9.6.2 Calibration of linescale-based miniature CMMs .... 283
9.7 References .... 285
CHAPTER 10 Mass and force measurement .... 289
10.1 Traceability of traditional mass measurement .... 289
10.1.1 Manufacture of the Kilogram weight and the original copies .... 290
10.1.2 Surface texture of mass standards .... 291
10.1.3 Dissemination of the kilogram .... 291
10.1.4 Post nettoyage-lavage stability .... 292
10.1.5 Limitations of the current definition of the kilogram .... 292
10.1.6 Investigations into an alternative definition of the kilogram .... 293
10.1.6.1 The Watt balance approach .... 294
10.1.6.2 The Avogadro approach .... 294
10.1.6.3 The ion accumulation approach .... 295
10.1.6.4 Levitated superconductor approach .... 295
10.1.7 Mass comparator technology .... 295
10.1.7.1 The modern two-pan mechanical balance .... 296
10.1.7.2 Electronic balances .... 296
10.2 Low-mass measurement .... 297
10.2.1 Weighing by sub-division .... 297
10.3 Low-force measurement .... 298
10.3.1 Relative magnitude of low forces .... 298
10.3.2 Traceability of low-force measurements .... 298
10.3.3 Primary low-force balances .... 299
10.3.4 Low-force transfer artefacts .... 301
10.3.4.1 Deadweight force production .... 301
10.3.4.2 Elastic element methods .... 301
10.3.4.3 Miniature electrostatic balance methods .... 304
10.3.4.4 Resonant methods .... 304
10.3.4.5 Further methods and summary .... 306
10.4 References .... 308
APPENDIX A .... 311
APPENDIX B .... 315
INDEX .... 317
Acknowledgements

Many people have helped me to put this, my first book, together. The work has involved some re-arrangements in my personal life and I thank my loving partner, Nikki, for putting up with this (especially with me insisting on having the laptop in the living room on a permanent basis).

Above all I would like to express thanks to Dr Han Haitjema (Mitutoyo Research Centre Europe, The Netherlands) for his critical comments on most of the chapter drafts and for his never-ending good humour and a sound basis in reality! Also, many external folk have contributed and for this they have my eternal gratitude and friendship. In no particular order, these include: John Hannaford (himself), Prof Derek Chetwynd (University of Warwick, UK), Dr Andreas Freise (University of Birmingham, UK), Prof Liam Blunt, Dr Leigh Brown and Prof Xiangqian (Jane) Jiang (University of Huddersfield, UK), Dr Mike Conroy, Mr Daniel Mansfield, Mr Darian Mauger and Prof Paul Scott (Taylor Hobson, UK), Dr Roy Blunt (IQE, UK), Dr Jon Petzing (Loughborough University, UK), Dr Georg Wiora (Nanofocus, Germany), Dr Franz Helmli (Alicona, Austria), Dr Lars Lindstrand (Scantron, UK), Prof Chris Brown (Worcester Polytechnic Institute, USA), Prof Paul Shore (Cranfield University, UK), Dr James Johnstone (NanoKTN, UK), Dr Roland Roth (Zeiss, Germany), Prof Gert Jäger (Ilmenau University of Technology, Germany), Dr Ted Vorburger (NIST, USA), Dr Ernst Treffers (Xpress Precision Engineering, Netherlands), Dr Marijn van Veghel (NMi-VSL, Netherlands), Dr Chris King (University College London, UK), Dr Tristan Colomb (Lyncée Tec, Switzerland), and Dr Katsuhiro Miura and Mr Atsuko Nose (Mitaka Kohki Co, Japan).

Many folk at NPL have supported me and contributed to the contents of the book. These include: Mr James Claverley, Dr Alex Cuenat, Dr Stuart Davidson, Mr David Flack, Prof Mark Gee, Mr Claudiu Giusca, Dr Peter Harris, Mr Chris Jones, Mr Andy Knott, Dr Andrew Lewis, Dr Simon Reilly and Dr Andrew Yacoot. Especial thanks are due to Mr Julian Game for all his magical work with the superb figures.
I must also thank Dr Nigel Hollingsworth (Key Technologies Innovations International) for all his support during the writing of the book. This book is dedicated to the late Prof Albert Franks, who was my first manager at NPL and gave me a great deal of inspiration for this field of research. Thank you Albert. I wish to express thanks to my parents and sisters; they are, after all, the ones I wish to please most. Also I would like to mention my son Marcus, whom I love dearly.
Figures

Figure 2.1 An ancient Egyptian cubit (a standard of mass is also shown) .... 6
Figure 2.2 Metal bar length standards (gauge blocks and length bars) .... 8
Figure 2.3 An iodine-stabilised helium-neon laser based at NPL, UK .... 10
Figure 2.4 Kilogram 18 held at the NPL, UK .... 11
Figure 2.5 Energy levels in the He-Ne gas laser for 632.8 nm radiation .... 24
Figure 2.6 Schema of an iodine-stabilised He-Ne laser .... 27
Figure 2.7 Frequency and intensity profiles in a two-mode He-Ne laser .... 27
Figure 2.8 Magnetic splitting of neon – g is the Landé g factor, μ the Bohr magneton .... 29
Figure 2.9 Calibration scheme for Zeeman-stabilised laser .... 30
Figure 3.1 (a) A Type I Kelvin clamp, (b) a Type II Kelvin clamp .... 38
Figure 3.2 A single degree of freedom motion device .... 39
Figure 3.3 Effects of Abbe error on an optical length measurement .... 40
Figure 3.4 Mutual compression of a sphere on a plane .... 42
Figure 3.5 Kevin Lindsey with the Tetraform grinding machine .... 47
Figure 3.6 Measured vertical amplitude spectrum on a 'noisy' (continuous line) and a 'quiet' (dotted line) site [29] .... 48
Figure 3.7 Damped transmissibility, T, as a function of frequency ratio (ω/ω0) .... 50
Figure 4.1 Definition of the length of a gauge block .... 57
Figure 4.2 A typical gauge block wrung to a platen .... 58
Figure 4.3 Amplitude division in a Michelson/Twyman-Green interferometer .... 60
Figure 4.4 Intensity as a function of phase for different visibility .... 61
Figure 4.5 Intensity distribution for a real light source .... 62
Figure 4.6 Illustration of the effect of a limited coherence length for different sources .... 63
Figure 4.7 Schema of the original Michelson interferometer .... 64
Figure 4.8 Schema of a Twyman-Green interferometer .... 65
Figure 4.9 The Fizeau interferometer .... 66
Figure 4.10 Typical interference pattern of a flat surface in a Fizeau interferometer .... 67
Figure 4.11 Schema of a Jamin interferometer .... 69
Figure 4.12 Schema of a Mach-Zehnder interferometer .... 69
Figure 4.13 Schematic of the Fabry-Pérot interferometer .... 70
Figure 4.14 Transmittance as a function of distance, L, for various reflectances .... 71
Figure 4.15 Possible definition of a mechanical gauge block length .... 72
Figure 4.16 Schema of a gauge block interferometer containing a gauge block .... 73
Figure 4.17 Theoretical interference pattern of a gauge block on a platen .... 74
Figure 4.18 Method for determining a surface and phase change correction .... 79
Figure 5.1 Homodyne interferometer configuration .... 87
Figure 5.2 Heterodyne interferometer configuration .... 88
Figure 5.3 Optical arrangement to double pass a Michelson interferometer .... 90
Figure 5.4 Schema of a differential plane mirror interferometer .... 91
Figure 5.5 Cosine error with an interferometer .... 94
Figure 5.6 Schema of an angular interferometer .... 98
Figure 5.7 A typical capacitance sensor set-up .... 99
Figure 5.8 Schematic of an LVDT probe .... 101
Figure 5.9 Error characteristic of an LVDT probe .... 102
Figure 5.10 Schema of an optical encoder .... 103
Figure 5.11 Total internal reflectance in an optical fibre .... 104
Figure 5.12 End view of bifurcated optical fibre sensors, (a) hemispherical, (b) random and (c) fibre pair .... 105
Figure 5.13 Bifurcated fibre optic sensor components .... 106
Figure 5.14 Bifurcated fibre optic sensor response curve .... 106
Figure 5.15 Schema of an X-ray interferometer .... 109
Figure 5.16 Schema of a combined optical and X-ray interferometer .... 110
Figure 6.1 Amplitude-wavelength space depicting the operating regimes for common instruments .... 117
Figure 6.2 The original Talysurf instrument (courtesy of Taylor Hobson) .... 119
Figure 6.3 Example of the result of a profile measurement .... 120
Figure 6.4 Profiles showing the same Ra with differing height distributions .... 122
Figure 6.5 A profile taken from a 3D measurement shows the possible ambiguity of 2D measurement and characterisation .... 122
Figure 6.6 Schema of a typical stylus instrument .... 123
Figure 6.7 Damage to a brass surface due to a high stylus force .... 124
Figure 6.8 Numerical aperture of a microscope objective lens .... 128
Figure 6.9 Example of the batwing effect when measuring a step using a coherence scanning interferometer .... 131
Figure 6.10 Over-estimation of surface roughness due to multiple scattering in vee-grooves .... 132
Figure 6.11 Principle of a laser triangulation sensor .... 133
Figure 6.12 Confocal set-up with (a) object in focus and (b) object out of focus .... 135
Figure 6.13 Demonstration of the confocal effect on a piece of paper: (a) microscopic bright field image, (b) confocal image. The contrast of both images has been enhanced for a better visualisation .... 136
Figure 6.14 Schematic representation of a confocal curve. If the surface is in focus (position 0) the intensity has a maximum .... 136
Figure 6.15 Schema of a Nipkow disk. The pinholes rotate through the intermediate image and sample the whole area within one revolution .... 137
Figure 6.16 Schema of a confocal microscope using a Nipkow disk .... 137
Figure 6.17 Chromatic confocal depth discrimination .... 139
Figure 6.18 Schema of a point autofocus instrument .... 140
Figure 6.19 Principle of point autofocus operation .... 141
Figure 6.20 Schema of a focus variation instrument .... 142
Figure 6.21 Schema of a phase-shifting interferometer .... 144
Figure 6.22 Schematic diagram of a Mirau objective .... 145
Figure 6.23 Schematic diagram of a Linnik objective .... 146
Figure 6.24 Schematic diagram of DHM with beam-splitter (BS), mirrors (M), condenser (C), microscope objective (MO) and lens in the reference arm (RL) used to perform a reference wave curvature similar to the object wave curvature (some DHM use the same MO in the object wave) .... 148
Figure 6.25 Schema of a coherence scanning interferometer .... 150
Figure 6.26 Schematic of how to build up an interferogram on a surface using CSI .... 151
Figure 6.27 Integrating sphere for measuring TIS .... 154
Figure 6.28 Analysis of a type A1 calibration artefact .... 158
Figure 6.29 Type ER1 – two parallel groove standard .... 160
Figure 6.30 Type ER2 – rectangular groove standard .... 160
Figure 6.31 Type ER3 – circular groove standard .... 161
Figure 6.32 Type ES – sphere/plane measurement standard .... 162
Figure 6.33 Type CS – contour standard .... 163
Figure 6.34 Type CG1 – X/Y crossed grating .... 163
Figure 6.35 Type CG2 – X/Y/Z grating standard .... 164
Figure 6.36 Results of a comparison of different instruments used to measure a sinusoidal sample .... 166
Figure 7.1 Schematic image of a typical scanning probe system, in this case an AFM .... 179
Figure 7.2 Block diagram of a typical SPM .... 182
Figure 7.3 Noise results from an AFM. The upper image shows an example of a static noise investigation on a bare silicon wafer. The noise-equivalent roughness is Rq = 0.013 nm. For comparison, the lower image shows the wafer surface: scan size 1 μm by 1 μm, Rq = 0.081 nm .... 184
Figure 7.4 Schematic of the imaging mechanism of spherical particle imaging by AFM. The geometry of the AFM tip prevents 'true' imaging of the particle as the apex of the tip is not in contact with the particle all the time and the final image is a combination of the tip and particle shape. Accurate sizing of the nanoparticle can only be obtained from the height measurement .... 185
Figure 7.5 Definition of the pitch of lateral artefacts: (a) 1D and (b) 2D .... 187
Figure 7.6 Schematic of a force curve (a) and force-distance curve (b) .... 190
Figure 7.7 Schematic illustration of the strong capillary force that tends to drive the tip and sample together during imaging in air .... 194
Figure 7.8 (a) TEM image of nominal 30 nm diameter gold nanoparticles; (b) using threshold to identify the individual particles; (c) histogram of the measured diameters .... 205
Figure 7.9 TEM image of 150-nm-diameter latex particles. This image highlights the drawbacks of size measurement using TEM or SEM. The first is that a white 'halo' surrounds the particle. Should the halo area be included in the size measurement? If so there will be a difficulty in determining the threshold level. The second is that the particles are aggregated, again making sizing difficult .... 206
Figure 8.1 Separation of surface texture into roughness, waviness and profile .... 214
Figure 8.2 Primary (top), waviness (middle) and roughness (bottom) profiles .... 215
Figure 8.3 Maximum profile peak height, example of roughness profile .... 218
Figure 8.4 Maximum profile valley depth, example of roughness profile .... 219
Figure 8.5 Height of profile elements, example of roughness profile .... 220
Figure 8.6 The derivation of Ra .... 221
Figure 8.7 Profiles with positive (top), zero (middle) and negative (bottom) values of Rsk (reprinted from ASME B46.1-1995, by permission of the American Society of Mechanical Engineers. All rights reserved) .... 222
Figure 8.8 Profiles with low (top) and high (bottom) values of Rku (reprinted from ASME B46.1-1995, by permission of the American Society of Mechanical Engineers. All rights reserved) .... 223
Figure 8.9 Width of profile elements .... 224
Figure 8.10 Material ratio curve .... 225
Figure 8.11 Profile section level separation .... 226
Figure 8.12 Profile height amplitude distribution curve .... 227
Figure 8.13 Amplitude distribution curve .... 227
Figure 8.14 Epitaxial wafer surface topographies in different transmission bands: (a) the raw measured surface; (b) roughness surface (short scale SL-surface), S-filter = 0.36 mm (sampling space), L-filter = 8 mm; (c) wavy surface (middle scale SF-surface), S-filter = 8 mm, F-operator; and (d) form error surface (long scale form surface), F-operator .... 231
Figure 8.15 Areal material ratio curve .... 240
Figure 8.16 Inverse areal material ratio curve .... 240
Figure 8.17 Void volume and material volume parameters .... 242
Figure 8.18 Example simulated surface .... 245
Figure 8.19 Contour map of Figure 8.18 showing critical lines and points .... 245
Figure 8.20 Full change tree for Figure 8.19 .... 246
Figure 8.21 Dale change tree for Figure 8.19 .... 247
Figure 8.22 Hill change tree for Figure 8.19 .... 247
Figure 8.23 Line segment tiling on a profile .... 253
Figure 8.24 Inclination on a profile .... 254
Figure 8.25 Tiling exercises for area-scale analysis .... 256
Figure 9.1 A typical moving bridge CMM .... 264
Figure 9.2 CMM configurations .... 265
Figure 9.3 Illustration of the effect of different measurement strategies on the diameter and location of a circle. The measurement points are indicated in red; the calculated circles from the three sets are in black and the centres are indicated in blue .... 271
Figure 9.9
Figure 9.10 Figure 9.11
Figure 9.12 Figure 10.1
Figure 10.2 Figure 10.3
Inclination on a profile ......................................................... 254 Tiling exercises for area-scale analysis ................................. 256 A typical moving bridge CMM ............................................. 264 CMM configurations............................................................. 265 Illustration of the effect of different measurement strategies on the diameter and location of a circle. The measurement points are indicated in red; the calculated circles from the three sets are in black and the centres are indicated in blue .................................................................... 271 Schema of the kinematic design of the Zeiss F25 CMM .... 273 Schema of the NMM ............................................................ 275 Schema of the NMM measurement coordinate measuring principle .............................................................. 276 Silicon micro-scale probe designed by [34], produced by chemical etching and vapour deposition .............................. 277 The fibre probe developed by PTB. Notice the second microsphere on the shaft of the fibre; this gives accurate measurement of variations in sample ‘height’ (z axis) [38] ............................................................................ 278 A vibrating fibre probe. The vibrating end forms a ‘virtual’ tip that will detect contact with the measurement surface while imparting very little force [41] .................................... 279 Vertical AFM probe for MEMS sidewall investigation [44].... 280 Miniature CMM performance verification artefacts. (a) METAS miniature ball bar, (b) PTB ball plate, (c) PTB calotte plate, (d) PTB calotte cube, (e) Zeiss halfsphere plate ....................................................................................... 282 Straightness (xTx) measurement of the F25 with the CAA correction enabled ................................................................. 284 Comparative plot of described surface interaction forces, based on the following values: R ¼ 2 mm; U ¼ 0.5 V; g ¼ 72 mJ$m2; H ¼ 1018 J; e ¼ r ¼ 100 nm. Physical constants take their standard values: e0 ¼ 8.854 1012 C2$N1$m2; h ¼ 1.055 1034 m2$kg$s1 and c ¼ 3 108 m$s1 ................................................................ 299 Schema of the NPL low-force balance .................................. 300 Experimental prototype reference cantilever array – plan view ............................................................................... 302
Figure 10.4 Images of the NPL C-MARS device, with detail of its fiducial markings; the 10 μm oxide squares form a binary numbering system along the axis of symmetry .... 303
Figure 10.5 Computer model of the NPL Electrical Nanobalance device. The area shown is 980 μm × 560 μm. Dimensions perpendicular to the plane have been expanded by a factor of twenty for clarity .... 305
Figure 10.6 Schema of a resonant force sensor – the nanoguitar .... 306
Tables
Table 3.1 Sources of seismic vibration and corresponding frequencies [27] .... 48
Table 3.2 Possible sources of very-low-frequency vibration .... 49
Table 4.1 Gauge block classes according to ISO 3650 [5] .... 58
Table 4.2 The quality factor and coherence length of some light sources .... 63
Table 4.3 Effect of parameters on refractive index: RH is relative humidity .... 78
Table 6.1 Minimum distance between features for different objectives .... 129
Table 7.1 Overview of guidance deviations, standards to be used and calibration measurements [12] .... 189
Table 7.2 Examples of surface forces commonly encountered in AFM measurement .... 193
Table 7.3 Various substances that have been linked to AFM tips or cantilevers .... 195
Table 8.1 Relationship between cut-off wavelength, tip radius (rtip) and maximum sampling spacing [12] .... 216
Table 8.2 Relationships between nesting index value, S-filter nesting index, sampling distance and ball radius .... 233
Table 8.3 Types of scale-limited features .... 244
Table 8.4 Criteria of size for segmentation .... 244
Table 8.5 Methods for determining significant features .... 248
Table 8.6 Feature attributes .... 249
Table 8.7 Attribute statistics .... 249
Table 10.1 Summary of surface interaction force equations. In these equations F is a force component, U the work function difference between the materials, D the sphere-flat separation, γ the free surface energies at state boundaries, H the Hamaker constant and θ the contact angle of in-interface liquid on the opposing solid surfaces. In the capillary force the step function u(.) describes the breaking separation; e is the liquid layer thickness and r the radius of meniscus curvature in the gap .... 298
Table 10.2 Advantages and disadvantages of low-force production and measurement methods .... 307
CHAPTER 1
Introduction to metrology for micro- and nanotechnology

There are many stories of wonderful new machines and changes in lifestyle that will be brought about by the commercial exploitation of micro- and nanotechnology (MNT) (see, for example, references 1-3). However, despite significant increases in funding for research into MNT across the globe, commercial success to date has not been as high as predicted. At the smaller of the two scales, most work in nanotechnology is still very much at the research stage. However, in the more mature world of microsystems technology (MST) there is already a significant industry in its own right. In fact, the MST industry has now matured to such an extent that it is undergoing dramatic change and restructuring, along the lines followed previously by conventional engineering and macro-scale technology. Despite overall steady growth in the total market, particular sectors and individual companies are experiencing difficult times; acquisitions, mergers and even bankruptcies are becoming commonplace.

It is asserted that what the MNT industry needs is a standards infrastructure that will allow fabrication plants to interchange parts, packaging and design rules; effectively the MNT equivalent of macro-scale nuts and bolts or house bricks. This will not stifle innovation; on the contrary, it will allow designers and inventors more time to consider the innovative aspects of their work, rather than having to waste time 're-inventing the wheel'. The results of recent government reviews [3] and surveys in Europe [4] and the USA [5] clearly indicate that standardization is the major issue hampering commercial success of the MST industry.

This book considers a subset of the metrology that will be required in the near future to support a standards infrastructure for MNT. If interchangeability of parts is to become a reality, then fabrication plants need to move away from 'in-house' or 'gold' standards, and move towards measurement standards and
techniques that are traceable to national or international realisations of the measurement units [6].

Progress in MNT is not just of interest at the academic level. There is a considerable advantage in being able to reach a sufficient number of markets with new devices and materials to be able to recover development costs. There is consequently much effort devoted not only to the development of MNT devices and materials, but also to maximising market uptake and transfer of technology from the research stage, through production, out to the commercial marketplace. In many cases, examination of the barriers preventing successful uptake of new technology reveals areas of metrology where more research is needed than is currently carried out. Also, metrology does not just allow control of production but can allow legal, ethical and safety issues [7] to be settled in a quantitative and informative manner.

There is a major thrust in standardization for MNT activities in many national and regional committees. The International Organization for Standardization (ISO) has recently set up ISO technical committee (TC) 229. The IEC has also established TC 113 to complement electrical activities. Recognising that there is an intersection between matter and radiation at the MNT level, several of the working groups are collaborations between ISO and IEC. The Joint Working Groups (JWGs) cover terminology and nomenclature (JWG1) and measurement and characterization (JWG2), and there are two ISO-only working groups on health, safety and environment (WG3) and product specifications and performance (WG4). The main work of the committees so far has been to agree common definitions for nanotechnology and to issue reviews of the handling of engineered nanomaterials in the workplace. Measurement and characterization standards are currently being developed, especially for carbon nanotube analysis. This work is also being complemented by activities in Europe that are coordinated by CEN TC 352. There are also many well-established and related ISO committees that are not exclusively MNT but cover aspects of engineering nanometrology: for example, ISO TC 213, which covers surface texture standards, and ISO TC 201, which covers many of the standardization issues for scanning probe microscopes; ISO TC 209 (cleanroom technologies) is also forming a working group (WG10) on nanotechnology considerations.
1.1 What is engineering nanometrology?

The field of engineering metrology relates to the measurement and standardization requirements for manufacturing. In the past, engineering metrology mainly covered dimensional metrology, i.e. the science and
technology of length measurement (see [8,9]). Modern engineering metrology usually encompasses dimensional plus mass and related quantity metrology. Some authors have also incorporated materials metrology into the fold [10] and this is an important inclusion. However, this book will concentrate on the more traditional dimensional and mass areas. This choice is partly to keep the scope of the book at a manageable level and partly because those are the areas of research that the author has been active in.

So, engineering nanometrology is traditional engineering metrology at the MNT scale. Note that whilst nanotechnology is the science and technology of structures varying in size from around 0.1 nm to 100 nm, nanometrology does not only cover this size range. Nanometrology relates to measurements with accuracies or uncertainties in this size range (and smaller!). For example, one may be measuring the form of a 1 m telescope mirror segment to an accuracy of 10 nm.

It is important to realise that there are many areas of MNT measurement that are equally as important as dimensional and mass measurements. Other areas not included in this book are measurements of electrical, chemical and biological quantities, and the wealth of measurements for material properties, including the properties of particles. There are also areas of metrology that could well be considered engineering nanometrology but have not been covered by this book. These include the measurement of roundness [11], thin films (primarily thickness) [12,13], the dynamic measurement of vibrating structures [14] and tomography measurements (primarily x-ray computed tomography [15] and optical coherence tomography [16]). Once again, the choice of contents has been dubiously justified above!
1.2 The contents of this book This book is divided into ten chapters. Chapter 2 gives an introduction to measurement, including short histories of, and the current unit definitions for, length, angle, mass and force. Basic metrological terminology is introduced, including the highly important topic of measurement uncertainty. The laser is presented in chapter 2, as it is a very significant element of many of the instruments described in this book. Chapter 3 reviews the most important concepts needed when designing or analysing precision instruments. Chapter 4 covers the measurement of length using optical interferometry, and discusses the concepts behind interferometry, including many error sources. Chapter 5 reviews the area of displacement measurement and presents most modern forms of displacement sensor. The field of surface texture measurement is covered in the next
three chapters, as it is a very large and significant topic. Chapter 6 covers stylus and optical surface measuring instruments, and chapter 7 covers scanning probe and particle beam instruments. Both chapters 6 and 7 include instrument descriptions, limitations and calibration methods. Chapter 8 presents methods for characterizing surfaces, including both profile and areal techniques. Chapter 9 introduces the area of coordinate metrology and reviews the latest developments with micro-coordinate measuring machines. Lastly, chapter 10 presents a review of the latest advances in low mass and force metrology.
1.3 References
[1] Storrs Hall J 2005 Nanofuture: what's next for nanotechnology (Prometheus Books)
[2] Mulhall D 2002 Our molecular future: how nanotechnology, robotics, genetics and artificial intelligence will transform our future (Prometheus Books)
[3] 2004 Nanoscience and nanotechnologies: opportunities and uncertainties (Royal Society and Royal Academy of Engineering)
[4] Singleton L, Leach R K, Cui Z 2003 Analysis of the MEMSTAND survey on standardisation for microsystems technology Proc. Int. Seminar MEMSTAND, Barcelona, Spain, 24-26 Feb. 11-31
[5] MEMS Industry Group Report: "Focus on Fabrication," Feb. 2003
[6] Postek M T, Lyons K 2007 Instrumentation, metrology and standards: key elements for the future of nanotechnology Proc. SPIE 6648 664802
[7] Hunt G, Mehta M 2008 Nanotechnology: risk, ethics and law (Earthscan Ltd)
[8] Hume K J 1967 Engineering metrology (Macdonald & Co.) 2nd edition
[9] Thomas G G 1974 Engineering metrology (Newnes-Butterworth: London)
[10] Anthony D M 1986 Engineering metrology (materials engineering practice) (Pergamon)
[11] Smith G T 2002 Industrial metrology: surfaces and roundness (Springer)
[12] Tompkins H G, Irene E A 2004 Handbook of ellipsometry (Springer)
[13] Yacoot A, Leach R K 2007 Review of x-ray and optical thin film measurement methods and transfer artefacts NPL Report DEPC-EM 13
[14] Lobontiu N 2007 Dynamics of microelectromechanical systems (Springer)
[15] Withers P J 2007 X-ray nanotomography Materials Today 10 26-34
[16] Brezinski M E 2006 Optical coherence tomography: principles and applications (Academic Press)
CHAPTER 2
Some basics of measurement 2.1 Introduction to measurement Over the last couple of thousand years significant advances in technology can be traced to improved measurements. Whether we are admiring the engineering feat represented by the Egyptian pyramids, or the fact that in the twentieth century humans walked on the moon, we should appreciate that this progress is due in no small part to the evolution of measurement. It is sobering to realise that tens of thousands of people were involved in both operations and that these people were working in many different places producing various components that had to be brought together – a large part of the technology that enabled this was the measurement techniques and standards that were used [1]. The Egyptians used a royal cubit as the standard of length measurement (it was the distance from Pharaoh's elbow to his fingertips – see Figure 2.1), while the Apollo space programme ultimately relied on the definition of the metre in terms of the wavelength of krypton 86 radiation. In Egypt the standards were kept in temples and the priests were beheaded if the standards were not recalibrated on time. Nowadays there are worldwide systems of accrediting laboratories, and laboratories are threatened with losing their accreditation if the working standards are not recalibrated on time. Primary standards are kept in national measurement institutes that have a great deal of status and national pride. The Egyptians appreciated that, provided that all four sides of a square are the same length and the two diagonals are equal, then the interior angles will all be the same – 90°. They were able to compare the two diagonals and look for small differences between the two measurements to determine how square the base of the pyramid was. Humans have walked on the moon because a few brave people were prepared to sit on top of a collection of ten thousand manufactured parts all
FIGURE 2.1 An ancient Egyptian cubit (a standard of mass is also shown).
built and assembled by the lowest bidder, and finally filled with hundreds of tons of explosive hydrogen and oxygen propellant. A principal reason that it all operated as intended was that the individual components were manufactured to exacting tolerances that permitted final assembly and operation as intended. The phrase ‘mass production’ these days brings visions of hundreds of cars rolling off a production line every day. From Henry Ford in the 1920s through to the modern car plants such as BMW and Honda, the key to this approach is to have tiers of suppliers and sub-contractors all sending the right parts to the next higher tier and finally to the assembly line. The whole manufacture and assembly process is enabled by the vital traceable measurements that take place along the route. Modern manufacturing often involves the miniaturization of products and components. This ‘nanotechnology revolution’ has meant that not only have the parts shrunk to micrometres and nanometres, but tolerances have too. The dimensional and mass measurements that are required to ensure that these tiny parts fit together, or ensure that larger precision parts are fit for purpose, are the subject of this book.
2.2 Units of measurement and the SI The language of measurement that is universally used in science and engineering is the Système International d'Unités (SI) [2]. The SI embodies the
modern metric system of measurement and was established in 1960 by the 11th Conférence Générale des Poids et Mesures (CGPM). The CGPM is the international body that ensures wide dissemination of the SI and modifies the SI as necessary to reflect the latest advances in science and technology. There are a number of international organizations, treaties and laboratories that form the scientific and legal infrastructure of measurement (see [3] for details). Most technologically advanced nations have national measurement institutes (NMIs) that are responsible for ensuring that measurements comply with the SI and ensure traceability (see section 2.7). Examples of NMIs include the National Physical Laboratory (NPL, UK), Physikalisch-Technische Bundesanstalt (PTB, Germany), National Metrology Institute of Japan (NMIJ, Japan) and the National Institute of Standards and Technology (NIST, USA). The web sites of the larger NMIs all have a wealth of information on measurement and related topics. The SI is principally based on a system of base quantities, each associated with a unit and a realization. A unit is defined as a particular physical quantity, defined and adopted by convention, with which other particular quantities of the same kind are compared to express their value. The realization of a unit is the physical embodiment of that unit, which is usually performed at an NMI. The seven base quantities (with their associated units in parentheses) are: time (second), length (metre), mass (kilogram), electric current (ampere), thermodynamic temperature (kelvin), amount of substance (mole) and luminous intensity (candela). Engineering metrology is mainly concerned with length and mass, and these two base quantities will be given some attention here. Force and angle are also important quantities in engineering metrology and will be discussed in this chapter. The other base quantities, and their associated units and realizations, are presented in Appendix 1. In addition to the seven base quantities there are a number of derived quantities that are essentially combinations of the base units. Some examples include acceleration (unit: metre per second squared), density (unit: kilogram per cubic metre) and magnetic field strength (unit: ampere per metre). There are also a number of derived quantities that have units with special names. Some examples include frequency (unit: hertz, or cycles per second), energy (unit: joule, or kilogram metre squared per second squared) and electric charge (unit: coulomb, or the product of ampere and second). Further examples of derived units are presented in Appendix 2.
2.3 Length The definition and measurement of length has taken many forms throughout human history (see [4,5] for more thorough historical overviews).
The metre was first defined in 1791, as 'one ten millionth of the polar quadrant of the earth passing through Paris'. The team of surveyors that measured the part of the polar quadrant between Dunkirk and Barcelona took six years to complete the task. This definition of the metre was realized practically with a bar (or end gauge) of platinum in 1799. This illustrates the trade-offs between physical stability and reproducibility, and the practical realizability of standards. Of course the earth's quadrant is far more stable than a human's arm length, but to realize this in a standard is much more tedious. Some years after the prototype metre was realized, some errors were found in the calculation of its length and it was found that the platinum metre was about 1 mm short. However, it was decided to keep the material artefact for practical reasons. Another issue that has continued until today is the preferred form of a material length standard: whether to use an end standard (see section 4.2 and Figure 2.2) with two flat faces that define a distance, or a line standard where two lines engraved in a material define a length. In 1889 the platinum metre was replaced by a platinum-iridium line standard, the so-called X-metre, that kept the same defined distance as well as possible. The X-metre was used until 1960 [6], when the metre was defined as: the metre is the length equal to 1 650 763.73 wavelengths in vacuum of the radiation corresponding to the transition between the levels 2p10 and 5d5 of the krypton 86 atom
FIGURE 2.2 Metal bar length standards (gauge blocks and length bars).
This redefinition was possible because of the developments in interferometry and the sharp spectral line of the krypton atom that enabled interferometry up to 1 m – with gauge blocks. Around 1910, such a re-definition was proposed, but at that time the metre could not be reproduced with a lower uncertainty than with the material artefact. In 1983, advances in the development of the laser, where many stabilization methods resulted in lasers that were more stable than the krypton spectral line, led to the need for a new definition. In the meantime, it was found that the speed of light in a vacuum is constant within all experimental limits, independent of frequency, intensity, source movement and time. Also it became possible to link optical frequencies to the time standard. This enabled a redefinition of the metre as [7]: the length of the path travelled by light in a vacuum in a time interval of 1/c of a second, where c is the speed of light given by 299 792 458 m·s⁻¹. Together with this definition, a list of reference frequencies was given, with associated uncertainties [8]. These included spectral lamps, for example. The krypton spectral line was unchanged but it received an attributed uncertainty. More convenient and precise, however, are stabilized laser systems. Such a current realization of the metre can have an uncertainty in frequency of one part in 10¹¹. Figure 2.3 shows an iodine-stabilized helium-neon laser held at NPL. This new definition was only possible because it could be realized with a chain of comparisons. As discussed, the speed of light in a vacuum is generally regarded as a universal constant of nature, therefore, making it ideal as the basis for a length standard. The speed of an electromagnetic wave is given by

c = \nu\lambda    (2.1)

where ν is the frequency and λ is the wavelength of the radiation. Therefore, length can be disseminated by measuring frequency or wavelength, usually using either time of flight measurements or interferometry (see chapter 4). Note that length can be considered to be a base quantity that is realized in a manner that is based upon the principles of quantum mechanics. The emission of electromagnetic waves from an atom (as occurs in a laser – see section 2.9) is a quantized phenomenon and not subject to change provided certain conditions are kept constant. This is a highly desirable property of a base unit definition and realization [9]. Note that the modern definition of length has become dependent on the time definition. This was proposed earlier; in the seventeenth century Christiaan Huygens proposed to define the metre as the length of a bar with
FIGURE 2.3 An iodine-stabilised helium-neon laser based at NPL, UK.
a time of oscillation of one second. However, this failed because of the variation of local acceleration due to gravity with geographic location. Most of the measurements that are described in this book are length measurements. Displacement is a change in length, surface profile is made up of height and lateral displacement, and coordinate measuring machines (CMMs, see chapter 9) measure the three-dimensional geometry of an object.
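As a simple illustration of equation (2.1), the vacuum wavelength of a frequency-stabilized laser follows directly from its measured frequency and the fixed value of c. The short sketch below is illustrative only; the frequency value is an assumption chosen to be close to that of an iodine-stabilized He-Ne laser, and this is not a dissemination procedure.

```python
# Minimal sketch: converting an optical frequency to a vacuum wavelength
# using c = nu * lambda (equation 2.1). The frequency value is illustrative,
# close to that of an iodine-stabilised He-Ne laser.

C = 299_792_458.0            # speed of light in vacuum, m/s (exact by definition)

def vacuum_wavelength(frequency_hz: float) -> float:
    """Return the vacuum wavelength (m) for a given optical frequency (Hz)."""
    return C / frequency_hz

nu = 473.612e12              # Hz, illustrative He-Ne frequency
lam = vacuum_wavelength(nu)
print(f"wavelength = {lam * 1e9:.4f} nm")   # roughly 633 nm
```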
2.4 Mass In 1790, Louis XVI of France commissioned scientists to recommend a consistent system for weights and measures. In 1791 a new system of units was recommended to the French Academy of Sciences, including a unit that was the mass of a declared volume of distilled water in vacuo at the freezing point. This unit was based on natural constants but was not reproducible enough to keep up with technological advances. Over the next hundred years this definition of a mass unit was refined and a number of weights were manufactured to have a mass equal to it. In 1879 Johnson Matthey and Co. of London successfully cast an ingot of an alloy of platinum and iridium, a highly stable material. The water definition was abandoned and the platinum-iridium weight became the standard kilogram (known as the
International Prototype of the Kilogram). In 1889 forty copies of the kilogram were commissioned and distributed to the major NMIs to be their primary standard. The UK received Kilogram 18, which is now held at NPL (see Figure 2.4). The International Prototype of the Kilogram is made of an alloy of 90% platinum and 10% iridium and is held at the Bureau International des Poids et Mesures (BIPM) in Paris, France. A thorough treatise on mass metrology is given in chapter 10. Whereas the definition of length is given in terms of fundamental physical constants, and its realization is in terms of quantum mechanical effects, mass does not have these desirable properties. All mass measurements are traced back to a macroscopic physical object. The main problem with a physical object as a base unit realization is that its mass could change due to loss of material or contamination from the surrounding environment. The International Prototype Kilogram’s mass could be slightly greater or less today than it was when it was made in 1884 but there is no way of proving this [10]. It is also possible that a physical object could be lost or damaged. For these reasons there is considerable effort worldwide to re-define mass in terms of fundamental physical constants [11,12]. The front-runners at the time of writing are the Watt balance (based on electrical measurements that
FIGURE 2.4 Kilogram 18 held at the NPL, UK.
can be realized in terms of Planck's constant and the charge on an electron [13]) and the Avogadro method (based on counting the number of atoms in a sphere of pure silicon and determining the Avogadro constant [14]); more methods are described in section 10.1.6. As with the metre, it is easy to define a standard (for example, mass as a number of atoms) but as long as it cannot be reproduced better than the current method, a re-definition, even using well-defined physical constants, does not make sense. On the MNT scale, masses can become very small and difficult to handle. This makes them difficult to manipulate, clean, and ultimately calibrate. These difficulties are discussed in the following section, which considers masses as force production mechanisms.
2.5 Force The SI unit of force, a derived unit, is the newton – one newton is defined as the force required to accelerate a mass of one kilogram at a rate of one metre per second per second. The accurate measurement of force is vital in many MNT areas, for example the force exerted by an atomic force microscope on a surface (see section 7.3.5), the thrust exerted by an ion thrust space propulsion system [15] or the surface forces that can hamper the operation of devices based on microelectromechanical systems (MEMS) [16]. Conventionally, force is measured using strain gauges, resonant structures and loadcells [17]. The calibration of such devices is carried out by comparison to a weight. If the local acceleration due to gravity is known, the downward force generated by a weight of known mass can be calculated. This is the principle behind deadweight force standard machines – the mass values of their internal weights are adjusted so that, at a specific location, they generate particular forces. At NPL, gravitational acceleration is 9.81182 m·s⁻², so a steel weight with a mass of 101.9332 kg will generate a downward force of approximately 1 kN when suspended in air. Forces in the meganewton range (the capacity of the largest deadweight machines) tend to be generated hydraulically – oil at a known pressure pushes on a piston of known size to generate a known force [18]. When measuring forces on the MNT scale, different measurement principles are applied compared to the measurement of macroscale forces. As the masses used for deadweight force standards decrease, their relative uncertainty of measurement increases. For example at NPL a 1 kg mass can be measured with a standard uncertainty of 1 µg, or 1 part in 10⁹. However, a 1 mg mass can only be measured with a standard uncertainty of, once again, 1 µg, or 1 part in 10³, a large difference in relative uncertainty. This undesired
scaling effect of mass measurements is due to the limitations of the instrumentation used and the small physical size of the masses. Such small masses are difficult to handle and attract contamination easily (typically dust particles have masses ranging from nanograms to milligrams). The limitation also arises because the dominant forces in the measurement are those other than gravitational forces. Figure 10.1 in chapter 10 shows the effects of the sort of forces that are dominant in interactions on the MNT scale. Therefore, when measuring force from around 1 mN or lower, alternative methods to mass comparison are used, for example, the deflection of a spring with a known spring constant. Chapter 10 details methods that are commonly used for measuring the forces encountered in MNT devices along with a description of endeavours around the world to ensure the traceability of such measurements.
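The deadweight principle described above reduces to F = mg, with, in practice, a small correction for air buoyancy. A minimal sketch is given below using the NPL value of g quoted in the text; the air and weight densities used for the buoyancy term are assumed nominal values and the function is illustrative, not a calibration procedure.

```python
# Minimal sketch of the deadweight principle: a weight of known mass m, in a
# known local gravitational field g, generates a downward force F = m*g.
# The optional buoyancy correction uses assumed nominal densities.

def deadweight_force(mass_kg: float, g_local: float = 9.81182,
                     rho_air: float = 1.2, rho_weight: float = 8000.0) -> float:
    """Downward force (N) generated by a suspended weight, with a simple
    air-buoyancy correction (rho_air and rho_weight are assumed values)."""
    buoyancy_factor = 1.0 - rho_air / rho_weight
    return mass_kg * g_local * buoyancy_factor

# The steel weight quoted in the text: ~101.9332 kg at NPL's g gives ~1 kN.
print(f"{deadweight_force(101.9332):.2f} N")
```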
2.6 Angle The SI regards angle as a dimensionless quantity (also called a quantity of dimension one). It is one of a few cases where a name is given to the unit one, in order to facilitate the identification of the quantity involved. The names given for the quantity angle are radian (plane angle) and steradian (solid angle). The radian is defined with respect to a circle and is the angle subtended by an arc of a circle equal to the radius (approximately 57.2958°). For practical angle measurement, however, the sexagesimal (degrees, minutes, seconds) system of units, which dates back to the Babylonian civilization, is used almost exclusively [19]. The centesimal system introduced by Lagrange towards the end of the eighteenth century is rarely used. The other units referred to in this chapter require either a material artefact (for example, mass) or a natural standard (for example, length). No ultimate standard is required for angle measurement since any angle can be established by appropriate sub-division of the circle. A circle can only have 360°. In practice basic standards for angle measurement either depend on the accurate division of a circle or the generation of an angle from two known lengths. Instruments that rely on the principle of sub-division include precision index tables, rotary tables, polygons and angular gratings [19]. Instruments that rely on the ratio of two lengths include angular interferometers (see section 5.2.9), sine bars, sine tables and small angle generators. Small changes in angle are detected by an autocollimator [20] used in conjunction with a flat mirror mounted on the item under test, for example a machine tool. Modern autocollimators give a direct digital readout of angular position. The combination of a precision polygon and two autocollimators
enables the transfer of high accuracy in small angle measurement to the same accuracy in large angles, using the closing principle that all angles add up to 360°. Sometimes angle measurement needs to be gravity-referenced and in this case use is made of levels. Levels can be based either on a liquid-filled vial or on a pendulum and ancillary sensing system.
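The sub-division and closing-principle arithmetic described above is straightforward and is illustrated by the short sketch below; the angle values are invented for illustration and do not represent any particular instrument.

```python
import math

# Convert a sexagesimal angle (degrees, minutes, seconds) to radians, and
# check the closing principle: the angles of a full circle sum to 360 degrees.

def dms_to_radians(degrees: int, minutes: int, seconds: float) -> float:
    decimal_degrees = degrees + minutes / 60 + seconds / 3600
    return math.radians(decimal_degrees)

print(dms_to_radians(57, 17, 44.8))     # close to 1 rad (57.2958 degrees)

# Closing principle: illustrative angles (degrees) from a 12-sided polygon
polygon_angles = [30.0001, 29.9999, 30.0003, 29.9998] + [30.0] * 8
closure_error = sum(polygon_angles) - 360.0
print(f"closure error = {closure_error * 3600:.2f} arcseconds")
```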
2.7 Traceability The concept of traceability is one of the most fundamental in metrology and is the basis upon which all measurements can be claimed to be accurate. Traceability is defined as follows: Traceability is the property of the result of a measurement whereby it can be related to stated references, usually national or international standards, through a documented unbroken chain of comparisons all having stated uncertainties. [21] To take an example, consider the measurement of surface profile using a stylus instrument (see section 6.6.1). A basic stylus instrument measures the topography of a surface by measuring the displacement of a stylus as it traverses the surface. So, it is important to ensure that the displacement measurement is ‘correct’. To ensure this, the displacement-measuring system must be checked or calibrated against a more accurate displacementmeasuring system. This calibration is carried out by measuring a calibrated step height artefact (known as a transfer artefact). Let us suppose that the more accurate instrument measures the displacement of the step using an optical interferometer with a laser source. This laser source is calibrated against the iodine-stabilized laser that realises the definition of the metre, and an unbroken chain of comparisons has been ensured. As we move down the chain from the definition of the metre to the stylus instrument that we are calibrating, the accuracy of the measurements usually decreases. It is important to note the last part of the definition of traceability that states all having stated uncertainties. This is an essential part of traceability as it is impossible to usefully compare, and hence calibrate, instruments without a statement of uncertainty. This should become obvious once the concept of uncertainty has been explained in section 2.8.3. Uncertainty and traceability are inseparable. Note that in practice the calibration of a stylus instrument is more complex than a simple displacement measurement (see section 6.10).
Traceability ensures that measurements are consistent and accurate. Any quality system in manufacturing will require that all measurements are traceable and that there is documented evidence of this traceability (for example ISO 17025 [22]). If component parts of a product are to be made by different companies (or different parts of an organisation) it is essential that measurements are traceable so that the components can be assembled and integrated into a product. In the case of dimensional nanometrology, there are many examples where it is not possible to ensure traceability because there is a break in the chain, often at the top of the chain. There may not be national or international specification standards available and the necessary measurement infrastructure may not have been developed. This is the case for many complex three-dimensional MNT measurements. Also, sometimes an instrument may simply be too complex to ensure traceability of all measurements. An example of this is the CMM (see chapter 9). Whilst the scales on a CMM (macro- or micro-scale) can be calibrated traceably, the overall instrument performance, or volumetric accuracy, is difficult and time-consuming to determine and will be task-specific. In these cases it is important to verify the performance of the instrument against its specification by measuring well-chosen artefacts that have been traceably calibrated in an independent way. Where there are no guidelines or where there is a new measurement instrument or technique to be used, the metrologist must apply good practice and should consult other experts in the field. Traceability does not only apply to displacement (or length) measurements – all measurements should be traceable to their respective SI unit. In some cases, for example in a research environment or where a machining process is stable and does not rely on any other process, it may only be necessary to have a reproducible measurement. In this case the results should not be used where others may rely upon them and should certainly not be published.
2.8 Accuracy, precision, resolution, error and uncertainty There are many terms used in metrology that one must be aware of and it is important to be consistent in their use. The ISO VIM [21] lays out formal definitions of the main terms used in metrology. Central to many metrology terms and definitions is the concept of the ‘true value’. The true value of a measurement is the hypothetical result that would be returned by an ideal measuring instrument if there were no errors in the measurement. In practice the perfect scenario can never be achieved; there will always be some
degree of error in the measurement and it may not always be possible to have a stable, single-valued measurand. Even if one had an ideal instrument and measurement set-up, all measurements are ultimately subject to Heisenberg's Uncertainty Principle, a consequence of quantum mechanics that puts a limit on measurement accuracy [23]. Often the true value is estimated using information about the measurement scenario. In many cases, where repeated measurements are taken, the estimate of the true value is the mean of the measurements.
2.8.1 Accuracy and precision Accuracy and precision are the two terms in metrology that are most frequently mixed up or used indistinguishably. The accuracy of a measuring instrument indicates how close the result is to the true value. The precision of a measuring instrument refers to the dispersion of the results when making repeated measurements (sometimes referred to as repeatability). It is, therefore, possible to have a measurement that is highly precise (repeatable) but is not close to the true value, i.e. inaccurate. This highlights the fundamental difference between the two terms and one must be careful when using them. Accuracy is a term relating the mean of a set of repeat measurements to the true value, whilst precision is representative of the spread of the measurements. The VIM definition of accuracy is: closeness of agreement between a measured quantity value and a true quantity value of a measurand and the definition of precision is: closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under specified conditions.
2.8.2 Resolution and error The resolution of a measuring instrument is a quantitative expression of the ability of an indicating device to distinguish meaningfully between closely adjacent values of the quantity indicated. For example, for a simple dial indicator read by eye, the resolution is commonly given as half the distance between smallest, distinguishable indicating marks. It is not always either easy or obvious how to determine the resolution of an instrument. Consider for example an optical instrument that is used to measure surface texture and focuses light onto the surface. The lateral resolution is sometimes quoted in
terms of the Rayleigh or Abbe criteria [24] although, depending on the numerical aperture of the focusing optics, the lateral resolution may be determined by the detector pixel spacing (see section 6.7.1). The axial resolution will be a complex function of the detector electronics, the detection algorithm and the noise floor. This example highlights that resolution is not a simple parameter to determine for a given instrument. It is also important to note that one should always consider resolution hand in hand with other instrument performance indicators such as accuracy and precision. Again using the example of the optical surface measuring instrument, some surfaces can cause the instrument to produce errors that can be several hundred nanometres in magnitude despite the fact that the instrument has an axial resolution of perhaps less than a nanometre (see section 6.7.1). The error in a measuring instrument is the difference between the indicated value and the true value (or the calibrated value of a transfer artefact). Errors usually fall into two categories depending on their origin. Random errors give rise to random fluctuations in the measured value and are commonly caused by environmental conditions, for example seismic noise or electrical interference. Systematic errors give rise to a constant difference from the true value, for example due to alignment error or because an instrument has not been calibrated correctly. Most measurements contain elements of both types of error and there are different methods for either correcting errors or accounting for them in uncertainty analyses (see [25] for a more thorough discussion on errors). Also, errors can appear as random or systematic depending on how they are treated. The VIM definition of resolution is: smallest change in a quantity being measured that causes a perceptible change in the corresponding indication and the definition of error is: measured quantity value minus reference quantity value.
2.8.3 Uncertainty in measurement As discussed in the introductory text for section 2.8 all measurements are subject to some degree of imperfection. It follows that a measured value can be expected to differ from the true quantity value, and measured values obtained from repeated measurement to be dispersed about the true quantity value or some value offset from the true quantity value. A statement
of uncertainty describes quantitatively the degree of imperfection of a measurement. A basic introduction to uncertainty of measurement is given elsewhere [26] although some of the more important terms and definitions are described briefly here. The Guide to the Expression of Uncertainty in Measurement (GUM) [27] is the definitive text on most aspects of uncertainty evaluation and should be read before the reader attempts an uncertainty evaluation for a particular measurement problem. A working group of the Joint Committee for Guides in Metrology (JCGM), the body responsible for maintaining the GUM, is in the process of preparing a number of documents to support and extend the application of the GUM [28]. The first of these documents, Supplement 1 to the GUM on the propagation of distributions using a Monte Carlo method [29], has been published. The VIM definition of measurement uncertainty is: non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used When measurement uncertainty is evaluated and reported as a coverage interval corresponding to a specified coverage probability p, it indicates an interval that is expected to contain 100p % of the values that could be attributed to the measured quantity.
2.8.3.1 The propagation of probability distributions The basis for the evaluation of measurement uncertainty is the propagation of probability distributions. In order to apply the propagation of probability distributions, a measurement model of the generic form

Y = f(X_1, \ldots, X_N)    (2.2)

relating input quantities X1, ..., XN, about which information is available, and the measurand or output quantity Y, about which information is required, is formulated. The input quantities include all quantities that affect or influence the measurement, including effects associated with the measuring instrument (such as bias, wear, drift, etc.), those associated with the artefact being measured (such as its stability), those associated with the measurement process, and 'imported' effects (such as the calibration of the instrument, material properties, etc.). Information concerning the input quantities is encoded as probability distributions for those quantities, such as rectangular (uniform), Gaussian (normal), etc. The information can take a variety of forms, including a series of indications, data on a calibration certificate, and the expert knowledge of the metrologist. An implementation of the propagation of probability distributions
provides a probability distribution for Y, from which can be obtained an estimate of Y, the standard uncertainty associated with the estimate, and a coverage interval for Y corresponding to a stipulated (coverage) probability. Particular implementations of the approach are the GUM uncertainty framework (section 2.8.3.2) and a Monte Carlo method (section 2.8.3.3). In a Type A evaluation of uncertainty, the information about an input quantity Xi takes the form of a series of indications xir, r = 1, ..., n, obtained independently. An estimate xi of Xi is given by the average of the indications, i.e.,

x_i = \bar{x} = \frac{1}{n} \sum_{r=1}^{n} x_{ir},    (2.3)

with associated standard uncertainty u(xi) given by the standard deviation associated with the average, i.e.,

u(x_i) = s(\bar{x}) = \sqrt{\frac{1}{n(n-1)} \sum_{r=1}^{n} (x_{ir} - \bar{x})^2},    (2.4)

and degrees of freedom νi = n − 1. In a Type B evaluation of uncertainty, the information about Xi takes some other form, and is used as the basis of establishing a probability distribution for Xi in terms of which an estimate xi and the associated standard uncertainty u(xi) are determined. An example is the case that the information is that Xi takes values between the limits a and b (a ≤ b). Then, Xi could be characterized by a rectangular distribution on the interval [a, b] from which it follows that xi and u(xi) are the expectation and standard deviation of Xi evaluated in terms of this distribution, i.e.,

x_i = \frac{b+a}{2}, \qquad u(x_i) = \frac{b-a}{2\sqrt{3}}.    (2.5)
Note that there are other types of distribution, for example triangular and U-shaped.
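As an illustration of equations (2.3) to (2.5), the short sketch below evaluates a Type A standard uncertainty from a set of repeated indications and a Type B standard uncertainty for an assumed rectangular distribution between limits a and b. The indication values and limits are invented purely for illustration.

```python
import math

def type_a(indications):
    """Type A evaluation: estimate, standard uncertainty of the average
    (equations 2.3 and 2.4) and degrees of freedom n - 1."""
    n = len(indications)
    mean = sum(indications) / n
    variance_of_mean = sum((x - mean) ** 2 for x in indications) / (n * (n - 1))
    return mean, math.sqrt(variance_of_mean), n - 1

def type_b_rectangular(a, b):
    """Type B evaluation for a rectangular distribution on [a, b]
    (equation 2.5): expectation and standard deviation."""
    return (a + b) / 2, (b - a) / (2 * math.sqrt(3))

# Illustrative indications (e.g. repeated length measurements in mm)
x = [10.0012, 10.0009, 10.0011, 10.0013, 10.0010]
print(type_a(x))

# Illustrative limits, e.g. an effect known only to lie within +/- 0.5 um
print(type_b_rectangular(-0.0005, 0.0005))
```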
2.8.3.2 The GUM uncertainty framework The primary guide in metrology on uncertainty evaluation is the GUM [27]. It presents a framework for uncertainty evaluation based on the use of the law of propagation of uncertainty and the central limit theorem. The law of propagation of uncertainty provides a means for propagating uncertainties through the measurement model, i.e., for evaluating the standard uncertainty u(y) associated with an estimate y of Y given the standard uncertainties u(xi)
associated with the estimates xi of Xi (and, when they are non-zero, the covariances u(xi, xj) associated with pairs of estimates xi and xj). The central limit theorem is applied to characterize Y by a Gaussian distribution (or, in the case of finite effective degrees of freedom, by a scaled and shifted t-distribution), which is used as the basis of providing a coverage interval for Y. In the GUM uncertainty framework, the information about an input quantity Xi takes the form of an estimate xi, a standard uncertainty u(xi) associated with the estimate, and the degrees of freedom νi associated with the standard uncertainty. The estimate y of the output quantity is determined by evaluating the model for the estimates of the input quantities, i.e.

y = f(x_1, \ldots, x_N).    (2.6)

The standard uncertainty u(y) associated with y is determined by propagating the standard uncertainties u(xi) associated with the xi through a linear approximation to the model. Writing the first-order Taylor series approximation to the model as

Y - y = \sum_{i=1}^{N} c_i (X_i - x_i),    (2.7)

where ci is the first-order derivative of f with respect to Xi evaluated at the estimates of the input quantities, and assuming the Xi are uncorrelated, u(y) is determined from

u^2(y) = \sum_{i=1}^{N} c_i^2 u^2(x_i).    (2.8)

In equation (2.8), which constitutes the law of propagation of uncertainty for uncorrelated quantities, the ci are called (first-order) sensitivity coefficients. A generalization of the formula applies when the model input quantities are correlated. An effective degrees of freedom νeff associated with the standard uncertainty u(y) is determined using the Welch-Satterthwaite formula, i.e.

\frac{u^4(y)}{\nu_{\mathrm{eff}}} = \sum_{i=1}^{N} \frac{c_i^4 u^4(x_i)}{\nu_i}.    (2.9)

The basis for evaluating a coverage interval for Y is to use the central limit theorem to characterize the random variable

\frac{Y - y}{u(y)}    (2.10)

by the standard Gaussian distribution in the case that νeff is infinite or a t-distribution otherwise. A coverage interval for Y corresponding to the coverage probability p takes the form

y \pm U.    (2.11)

U is called the expanded uncertainty given by

U = k\,u(y),    (2.12)

where k is called a coverage factor, and is such that

\mathrm{Prob}(|Z| \leq k) = p,    (2.13)
where Z is characterized by the standard Gaussian distribution in the case that νeff is infinite or a t-distribution otherwise. There are some practical issues that arise in the application of the GUM uncertainty framework. Firstly, although the GUM uncertainty framework can be expected to work well in many circumstances, it is generally difficult to quantify the effects of the approximations involved, which include linearization of the model in the application of the law of propagation of uncertainty, the evaluation of effective degrees of freedom using the Welch-Satterthwaite formula, and the assumption that the output quantity is characterized by a Gaussian or (scaled and shifted) t-distribution. Secondly, the procedure relies on the calculation of the model sensitivity coefficients ci as the basis of the linearization of the model. Calculation of the ci can be difficult when (a) the model is (algebraically) complicated, or (b) the model is specified as a numerical procedure for calculating a value of Y, for example, as the solution to a differential equation.
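A minimal sketch of the law of propagation of uncertainty (equations 2.6 to 2.8), with an expanded uncertainty for k = 2 (equation 2.12), is given below. The model (a length corrected for thermal expansion), the estimates and the standard uncertainties are all invented for illustration, correlations are assumed to be zero, and the sensitivity coefficients are approximated by small finite differences rather than derived algebraically – a convenience for the sketch, not a requirement of the GUM.

```python
import math

# Law of propagation of uncertainty for uncorrelated inputs (equation 2.8),
# with sensitivity coefficients approximated numerically, and an expanded
# uncertainty U = k*u(y) for k = 2 (equation 2.12). Illustrative model only:
# a length corrected for thermal expansion, y = L * (1 + alpha * dT).

def model(L, alpha, dT):
    return L * (1 + alpha * dT)

estimates = {"L": 0.100, "alpha": 11.5e-6, "dT": 0.5}      # m, 1/K, K
std_unc   = {"L": 50e-9,  "alpha": 1e-6,   "dT": 0.1}      # standard uncertainties

y = model(**estimates)

u_squared = 0.0
for name, u_xi in std_unc.items():
    # first-order sensitivity coefficient by a small finite difference
    shifted = dict(estimates)
    delta = u_xi * 1e-3
    shifted[name] += delta
    c_i = (model(**shifted) - y) / delta
    u_squared += (c_i * u_xi) ** 2

u_y = math.sqrt(u_squared)
print(f"y = {y:.9f} m, u(y) = {u_y:.2e} m, U (k=2) = {2 * u_y:.2e} m")
```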
2.8.3.3 A Monte Carlo method A Monte Carlo method for uncertainty evaluation is based on the following consideration. The estimate y of Y is conventionally obtained, as in the previous section, by evaluating the model for the estimates xi of Xi. However, since each Xi is described by a probability distribution, a value as legitimate as xi can be obtained by drawing a value at random from the distribution. The method operates, therefore, in the following manner. A random draw is made from the probability distribution for each Xi and the corresponding value of Y is formed by evaluating the model for these values. Many Monte Carlo trials are performed, i.e., the process is repeated many times, to obtain M, say, values yr, r = 1, ..., M, of Y. Finally, the values yr are used to provide an approximation to the probability distribution for Y.
An estimate y of Y is determined as the average of the values yr of Y, i.e.,

y = \frac{1}{M} \sum_{r=1}^{M} y_r.    (2.14)

The standard uncertainty u(y) associated with y is determined as the standard deviation of the values yr of Y, i.e.,

u^2(y) = \frac{1}{M-1} \sum_{r=1}^{M} (y_r - y)^2.    (2.15)
A coverage interval corresponding to coverage probability p is an interval [ylow, yhigh] that contains 100p % of the values yr of Y. Such an interval is not uniquely defined. However, two particular intervals are of interest. The first is the probabilistically symmetric coverage interval for which 100(1 − p)/2 % of the values are less than ylow and the same number are greater than yhigh. The second is the shortest coverage interval, which is the shortest of all intervals containing 100p % of the values. The method has a number of features, including (a) that it is applicable regardless of the nature of the model, i.e., whether it is linear, mildly non-linear or highly non-linear, (b) that there is no requirement to evaluate effective degrees of freedom, and (c) that no assumption is made about the distribution for Y, for example, that it is Gaussian. In consequence, the method provides results that are free of the approximations involved in applying the GUM uncertainty framework, and it can be expected, therefore, to provide an uncertainty evaluation that is reliable for a wide range of measurement problems. Additionally, the method does not require the calculation of model sensitivity coefficients since the only interaction with the model is to evaluate the model for values of the input quantities. However, there are also some practical issues that arise in the application of a Monte Carlo method. The degree of numerical approximation obtained for the distribution for Y is controlled by the number M of trials, and a large value of M (perhaps 10⁵ or 10⁶ or even greater) may sometimes be required. One issue, therefore, is that the calculation for large values of M may not be practicable, particularly when a (single) model evaluation takes an appreciable amount of time. Another issue is that the ability to make random draws from the distributions for the Xi is central, and the use of high-quality algorithms for random-number generation gives confidence that reliable results are provided by an implementation of the method. In this regard, the ability to draw pseudo-random numbers from a rectangular distribution is fundamental in its own right, and also as the basis for making random draws from other distributions using appropriate algorithms or formulae.
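A minimal sketch of the Monte Carlo procedure is shown below for the same illustrative thermal-expansion model used in the previous sketch; the distributions assigned to the inputs, and the number of trials M, are assumptions chosen for illustration rather than a validated uncertainty evaluation.

```python
import random
import statistics

# Monte Carlo propagation of distributions: draw each input at random from
# its assigned distribution, evaluate the model, and summarise the M values
# of Y (equations 2.14 and 2.15) plus a probabilistically symmetric 95 %
# coverage interval. Model and distributions are illustrative only.

def model(L, alpha, dT):
    return L * (1 + alpha * dT)

M = 200_000
y_values = []
for _ in range(M):
    L     = random.gauss(0.100, 50e-9)          # Gaussian input
    alpha = random.uniform(11.5e-6 - 1.7e-6,    # rectangular input
                           11.5e-6 + 1.7e-6)
    dT    = random.gauss(0.5, 0.1)              # Gaussian input
    y_values.append(model(L, alpha, dT))

y_values.sort()
y_est = statistics.fmean(y_values)              # equation (2.14)
u_y   = statistics.stdev(y_values)              # equation (2.15)
low   = y_values[int(0.025 * M)]
high  = y_values[int(0.975 * M) - 1]
print(f"y = {y_est:.9f} m, u(y) = {u_y:.2e} m")
print(f"95 % coverage interval: [{low:.9f}, {high:.9f}] m")
```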
2.9 The laser The invention of the laser in 1960 has had a significant impact on metrology. The realization of the definition of the metre (see section 2.3) involves the use of a frequency-stabilized laser and many commercial interferometer systems use a laser source. The most common form of laser in the metrology area is the helium-neon laser, although solid-state lasers are becoming more widespread.
2.9.1 Theory of the helium-neon laser The tube of a continuous-wave helium-neon (He-Ne) gas laser contains a mixture of approximately eight parts of helium to one part of neon at a total pressure of a few millibars. The laser consists of an optical cavity, similar to that of a Fabry-Pérot etalon (see section 4.4.4), formed by a plasma tube with optical-quality mirrors (one of which is semi-transparent) at both ends. The gas in the tube is excited by a high-voltage discharge of approximately 1.5 kV to 2.5 kV, at a current of approximately 5 mA to 6 mA. The discharge creates a plasma in the tube that emits radiation at various wavelengths corresponding to the multitude of allowed transitions in the helium and neon atoms. The coherent radiation emitted by the He-Ne laser at approximately 632.8 nm wavelength corresponds to the 3s2 – 2p4 atomic transition in neon [30]. The excited 3s2 level is pumped by energetic 2s0 helium atoms colliding with the neon atoms; the 2s0 helium energy level is similar in energy to the 3s2 level of neon and the lighter helium atoms are easily excited into the 2s0 level by the plasma discharge (see Figure 2.5). The excess energy of the collision is approximately thermal, i.e., it is easily removed by the atoms in the plasma as kinetic energy. The collisional pumping of the 3s2 level in neon produces the selective excitation or population inversion that is required for lasing action. The 2p neon state decays in 10⁻⁸ seconds to the 1s state, maintaining the population inversion. This state relaxes to the ground state by collision with the walls of the plasma tube. The laser gain is relatively small and so losses at the end mirrors must be minimised by using a high-reflectance coating, typically 99.9%. The output power is limited by the fact that the upper lasing state reaches saturation at quite low discharge powers, whereas the lower state increases its population more slowly. After a certain discharge power is reached, further increase in the power leads to a decrease in the population inversion, and hence lower light power output.
FIGURE 2.5 Energy levels in the He-Ne gas laser for 632.8 nm radiation.
The 632.8 nm operating wavelength is selected by the spacing of the end mirrors, i.e. by the total length of the optical cavity, lc. The length of the cavity must be such that the waves reflected by the two end mirrors are in phase for stimulated emission to occur. The wavelengths of successive axial modes are then given by

2l_c = m\lambda.    (2.16)

These modes are separated in wavenumber by

\Delta\sigma = \frac{1}{2l_c}    (2.17)

or in terms of frequency

\Delta\nu = \frac{c}{2l_c}    (2.18)
where c is the speed of light in a vacuum. This would lead to a series of narrow lines of similar intensity in the spectrum, if it were not for the effects of Doppler broadening and the Gaussian distribution of atoms available for stimulated emission. When a particular mode is oscillating, there is a selective depopulation of atoms with specific velocities (laser cooling) that leads to a dip in the gain profile. For modes oscillating away from the centre of the gain curve the atomic populations for the two opposite directions of propagation are different due to the equal but opposite velocities. For modes oscillating at the
centre of the gain curve, the two populations become a single population of effectively stationary atoms. Thus a dip in the gain profile occurs at the centre of the gain curve – the so-called Lamb dip. The position of the Lamb dip is dependent on other parameters of the laser such as the position of the gain curve and can be unstable. For early lasers with typical cavity lengths of 1 m the mode spacing was 0.5 m⁻¹, with a gain profile width of approximately 5.5 m⁻¹. Thus several axial modes were present in the gain profile with gains sufficient for laser action, and so two or more modes would operate simultaneously, making the laser unsuitable for coherent interferometry. By using a shorter tube and then carefully lowering the power of the discharge and hence lowering the gain curve, it is possible to achieve single-mode operation.
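Equations (2.17) and (2.18) allow the axial mode spacing to be estimated directly from the cavity length, which is how the figures quoted above for a 1 m cavity arise. The sketch below reproduces that calculation; the shorter cavity length used in the second call is an assumed example, not a specific instrument.

```python
# Axial mode spacing of a laser cavity (equations 2.17 and 2.18):
# delta_sigma = 1 / (2 * L) in wavenumber, delta_nu = c / (2 * L) in frequency.

C = 299_792_458.0   # speed of light in vacuum, m/s

def mode_spacing(cavity_length_m: float):
    delta_sigma = 1.0 / (2.0 * cavity_length_m)      # m^-1
    delta_nu = C / (2.0 * cavity_length_m)           # Hz
    return delta_sigma, delta_nu

# Early 1 m cavity: 0.5 m^-1 spacing, roughly 150 MHz between modes
print(mode_spacing(1.0))

# Shorter tube (assumed 0.2 m): roughly 750 MHz spacing, so fewer axial modes
# fit within the gain profile width of approximately 5.5 m^-1 (about 1.6 GHz)
print(mode_spacing(0.2))
```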
2.9.2 Single-mode laser wavelength stabilization schemes To allow a laser to be used in interferometry with coherence lengths above a few millimetres (see section 4.3.4) it must operate in a single mode and there have been many proposed schemes for laser stabilization. The Lamb dip, mentioned above, was used in an early stabilization scheme. Here the intensity of the output beam was monitored as the length of the cavity was modulated, for example by piezoelectric actuators (PZTs). Alternatively, mirrors external to the laser cavity are used that could be modulated – the output intensity being monitored and the laser locked to the centre of the Lamb dip. The reproducibility of lasers locked to the Lamb dip is limited by shift of the Lamb dip centre as the pressure of the gas inside the laser tube varies and also by a discharge current dependent shift. The large width of the Lamb dip itself (about 5 × 10⁻⁷ of the laser frequency) also limits the frequency stability obtainable from this technique. Use has also been made of tuneable Fabry-Pérot etalons in a similar system. Other groups have locked the output of one laser to the frequency of a second stabilized laser. Others have used neon discharge absorption cells where the laser was locked to the absorption spectrum of neon in an external tube, the theory being that the unexcited neon would have a narrower linewidth than the neon in the laser discharge.
2.9.3 Laser frequency-stabilization using saturated absorption The technique with the greatest stability is used in the Primary Reference lasers which realize the NMI’s Primary Standard of Length and involves controlling the length of the laser cavity to alter the wavelength, and locking the wavelength to an absorption line in saturated iodine vapour [30]. This is
a very stable technique since the absorption takes place from a thermally populated energy level that is free from the perturbing effects of the electric discharge in the laser tube. If the output beam from a laser is passed straight through an absorption cell, then absorption takes place over a Doppler broadened transition. However, if the cell is placed in a standing-wave optical field the high-intensity laser field saturates the absorption and a narrow dip appears at the centre of the absorption line corresponding to molecules that are stationary or moving perpendicular to the direction of beam propagation. This dip produces an increase in the laser power in the region of the absorption line. The absorption line is reproducible and insensitive to perturbations. The linewidth is dependent on the absorber pressure, laser power and energy level lifetime. Saturated absorption linewidths are typically less than 1 × 10⁻⁸ of the laser wavelength. In a practical application an evacuated quartz cell containing a small iodine crystal is placed in the laser cavity and temperature controlled to 15 °C. As the iodine partly solidifies at this temperature, this guarantees a constant iodine gas pressure. The laser mirrors are mounted on PZTs and the end plates are separated by low thermal expansion bars to ensure a thermally stable cavity. A small frequency modulation is then applied to one of the PZTs. This leads to an amplitude modulation in the output power that is detected using a phase-sensitive detector and fed back to the other PZT as a correction signal. The frequency control system employs a photodiode, low noise amplifier, coherent filter and phase-sensitive detector followed by an integrating filter. Figure 2.6 is a schema of the iodine-stabilized He-Ne instrumentation. Detection of the absorption signal at the laser modulation frequency results in a first derivative scan that shows the hyperfine components superimposed on the sloping background of the neon gain curve. The laser may be servo-locked to any of these lines, the frequency of which has been fixed (together with their uncertainties) internationally at the time of the re-definition of the metre in 1983 in terms of the speed of light, and which has been fine-tuned a few times since then. Iodine-stabilized He-Ne lasers can achieve frequency stability of a few parts in 10¹³ over a period of a few minutes with long-term reproducibility of a few parts in 10¹¹. The reproducibility of iodine-stabilized He-Ne lasers, when being operated under certain conditions, enables the independent manufacture of a primary length standard without a need to refer or compare to some other standard. Contrary to this concept, NMIs compare their reference standards with each other to ensure that no unforeseen errors are being introduced. Until recently these comparisons were
FIGURE 2.6 Schema of an iodine-stabilized He-Ne laser.
commonly made at the BIPM, similar to when the metre bars were in use [31].
2.9.3.1 Two-mode stabilization Instead of emitting one frequency, a laser can be designed in such a way that it radiates in two limited frequency regions. Figure 2.7 shows this schematically. If two (longitudinal) modes exist, then both should be orthogonally linearly polarized. As the laser cavity length changes, the modes move through the gain curve, changing in both frequency and amplitude. The two modes are separated into two beams by polarization components, and their amplitudes
FIGURE 2.7 Frequency and intensity profiles in a two-mode He-Ne laser.
are compared electronically. The cavity length is then adjusted, usually by heating a coil around the laser tube that is kept at approximately 40 °C, to maintain the proper relationship between the modes. By using a polarizer, only one beam is allowed to exit the system. Such lasers are commonly used in homodyne interferometry (see section 5.2.2). In the comparison method of stabilization, the ratio of the intensities of the two orthogonal beams is measured and is kept constant. This ratio is independent of output power and accurately determines the output frequency of the beam. In the long term, the frequency may shift due to variations in the He-Ne gas pressure and ratio. By adjusting the intensity ratio, the output frequency can be swept by approximately 300 MHz, while maintaining a 1 MHz linewidth. In the slope method of stabilization, only the intensity of the output beam is monitored, and a feedback loop adjusts the cavity length to maintain constant power. Because of the steep slope of the laser gain curve, variations in frequency cause an immediate and significant change in output power. The comparison method is somewhat more stable than the slope method, since it measures the amplitude of the two modes and centres them accurately around the peak of the gain curve, which is essentially an invariant, at least in the short term, and the frequency is unaffected by long-term power drift caused by aging or other factors. On the other hand, the slope method of frequency control significantly simplifies the control electronics. A further method is to stabilize on the frequency difference between the modes, which has a minimum when the two intensities are equal.
2.9.4 Zeeman-stabilized 633 nm lasers An alternative technique to saturated absorption is used in many commercial laser interferometers. The method of stabilization is based on the Zeeman effect [32,33]. A longitudinal magnetic field is applied to a single-mode He-Ne laser tube, splitting the normally linearly polarized mode into two counter-rotating circular polarizations. At low magnetic field the two modes remain locked together, producing the linear polarization; a field strength of 0.2 T is sufficient to split them. These two modes differ in frequency by typically 3 MHz, around a mean frequency corresponding to the original linear mode [34]. The wavelength difference between the two modes is due to each of the two modes experiencing a different refractive index and, therefore, a different optical path length, in the He-Ne mixture. This arises due to magnetic splitting of an atomic state of neon, shown in Figure 2.8.
FIGURE 2.8 Magnetic splitting of neon – g is the Landé g factor, μ the Bohr magneton.
The Δm = +1 mode couples with the left polarized mode and the Δm = −1 mode couples with the right polarized mode. The relative frequencies of the polarization modes are given by

\omega_{\pm} = \frac{cN}{2Ln_{\pm}}    (2.19)

where L is the cavity length, n is the refractive index for the mode and N the axial quantum number [35]. The important feature of the Zeeman split gain curve is that the position of ω0 does not vary with magnetic field strength – it remains locked at the original (un-split) line centre, and thus a very stable lock point. If one combines the two oppositely polarized components, one observes a heterodyne beat frequency between them given by

\Delta\omega = \omega_{+} - \omega_{-} = \frac{cN}{2L}\left(\frac{1}{n_{+}} - \frac{1}{n_{-}}\right)    (2.20)

which is proportional to ω0[χ+(ν) − χ−(ν)], where χ+(ν) and χ−(ν) are dispersion functions for the left and right polarized modes respectively. For a more complete derivation see [36]. As the laser is tuned by altering the cavity length, L, the beat frequency will pass through a peak that corresponds to the laser frequency being tuned to ω0. This tuning curve can be used as an error signal for controlling the laser frequency. The particular method used to modulate the laser cavity is usually thermal expansion. A thin foil heater is attached to the laser tube and connected to a square-root power amplifier. Two magnets are fixed onto the tube to provide the axial magnetic field. A polarizing beam-splitter is used, together with a photodetector and amplifier to detect the beat frequency. This error signal is fed to various stages of counters and amplifiers and then to the heater. The laser tube requires a period of approximately ten minutes to reach the correct temperature corresponding to the required tube length for operation at frequency ω0. A phase-locked loop circuit then fine-tunes the temperature and consequently the length of the cavity to stabilize the laser at the correct frequency. This last process takes only a few seconds to achieve lock. The frequency stability of the laser is 5 × 10⁻¹⁰ for 1 s averages and is white-noise limited for averaging times between 100 ms and 10 minutes. The day-to-day
reproducibility of the laser frequency is typically 5 × 10⁻¹⁰. There is also a linear drift of frequency with the total amount of time for which the laser has been in operation. This is due to clean-up of the helium-neon mixture whilst undergoing discharge. The rate of drift is unique to each laser, but is stable with respect to time, and can be ascertained after a few calibrations of the laser frequency. As an example, Tomlinson and Fork [37] showed drift rates of 0.3 MHz to 5.7 MHz (±0.5 MHz) per calendar year, although these were for frequency against date, rather than against operational time. Reference [36] reported a drift rate of −1 × 10⁻¹¹ per hour of operation. An attractive feature of the Zeeman-stabilized laser is that the difference in amplitude can be used for stabilization, and the difference in frequency can be taken as the reference signal when it is used in heterodyne displacement interferometry (see section 5.2.3).
2.9.5 Frequency calibration of a (stabilized) 633 nm laser The calibration of a laser’s frequency is achieved by combining the light from the stabilized laser with a primary (reference) laser via a beam-splitter. The beat signal between the two frequencies is measured with a photodetector (see Figure 2.9). If the beams are carefully aligned, the beams interfere and the interference intensity varies in time with the frequency difference (see section 4.3.2, equation (4.5)). If the laser frequencies are close enough, this beat frequency can be detected electronically, and monitored over a number of hours. Typical values of the beat signal range between 50 MHz and 500 MHz, with the iodine standard stabilized on one of its dips. As the reference laser, if it is an iodine-stabilized laser, is continuously swept over some 6 MHz, it is common to integrate the frequency difference over 10 s. As a beat frequency is an absolute value, the reference laser needs to be stabilized on different frequencies in order to determine whether the frequency of the calibrated laser is higher or lower than the reference frequency. A Zeeman-stabilized laser emits two polarizations that are
FIGURE 2.9 Calibration scheme for Zeeman-stabilized laser.
separated, typically by 3 MHz. During laser calibrations, beats between each of these frequencies and the iodine frequency are measured. The mean of these can be considered to be the calibrated wavelength of the Zeeman-stabilized laser under test if the difference is within the uncertainty limits. Also, it is common to measure just one frequency and to take the other into account in the uncertainty; 3 MHz corresponds to a relative uncertainty of about 6 × 10⁻⁹ in frequency and so in a measured length. If the two modes of a two-mode laser are both used in the same manner, as in a common Zeeman-based laser interferometer system, then the two polarizations may differ by up to 1 GHz, which corresponds to 2 × 10⁻⁶. However, it is more common that one of the beams is blocked by a polarizer and the system is used as a homodyne interferometer (see section 5.2.2). In this case a single frequency should be measured.
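The arithmetic of such a calibration is straightforward; the short Python sketch below (the reference frequencies and beat readings are invented for illustration, not taken from the text) recovers the test-laser frequency from beat measurements taken with the reference laser locked at two different known frequencies, which also resolves the sign ambiguity described above.

    def test_laser_frequency(f_ref_a, beat_a, f_ref_b, beat_b, tol=1e3):
        """Return the test-laser frequency (Hz) consistent with both beat readings."""
        # A beat note only gives |f_test - f_ref|, so try both signs for each lock
        # point and keep the pair of estimates that agree with each other.
        for fa in (f_ref_a + beat_a, f_ref_a - beat_a):
            for fb in (f_ref_b + beat_b, f_ref_b - beat_b):
                if abs(fa - fb) < tol:
                    return 0.5 * (fa + fb)
        raise ValueError("beat readings are not consistent with a single laser frequency")

    # Invented readings: reference locked at two frequencies 10 MHz apart near 473.612 THz.
    f_test = test_laser_frequency(473.612214e12, 152.3e6, 473.612224e12, 142.3e6)
    print(f"test laser frequency: {f_test / 1e12:.6f} THz")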
2.9.6 Modern and future laser frequency standards

As mentioned in section 2.3, the current definition of length is based on a fixed speed of light, and there are a number of recipes to make an optical wavelength/frequency standard. These optical standards are linked to the time standard (which is a microwave standard) via a series of complicated comparisons to determine an absolute frequency and an uncertainty. Recently a so-called ‘frequency comb’ [38] has been developed that generates a series of equally spaced (the ‘comb’) frequencies by linking a femtosecond pulsed laser to an atomic clock. This makes possible a direct comparison of optical frequencies to the time standard without the need for an intermediate (still primary) standard such as the iodine-stabilized laser. The development of frequency combs is all the more important because, along with the He-Ne-based gas lasers, ranges of solid-state lasers and diode lasers have become available as frequency-stabilized light sources. These can have wavelengths that are very different from the common He-Ne wavelengths (for example, the red 633 nm wavelength) and cannot be measured using a beat measurement with a He-Ne laser, because the beat frequency is too high to be measured directly. Frequency combs will also enable the development of other stabilized laser systems, such as stabilized diode lasers. Diode lasers can have a far wider wavelength range than He-Ne gas lasers and can, for example, be used in swept-frequency absolute distance interferometry as described in section 5.2.7.
2.10 References [1] Flack D R, Hannaford J 2005 Fundamental good practice in dimensional metrology NPL Good practice guide No. 80 (National Physical Laboratory)
[2] 2006 Le Syste`me International d’Unite´s (Bureau International des Poids et Mesures: Paris) 8th edition [3] Howarth P, Redgrave F 2004 Metrology in short (EUROMET) 2nd edition, www.euromet.org/docs/pubs/docs/Metrology_in_short_2nd_edition_ may_2004.pdf [4] Hume K J 1980 A history of engineering metrology (Mechanical Engineering Publications Ltd) [5] Stout K J 1998 From Cubit to nanometre: a history of precision measurement (Prenton Press: London) [6] Barrell H 1962 The metre Contemp. Phys. 3 415–435 [7] Petley B W 1983 The new definition of the metre Nature 303 373–376 [8] Felder R 2005 Practical realization of the definition of the metre, including recommended radiations of other optical frequency standards (2003) Metrologia 42 323–325 [9] Petley B W 1985 The fundamental physical constants and the frontiers of measurement (Adam Hilger Ltd: Bristol) [10] Davis R S 1989 The stability of the SI unit of mass as determined from electrical measurements Metrologia 26 75–76 [11] Kibble B P, Robinson I A 2003 Replacing the kilogram Meas. Sci. Technol. 14 1243–1248 [12] Mills I M, Mohr P J, Quinn T J, Taylor B M, Williams E R 2005 Redefinition of the kilogram: a decision whose time has come Metrologia 42 71–80 [13] Eisenberger A, Jeckelmann B, Richard P 2003 Tracing Plank’s constant to the kilogram by electromechanical methods Metrologia 40 356–365 [14] Becker P 2001 History and progress in the determination of the Avogadro constant Rep. Prog. Phys. 64 1945–2008 [15] Sutherland O, Appolloni M, O’Neil S, Gonzalez del Amo J, Hughes B 2008 Advances with the ESA propulsion laboratory mN thrust balance 5th Int. Space Propulsion Conf., Crete, Greece, May [16] Zhoa Y -P, Wang L S, Yu T X 2003 Mechanics of adhesion in MEMS a review J. Adhesion Sci. Technol. 17 519–546 [17] 1998 The guide to the measurement of force (The Institute of Measurement and Control: London) [18] Weiler W 1984 Realization of forces at the national institutes of metrology (Physikalisch-Technische Bundesanhalt) [19] Evans J C, Taylerson C O 1986 Measurement of angle in engineering (National Physical Laboratory) 3rd edition [20] Slocum A H 1992 Precision machine design (Society of Manufacturing Engineers: USA) [21] ISO VIM: 2004 International vocabulary of basic and general terms in metrology (International Organization for Standardization)
[22] ISO 17025: 2005 Competence of testing and calibration laboratories (International Organization for Standardization) [23] Rae A I M 2007 Quantum mechanics (Chapman & Hall) 5th edition [24] Hecht E 2003 Optics (Pearson Education) 4th edition [25] Dotson C 2006 Fundamentals of dimensional metrology (Delmar Learning) 5th edition [26] Bell S A 2001 A beginner's guide to uncertainty in measurement NPL good practice guide No. 11 (National Physical Laboratory) [27] BIPM, IEC, IFCC, ISO, IUPAP, OIML 1995 Guide to the expression of uncertainty in measurement 2nd edition [28] Bich W, Cox M G, Harris P M 2006 Evolution of the "Guide to the expression of uncertainty in measurement" Metrologia 43 S161–S166 [29] BIPM, IEC, IFCC, ISO, IUPAP, OIML 2008 Evaluation of measurement data Supplement 1 to the "Guide to the expression of uncertainty in measurement" Propagation of distributions using Monte Carlo methods JCGM 101 [30] Svelto O 2005 The principles of lasers (Springer) 4th edition [31] Brillet A, Cérez P 1981 Laser frequency stabilisation by saturated absorption J. de Phys. (France) 42(C-8) 73–82 [32] Darnedde H, Rowley W R C, Bertinetto F, Millerioux Y, Haitjema H, Wetzels S, Pirée H, Prieto E, Mar Pérez M, Vaucher B, Chartier A, Chartier J-M 1999 International comparisons of He-Ne lasers stabilized with ¹²⁷I₂ at λ = 633 nm (July 1993 to September 1995). Part IV: Comparison of Western European lasers at λ = 633 nm Metrologia 36 199–206 [33] Umeda N, Tsujiki M, Takasaki H 1980 Stabilised ³He–²⁰Ne transverse Zeeman laser Appl. Opt. 19 442–450 [34] Fellman T, Junger P, Stahlberg B 1987 Stabilisation of a green He-Ne laser Appl. Opt. 26 2705–2706 [35] Baer T, Kowalski F V, Hall J L 1980 Frequency stabilisation of a 0.633 µm He-Ne longitudinal Zeeman laser Appl. Opt. 19 3173–3177 [36] Rowley W R C 1990 The performance of a longitudinal Zeeman-stabilised He-Ne laser (633 nm) with thermal modulation and control Meas. Sci. Technol. 1 348–351 [37] Tomlinson W J, Fork R L 1968 Properties of gaseous optical masers in weak axial magnetic fields Phys. Rev. 164 480–483 [38] Jones D, Diddams S, Ranka J, Stentz A, Windeler R, Hall J L, Cundiff S T 2000 Carrier envelope phase control of femtosecond mode-locked lasers and direct optical frequency synthesis Science 288 635–639
CHAPTER 3
Precision measurement instrumentation – some design principles The design, development and use of precision measurement instrumentation1 is a highly specialized field that combines precision engineering with metrology. Although precision instrumentation has been around for many decades (see [1] for a historical overview), the measurements that are required to support MNT have forced designers and metrologists to learn a number of new skills. One major difference between conventional scale instrumentation and that used to measure MNT structures and devices is the effect that the measuring instrument has on the measurement process. For example, when measuring surface topography with a stylus instrument (see section 6.6.1), one should be aware of the possible distortion of the topography caused by the finite shape of the stylus. In essence, the business end of the instrument can have a size that is comparable to the structure being measured. This ‘probe–measurand’ interaction will be discussed throughout this book where necessary for each type of instrument. This chapter will present the basic principles of precision instrumentation so that, as the reader is presented with the various instruments in the following chapters, he or she will be armed with the appropriate knowledge to understand the basic operating principles. Precision instrument design involves scientific disciplines such as mechanics, materials, optics, electronics, control, thermo-mechanics, dynamics and software engineering. Introductions to many of the precision design and metrology concepts discussed in this chapter are given elsewhere [2–4]. The rest of the chapter follows the design considerations of [5] and is by no means exhaustive.
CONTENTS
Geometrical considerations
Kinematic design
Dynamics
The Abbe Principle
Elastic compression
Force loops
Materials
Symmetry
Vibration isolation
References
¹ In chapter 2 we discussed the difference between precision and accuracy. When referring to measurement instrumentation the term precision is most often used, but the correct expression should probably be accurate and precision measurement instrumentation.
3.1 Geometrical considerations Most precision measuring instrument designs involve parts that are formed from simple geometrical elements such as cubes, cylinders, tubes, beams, spheres and boxes to support loads in the system. Surfaces that are used for moving elements are often formed from flats and cylinders. In practice, however, deviations from these ideal shapes and structures occur due to form and surface texture error caused by the machining processes used to manufacture the parts. The environment in which an instrument is housed also affects geometry, for example, vibration, temperature gradients and ageing can cause undesirable dimensional changes. Other factors that can affect the geometry of an instrument include: the effects of the connections between different parts, loading of the structure by the weight of the parts, stiffness and other material properties. The above deviations from ideal geometry cause the various parts that make up an instrument to interact in a way that is very difficult to predict in practice. Also, to reiterate the point made in the previous section, of great importance on the MNT scale is the effect of the measuring probe on the part being measured and the measuring result.
3.2 Kinematic design

James Clerk Maxwell (1890) was one of the first scientists to rigorously consider kinematic design. He stated that:

The pieces of our instruments are solid, but not rigid. If a solid piece is constrained in more than six ways it will be subject to internal stress, and will become strained or distorted, and this in a manner which, without the most micromechanical measurements, it would be impossible to specify.

These sentences capture, essentially, the main concepts of kinematic design. Kinematics is a branch of mechanics that deals with relationships between the position, velocity and acceleration of a body. Kinematic design aims to impart the required movements on a body by means of constraints [6]. A rigid body possesses six degrees of freedom in motion – three linear and three rotational. In Cartesian coordinates the degrees of freedom are in the x, y and z directions plus rotations about each of the axes. A constraint is that which minimally prevents motion in just one of the degrees of freedom. There are two lemmas of kinematic design [3]:
- any unconstrained rigid body has six degrees of freedom;
- the number of contact points between any two perfectly rigid bodies is equal to the number of constraints.
This means that

number of constraints + remaining number of degrees of freedom = 6.

There are often many assumptions applied when carrying out kinematic design. Real bodies are not perfectly rigid and will experience both elastic and possibly plastic deformations under a load. Such deformations will exclude perfect point contacts and cause unwanted motions. For this reason it is often important to choose with care the materials, shapes and surface texture of a given part. Despite this, kinematic design is an extremely important concept that the designer must master. Two examples of kinematic design will be considered here – the Kelvin clamp and a single degree of freedom motion system. These are, essentially, the only two kinematic designs used on the majority of MNT measuring instruments.
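As a simple numerical check of this rule, the Python sketch below (not from the original text; the contact counts are those of the clamp and slideway examples discussed in this chapter) counts the point contacts of a coupling and reports the degrees of freedom that remain.

    # Each point contact between two rigid bodies removes one degree of freedom,
    # so the remaining freedom is 6 minus the number of contacts.

    def remaining_dof(contact_points: int) -> int:
        """Return the degrees of freedom left after `contact_points` constraints."""
        if not 0 <= contact_points <= 6:
            raise ValueError("a rigid coupling has between 0 and 6 point contacts")
        return 6 - contact_points

    examples = {
        "Type I Kelvin clamp (flat + vee + trihedral hole)": 1 + 2 + 3,
        "Type II Kelvin clamp (three vee-grooves)": 3 * 2,
        "Prismatic slideway (three + two spheres)": 3 + 2,
    }

    for name, contacts in examples.items():
        print(f"{name}: {contacts} constraints, {remaining_dof(contacts)} dof remaining")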
3.2.1 The Kelvin clamps The Type I and Type II Kelvin clamps are examples of fully constrained systems, i.e. ones with six constraints. When designed properly these clamps are very effective where accurate re-positioning is required and are stable to within nanometres [7]. Both clamps have a top-plate (on which, for example, the object to be measured is placed) that has three rigid spheres spaced on a diameter. The three spheres then contact on a flat and in a vee and a trihedral hole, as in Figure 3.1a, or in three vee-grooves, as in Figure 3.1b. In the Type II clamp it is easy to see where the six points of contact, i.e. constraints are – two in each vee-groove. In the Type I clamp one contact point is on the flat, two more are in the vee-groove and the final three are in the trihedral hole. The Type I clamp has the advantage of a well-defined translational location based on the position of the trihedral hole, but it is more difficult to manufacture. A trihedral hole is produced by pressing three spheres together in a flatbottomed hole (the contacting sphere will then touch at a common tangent) or by complex angled machining techniques. For miniature structures an anisotropic etchant can be used on a single crystalline material [8]. The Type II clamp is more symmetrical and less influenced by thermal variations. Note that the symmetrical groove pattern confers its own advantages but is not a kinematic requirement; any set of grooves will do provided that they are not all parallel.
FIGURE 3.1 (a) A Type I Kelvin clamp, (b) a Type II Kelvin clamp.
3.2.2 A single degree of freedom motion device There are many methods for producing single degree of freedom motion (see for example [9]). One method that directly uses the idea of single point contacts is the prismatic slideway [3]. The contact points are distributed on two non-parallel flat surfaces as shown in Figure 3.2. In practice the spheres would be attached to the carriage. The degrees of freedom in the system can be deduced by considering the loading necessary to keep all five spheres in contact. Firstly, the three-point support could be positioned onto the horizontal plane, resulting in a linear constraint in the z axis and rotary constraints about the x and y axes. A carriage placed on this plane is free to slide in the x direction until either of the two remaining spheres contacts the vertical face. The x axis linear degree of freedom is then constrained. Further horizontal force would cause the carriage to rotate until the fifth sphere comes into contact, removing the rotary degree of freedom about the z axis. This gives a single degree of freedom linear motion along the y axis.
3.3 Dynamics Most precision instruments used for MNT metrology involve some form of moving part. This is especially true of surface texture measuring instruments and CMMs. Motion usually requires some form of guideway, this being two or more elements that move relative to each other with fixed degrees of freedom. For accurate positioning, the play and the friction between the parts in the guideway must be reduced (unless the friction characteristics are being used to impart damping on the guideway). To avoid sticking and slipping of
FIGURE 3.2 A single degree of freedom motion device.
the guideway the friction should normally be minimised and kept at a constant value even when there are velocity or acceleration changes. It is also important that a guideway has a smooth motion profile to avoid high accelerations and forces. The symmetry of a dynamic system plays an important role. With a rotating part the unbalance and mass moment of inertia must be reduced. A linear guideway should be driven through an axis that minimizes any angular motion in its travel (its axis of reaction). Stiffness is another important factor; there must be a trade-off between minimizing the forces on a guideway and maximizing its stiffness. As with the metrology frame the environment in which the instrument is housed affects its dynamic characteristics. Guideways can be produced using many techniques, but the most popular three are:
- flexures – usually used only over a small range owing to the elastic limit and parasitic motion [3,10,11];
- dry or roller-bearing linear slideways – as used on surface profile measuring instruments, for example [12];
- hydrostatic bearings (air bearings) [4].
Many of the most advanced guideways use active feedback control systems [13,14].
3.4 The Abbe Principle The Abbe Principle was first described by Ernst Abbe (1890) of Zeiss and states: If errors of parallax are to be avoided, the measuring system must be placed co-axially (in line with) the line in which displacement (giving length) is to be measured on the work-piece. Abbe error occurs when the measuring point of interest is displaced laterally from the actual measuring scale location (reference line or axis of measurement), and when angular errors exist in the positioning system. Abbe error causes the measured displacement to appear longer or shorter than the true position, depending on the angular offset. The spatial separation between the measured point and reference line is known as the Abbe offset. Figure 3.3 shows the effect of Abbe error on an interferometric measurement of length. To ensure zero Abbe error, the reflector axis of movement should be co-linear with the axis of measurement. To account for the Abbe error in an uncertainty analysis relies on knowing the magnitude of the Abbe offset and the magnitude of the errors in motion of the positioning system (for example, straightness).
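To give a feel for the magnitudes involved, the following sketch (an illustration, not taken from the original text; the offset and angular error values are nominal) estimates the first-order Abbe error, which is the Abbe offset multiplied by the sine of the angular error of the positioning system.

    import math

    def abbe_error(offset: float, angular_error_rad: float) -> float:
        """First-order Abbe error (m) for an Abbe offset (m) and an angular error (rad)."""
        return offset * math.sin(angular_error_rad)

    # Example: a 50 mm Abbe offset combined with a 1 arcsecond pitch error of the slideway.
    offset = 50e-3                                  # m
    angle = math.radians(1.0 / 3600.0)              # 1 arcsecond in radians
    print(f"Abbe error: {abbe_error(offset, angle) * 1e9:.0f} nm")   # roughly 240 nm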
FIGURE 3.3 Effects of Abbe error on an optical length measurement.
The Abbe Principle is, perhaps, the most important principle in precision instrument design and is also one that is commonly misunderstood – Bryan [15] described it as ‘the first principle of machine design and dimensional metrology’. Abbe’s original paper concentrated on one-dimensional measuring instruments. Bryan re-stated the Abbe Principle for multi-dimensional systems as: The displacement measuring system should be in line with the functional point whose displacement is to be measured. If this is not possible, either the slideways that transfer the displacement must be free of angular motion or angular motion data must be used to calculate the consequences of the offset. Many three-axis instruments, especially coordinate measuring machines (CMMs), attempt to minimize the Abbe error through good design principles (see chapter 8). Two good examples of this are the Zeiss F25 CMM [16] and an elastically guided CMM developed at the Eindhoven University of Technology [17].
3.5 Elastic compression

When any instrument uses mechanical contact, or when different parts of an instrument are in mechanical contact, there will be some form of compression due to any applied forces. With good design such compression will be minimal and can be considered negligible, but when micrometre or nanometre tolerances or measurement uncertainties are required, elastic compression must be accounted for, either by making appropriate corrections or taking account of the compression in an uncertainty analysis. In some cases where the applied load is relatively high, irreversible, or plastic, deformation may occur. This is especially probable when using either high forces or small contact areas, for example when using stylus instruments (see section 6.6.1) or atomic force microscopes (see section 7.3). The theory behind elastic and plastic deformation can be found in detail elsewhere [18]. The amount that a body compresses under applied load depends on:
- the measurement force or applied load;
- the geometry of the bodies in contact;
- the material characteristics of the bodies in contact;
- the type of contact (point, line, etc.);
- the length of contact.
The formulae for calculating the amount of compression for most situations can be found in [18] and there are a number of calculators available on the Internet (see for example emtoolbox.nist.gov/Main/Main.asp). The most common cases will be included here. More examples of simple compression calculations are given elsewhere [2]. For a sphere in contact with a single plane (see Figure 3.4), the mutual compression (i.e. the combined compression of the sphere and the plane) is given by

\alpha = \frac{(3\pi)^{2/3}}{2}\, P^{2/3} (V_1 + V_2)^{2/3} \left(\frac{1}{D}\right)^{1/3}    (3.1)

where D is the diameter of the sphere, P is the total applied force and V is defined as

V = \frac{1 - \sigma^2}{\pi E}    (3.2)

where E is the Young's modulus of the material and σ is Poisson's ratio. Note that the assignment of the subscript for the two materials is arbitrary due to the symmetry of the interaction. For a sphere between two parallel planes of similar material, equation (3.1) is modified by removing the factor of two in the denominator. For a cylinder in contact with a plane, the compression is given by

\alpha = P'(V_1 + V_2)\left[1 + \ln\frac{8a^2}{(V_1 + V_2)P'D}\right]    (3.3)
FIGURE 3.4 Mutual compression of a sphere on a plane.
where 2a is the length of the cylinder and the force per unit length, P′, is given by

P' = \frac{P}{2a}.    (3.4)
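As an illustration of the size of these effects, the sketch below evaluates equation (3.1) for a small sphere pressed against a flat; the material values and the 1 mN force are nominal handbook-style figures chosen for illustration, not data from the original text.

    import math

    def elastic_modulus_term(E: float, poisson: float) -> float:
        """V = (1 - sigma^2) / (pi * E), as used in equations (3.1) to (3.3)."""
        return (1.0 - poisson**2) / (math.pi * E)

    def sphere_on_plane_compression(force: float, diameter: float,
                                    V1: float, V2: float) -> float:
        """Mutual compression of a sphere on a plane, equation (3.1), in metres."""
        return ((3.0 * math.pi)**(2.0 / 3.0) / 2.0 * force**(2.0 / 3.0)
                * (V1 + V2)**(2.0 / 3.0) * (1.0 / diameter)**(1.0 / 3.0))

    # Nominal values: a 2 mm diameter ruby stylus tip on a steel surface with a
    # 1 mN contact force (typical of a contacting probe).
    V_ruby = elastic_modulus_term(400e9, 0.29)
    V_steel = elastic_modulus_term(210e9, 0.30)
    alpha = sphere_on_plane_compression(1e-3, 2e-3, V_ruby, V_steel)
    print(f"mutual compression: {alpha * 1e9:.1f} nm")   # a few nanometres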
Plastic compression is much more complicated than elastic compression and will be highly dependent upon the types of materials and surfaces considered. Many examples of both elastic and plastic compression are considered in [19].
3.6 Force loops There are three types of loop structures found on precision measuring instruments: structural loops, thermal loops and metrology loops. These three structures are often interrelated and can sometimes be totally indistinguishable from each other.
3.6.1 The structural loop A structural loop is an assembly of mechanical components that maintain relative position between specified objects. Using a stylus surface texture measuring instrument as an example (see section 6.6.1) we see the structural loop runs along the base-plate and up the bridge, through the probe, through the object being measured, down through the x slideway and back into the base-plate to close the loop. It is important that the separate components in the structural loop have high stiffness to avoid deformations under loading conditions – deformation in one component will lead to uncompensated dimensional change at the functional or measurement point.
3.6.2 The thermal loop The thermal loop is described as: ‘a path across an assembly of mechanical components, which determines the relative position between specified objects under changing temperatures’ [5]. Much akin to mechanical deformations in the structural loop, temperature gradients across an instrument can cause thermal expansion and resulting dimensional changes. It is possible to compensate for thermal expansion by choosing appropriate component lengths and materials. If well designed, and if there are no temperature gradients present, it may just be necessary to make the separate components of an instrument from the same material. Thermal expansion can also be compensated by measuring thermal expansion coefficients and temperatures, and applying appropriate corrections to measured lengths.
This practice is common in gauge block metrology where the geometry of the blocks being measured is well known [20]. Obviously, the effect of a thermal loop can be minimized by controlling the temperature stability of the room in which the instrument is housed.
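For instance, the correction applied in gauge block metrology is a simple linear one; the short sketch below (illustrative only, with a nominal steel expansion coefficient and invented readings) reduces a length measured at the laboratory temperature to the reference temperature of 20 °C.

    def length_at_20C(measured_length: float, alpha: float, temperature: float) -> float:
        """Reduce a measured length to 20 degrees C using l20 = l * (1 - alpha * (t - 20))."""
        return measured_length * (1.0 - alpha * (temperature - 20.0))

    # Nominal example: a 100 mm steel gauge block (alpha about 11.5e-6 per K) measured at 20.5 degrees C.
    l_measured = 0.100000580                      # m
    l_corrected = length_at_20C(l_measured, 11.5e-6, 20.5)
    print(f"thermal correction: {(l_corrected - l_measured) * 1e9:.0f} nm")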
3.6.3 The metrology loop A metrology loop is a reference frame for displacement measurements, independent of the instrument base. In the case of many surface texture measuring instruments or CMMs, it is very similar to the structural loop. The metrology loop should be made as small as possible to avoid environmental effects. In the case of an optical instrument, relying on the wavelength of its source for length traceability, much of the metrology loop may be the air paths through which the beam travels. Fluctuations in the air temperature, barometric pressure, humidity and chemical composition of these air paths cause changes in the refractive index and corresponding changes to the wavelength of the light [21,22]. This can cause substantial dimensional errors. The last example demonstrates that the metrology and structural loops can be quite different.
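The sensitivity of such an air-path metrology loop to the environment can be estimated with simple rule-of-thumb coefficients; the sketch below uses approximate sensitivities of the air refractive index (roughly −1 × 10⁻⁶ per °C, +2.7 × 10⁻⁷ per hPa and −1 × 10⁻⁸ per % relative humidity), which are indicative values only and are not the Edlén equations of [21,22].

    # Indicative sensitivities of the refractive index of air (approximate values).
    DN_PER_KELVIN = -1.0e-6      # per degree C
    DN_PER_HPA = 2.7e-7          # per hPa of pressure
    DN_PER_RH = -1.0e-8          # per percent relative humidity

    def apparent_length_change(path_length, dT, dP_hPa, dRH):
        """Approximate change in optical path length for deviations in T, P and RH (metres)."""
        dn = DN_PER_KELVIN * dT + DN_PER_HPA * dP_hPa + DN_PER_RH * dRH
        return path_length * dn

    # Example: a 300 mm air path with a 0.5 degree C temperature rise and a 3 hPa pressure change.
    print(f"{apparent_length_change(0.300, 0.5, 3.0, 0.0) * 1e9:.0f} nm")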
3.7 Materials

Nearly all precision measuring instrument designs involve minimizing the influence of mechanical and thermal inputs which vary with time and which cause distortion of the metrology frame. Exceptions to this statement are, of course, sensors and transducers designed to measure mechanical or thermal properties. There are three ways (or combinations of these ways) to minimize the effects of disturbing inputs:
- isolate the instrument from the input, for example using thermal enclosures and anti-vibration tables;
- use design principles and choose materials that minimize the effect of disturbing inputs, for example, thermal compensation design methods, materials with low coefficients of expansion and stiff structures with high natural frequencies;
- measure the effect of the disturbing influences and correct for them.
The choice of materials for precision measuring instruments is closely linked to the design of the force loops that make up the metrology frame.
3.7.1 Minimizing thermal inputs

Thermal distortions will usually be a source of inaccuracy. To find a performance index for thermal distortion consider a horizontal beam supported at both ends, of length L and thickness h [23]. One face of the beam is exposed to a heat flux of intensity Q in the y direction that sets up a temperature gradient, dT/dy, across the beam. Assuming the period of the heat flux is greater than the thermal response time of the beam, then a steady state is reached with a temperature gradient given by

Q = \lambda \frac{dT}{dy}    (3.5)

where λ is the thermal conductivity of the beam. The thermal strain is given by

\varepsilon = \alpha (T_0 - T)    (3.6)

where α is the thermal expansion coefficient and T₀ is the ambient temperature. If the beam is unconstrained, any temperature gradient will create a strain gradient, dε/dy, in the beam causing it to take up a constant curvature given by

K = \frac{d\varepsilon}{dy} = \alpha \frac{dT}{dy} = \frac{\alpha}{\lambda} Q.    (3.7)

Integrating along the beam gives the central deflection of

\delta = C_1 L^2 \frac{\alpha}{\lambda} Q    (3.8)

where C₁ is a constant that depends on the thermal loads and the boundary conditions. Thus for a given geometry and thermal input, the distortion is minimized by selecting materials with large values of the performance index

M_Q = \frac{\lambda}{\alpha}.    (3.9)

References [24] and [3] arrive at the same index by considering other types of thermal load. If the assumption that the period of the heat flux is greater than the thermal response time of the beam is not valid then the thermal mass of the beam has to be taken into account [24]. In this case the thermal conductivity is given by

\lambda = D \rho C_p    (3.10)

where D is the thermal diffusivity of the beam material, ρ is its density and C_p is its specific heat capacity. In the case of a room with stable temperature and very slow heat cycling equation (3.9) is normally valid.
3.7.2 Minimizing mechanical inputs

There are many types of mechanical input that will cause unwanted deflections of a metrology frame. These include elastic deflections due to self weight, loading due to the object being measured and external vibration sources. To minimize elastic deflections a high stiffness is desirable. The elastic self-deflection of a beam is described by

y = C_2 \frac{W x^3}{EI}    (3.11)

where W is the weight of the beam, E is the Young's modulus of the beam material, I is the second moment of area of the cross-section and C₂ is a constant that depends on the geometry of the beam and the boundary conditions. It can be seen from equation (3.11) that, for a fixed design of instrument, the self-loading is proportional to ρ/E – minimizing this ratio minimizes the deflection. The natural frequency of a beam structure is given by

\omega_n = C_3 \sqrt{\frac{EI}{m l^3}}    (3.12)

where n is the harmonic number, m is the mass per unit length of the beam, l its length and C₃ is a constant that depends on the boundary conditions. Again, for a fixed design of instrument, ωₙ is directly proportional to √(E/ρ). For a high natural frequency and, hence, insensitivity to external vibrations it is, once again, desirable to have high stiffness. As with the thermal performance index, a mechanical performance index can be given by

M_m = \frac{E}{\rho}.    (3.13)
Insensitivity to vibration will be discussed in more detail in section 3.9.
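To see how these two indices rank common instrument materials, the sketch below compares λ/α and E/ρ for a few candidates; the property values are rounded handbook-style figures used purely for illustration and not data from the original text.

    # Approximate room-temperature properties: (thermal conductivity W/(m K),
    # expansion coefficient 1/K, Young's modulus Pa, density kg/m^3).
    materials = {
        "aluminium alloy": (180.0, 23e-6, 70e9, 2700.0),
        "steel":           (50.0, 12e-6, 210e9, 7850.0),
        "invar":           (10.0, 1.2e-6, 140e9, 8100.0),
        "zerodur":         (1.5, 0.05e-6, 90e9, 2530.0),
    }

    for name, (k, alpha, E, rho) in materials.items():
        M_Q = k / alpha          # thermal distortion index, equation (3.9), in W/m
        M_m = E / rho            # stiffness per unit mass index, equation (3.13), in J/kg
        print(f"{name:16s}  M_Q = {M_Q:9.2e} W/m   M_m = {M_m:8.2e} J/kg")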
3.8 Symmetry Symmetry is a very important concept when designing a precision measuring instrument. Any asymmetry in a system normally has to be compensated for. In dynamics it is always better to push or pull a slideway about its axis of reaction otherwise parasitic motions will result due to asymmetry. If a load-bearing structure does not have a suitably designed centre of mass, there will be differential distortion upon loading. It would seem that
FIGURE 3.5 Kevin Lindsey with the Tetraform grinding machine.
symmetry should be incorporated into a precision measuring instrument design to the maximum extent. An excellent example of a symmetrical structure (plus many other precision instrument design concepts) is the Tetraform grinding machine developed by Kevin Lindsey at NPL [25,26]. The symmetrical tetrahedral structure of Tetraform can be seen in Figure 3.5. Calculations and experimental results showed that the Tetraform is extremely well compensated for thermal and mechanical fluctuations.
3.9 Vibration isolation Most precision measuring instruments require some form of isolation from external and internal mechanical excitations. Where sub-nanometre accuracy is required it is essential that seismic and sonic vibration is suppressed. This section will discuss some of the issues that need to be considered when trying to isolate a measuring instrument from vibration. The measurement of vibration is discussed in [27] and vibration spectrum analysis is reviewed in [28].
3.9.1 Sources of vibration Different physical influences contribute to different frequency bands in the seismic vibration spectrum, a summary of which is shown in Table 3.1 and discussed in [27].
Table 3.1  Sources of seismic vibration and corresponding frequencies [27]

Frequency/mHz    Cause of vibration
< 50             Atmospheric pressure fluctuations
50 to 500        Ocean waves (60 mHz to 90 mHz fundamental ocean wave frequency)
> 100            Wind-blown vegetation and human activity
Figure 3.6 shows measured vertical amplitude spectral densities for a vibrationally ‘noisy’ and a vibrationally ‘quiet’ area [29]. Note that the spectrum below 0.1 Hz is limited by the seismometer’s internal noise. The solid curve represents the vibration spectrum on the campus of the University of Colorado, Boulder. The dashed curve is that from the NIST site. The ‘quiet’ NIST laboratory is small, remote and separated from the main complex. In addition, all fans and machinery were turned off during the measurements at the NISTsite. Most of the increased vibration in the solid line above 10 Hz in Figure 3.6 can be attributed to human activity and machinery. The low-frequency peak in the dashed line can be attributed to naturally occurring environmental effects such as high winds. For determining the low-frequency vibrations a gravitational wave detector, in the form of a Michelson interferometer with 20 m arms, has been used to measure vibrations 1 km below sea level [30]. A summary of the results is given in Table 3.2.
FIGURE 3.6 Measured vertical amplitude spectrum on a ‘noisy’ (continuous line) and a ‘quiet’ (dotted line) site [29].
Table 3.2  Possible sources of very-low-frequency vibration

Source                              Period           Acceleration/m·s⁻²
Earth's free seismic oscillation    10² – 10³ s      10⁻⁶ – 10⁻⁸
Core modes                          10³ s            10⁻¹¹
Core undertone                      10³ – 10⁴ s      10⁻¹¹
Earth tides                         10⁴ – 10⁵ s      10⁻⁶
Post-seismic movements              1 – 10³ days     10⁻⁶ – 10⁻⁸
Crustal movements                   10² days         10⁻⁷ – 10⁻⁹
3.9.2 Passive vibration isolation

Simple springs and pendulums can provide vibration isolation in both vertical and horizontal directions. The transmissibility of an isolator is the proportion of a vibration as a function of frequency that is transmitted from the environment to the structure of the isolator. For a single degree of freedom vibration isolation system the transmissibility, T, is given by [30]

T = \frac{\omega_0^2}{\sqrt{(\omega_0^2 - \omega^2)^2 + 4\gamma^2 \omega_0^2 \omega^2}}    (3.14)

where ω₀ is the resonant frequency of the isolator and γ is the viscous damping factor. Figure 3.7 shows the transmissibility as a function of frequency ratio for various damping factors. Vibration isolation is provided only above √2 times the natural frequency of the system; well above resonance the transmissibility falls as

T = \left(\frac{f_0}{f}\right)^2 \quad \text{for } f \gg f_0.    (3.15)

Therefore, to provide vibration isolation at low frequencies, the resonant frequency of the isolation system must be as low as possible. The resonant frequency for a pendulum is given by

f_0 = \frac{1}{2\pi}\sqrt{\frac{g}{l}}    (3.16)

and by

f_0 = \frac{1}{2\pi}\sqrt{\frac{k}{m}}    (3.17)

for a spring, where g is the acceleration due to gravity, l is the pendulum length, k is the spring constant and m is the mass.
FIGURE 3.7 Damped transmissibility, T, as a function of frequency ratio (ω/ω₀)
Re-writing equation (3.17) in terms of the static extension or compression of a spring, δl, gives

f_0 = \frac{1}{2\pi}\sqrt{\frac{g}{\delta l}}    (3.18)

since the static restoring force kδl = mg. Thus for a low resonant frequency in a spring system it is necessary to have a large static extension or compression (or use a specialized non-linear spring).
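The sketch below (illustrative values only) evaluates equation (3.14) to show how much a passive isolator attenuates well above resonance, and uses equation (3.18) to estimate the resonant frequency implied by a given static deflection.

    import math

    def transmissibility(f: float, f0: float, damping: float) -> float:
        """Single degree of freedom transmissibility, equation (3.14), frequencies in Hz."""
        w, w0 = 2.0 * math.pi * f, 2.0 * math.pi * f0
        return w0**2 / math.sqrt((w0**2 - w**2)**2 + 4.0 * damping**2 * w0**2 * w**2)

    def resonant_frequency_from_deflection(static_deflection: float) -> float:
        """Resonant frequency of a spring isolator from its static deflection, equation (3.18)."""
        return math.sqrt(9.81 / static_deflection) / (2.0 * math.pi)

    f0 = resonant_frequency_from_deflection(0.010)       # 10 mm static compression
    print(f"f0 = {f0:.1f} Hz")                            # about 5 Hz
    print(f"T at 50 Hz = {transmissibility(50.0, f0, 0.05):.3f}")   # about 0.01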
3.9.3 Damping In vibration-isolation systems it is important to have damping, to attenuate excessive vibration near resonance. In equation (3.14) it is assumed that velocity-dependent (viscous) damping is being applied. This is attractive since viscous damping does not degrade the high-frequency performance of the system. The effects at resonance due to other forms of damping can be represented in terms of an ‘equivalent viscous damping’, using energy dissipation per cycle as the criterion of equivalence [31]. However, in such cases, the value of the equivalent viscous damping is frequency-dependent and, therefore, changes the system behaviour. For hysteresis or structural damping, the damping term depends on displacement instead of velocity.
3.9.4 Internal resonances A limit to high-frequency vibration isolation is caused by internal resonances of the isolation structure or the object being isolated [32]. At low frequencies
the transmissibility is accurately represented by the simple theory given by equation (3.14), but once the first resonance is reached, the isolation does not improve. Typically the fundamental resonance occurs somewhere in the acoustic frequency range. Even with a careful design it is difficult to make a structure of an appreciable size with internal resonant frequencies above a few kilohertz.
3.9.5 Active vibration isolation Active vibration isolation is a method for extending the low-frequency isolation capabilities of a system, but is very difficult in practice. Single degree of freedom isolation systems are of little practical use because a nonisolated degree of freedom reintroduces the seismic noise even if the other degrees of freedom are isolated. Active vibration isolation uses actuators as part of a control system essentially to cancel out any mechanical inputs. An example of a six degree of freedom isolation system has been demonstrated [29] for an interferometric gravitational wave detector.
3.9.6 Acoustic noise

Acoustic noise appears in the form of vibrations in a system generated by ventilators, music, speech, street noise, etc. over a frequency range from about 10 Hz to 1000 Hz in the form of sharp coherent resonances as well as transient excitations [33]. Sound pressure levels in a typical laboratory environment are greater than 35 dB, usually due to air-conditioning systems. Consider an enclosure that is a simple bottomless rectangular box whose walls are rigidly attached at each edge. When a panel is acoustically excited by a diffuse sound field, forced bending waves govern its sound transmission characteristics and the sound pressure attenuation is determined by the panel mass per unit area [32]. The panel sound pressure attenuation (in dB) is given by [34]

a = 10 \log_{10}\left[1 + \left(\frac{\pi \rho_s f}{\rho_0 c}\right)^2\right] + 5 \ \mathrm{dB}    (3.19)

where ρ_s is its mass per unit area, ρ₀ is the density of air at standard pressure, c is the speed of sound in air and f is the incident acoustic field frequency. Equation (3.19) suggests that the enclosure wall should be constructed from high-density materials to obtain the largest ρ_s possible given the load-bearing capacity of any supporting structure. Note that the attenuation increases by 20 dB per decade increase in either ρ_s or frequency.
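As a rough worked example (not from the original text; the air properties and the panel are nominal values), the sketch below evaluates equation (3.19) for a dense enclosure panel at two frequencies, showing the roughly 20 dB gain per decade.

    import math

    RHO_AIR = 1.2      # kg/m^3, density of air (nominal)
    C_AIR = 343.0      # m/s, speed of sound in air (nominal)

    def panel_attenuation(mass_per_area: float, frequency: float) -> float:
        """Sound pressure attenuation of a panel in dB, equation (3.19)."""
        x = math.pi * mass_per_area * frequency / (RHO_AIR * C_AIR)
        return 10.0 * math.log10(1.0 + x**2) + 5.0

    # A 12 mm steel panel has a mass per unit area of roughly 94 kg/m^2.
    for f in (100.0, 1000.0):
        print(f"{f:6.0f} Hz: {panel_attenuation(94.0, f):5.1f} dB")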
3.10 References [1] Hume K J 1980 A history of engineering metrology (Mechanical Engineering Publications Ltd) [2] Flack D R, Hannaford J 2005 Fundamental good practice in dimensional metrology. NPL good practice guide No. 80 (National Physical Laboratory) [3] Smith S T, Chetwynd D G 1992 Foundations of ultraprecision mechanism design (Gordan & Breach Science Publishers) [4] Slocum A H 1992 Precision machine design (Society of Manufacturing Engineers: Michigan) [5] Schellekens P, Roseille N, Vermeulen J, Vermeulen M, Wetzels S, Pril W 1998 Design for high precision: current status and trends Ann. CIRP 2 557–586 [6] Nagazawa H 1994 Principles of precision engineering (Oxford Science Publications) [7] Schouten C H, Rosielle P C J N, Schellekens P H J 1997 Design of a kinematic coupling for precision applications Precision Engineering 20 46–52 [8] Petersen K E 1982 Silicon as a mechanical material Proc. IEEE 70 420–456 [9] Monteiro A F, Smith S T, Chetwynd D G 1996 A super-precision linear slideway with angular correction in three axes Nanotechnology 7 27–36 [10] Smith S T 2000 Flexures: elements of elastic mechanisms (Gordon & Breach Science Publishers) [11] Teo T J, Chen I-M, Yang G, Lin W 2008 A flexure-based electromagnetic linear actuator Nanotechnology 19 515501 [12] Leach R K 2000 Traceable measurement of surface texture at the National Physical Laboratory using NanoSurf IV Meas. Sci. Technol. 11 1162–1172 [13] Hicks T R, Atherton P D 1997 The nanopositioning book: moving and measuring to better than a nanometre (Queensgate Instruments) [14] Atherton P D 1998 Nanometre precision mechanisms Measurement þ Control 31 37–42 [15] Bryan J B 1979 The Abbe´ principle revisited: an updated interpretation Precision Engineering 1 129–132 [16] Vermeulen M M P A 1999 High precision 3D coordinate measuring machine, design and prototype development (PhD thesis: Eindhoven University of Technlogy) [17] van Seggelen J K, Roseille P C J N, Schellenkens P H J, Spaan H A M, Bergmans R H, Kotte G J W L 2005 An elastically guided machine axis with nanometer repeatability Ann. CIRP 54 487–490 [18] Hearn E J 1997 Mechanics of materials volume 1: an introduction to the mechanics of elastic and plastic deformation of solids and structural materials (Butterworth-Heinneman) 3rd edition [19] Young W C, Budynas R 2001 Roark’s formulas for stress and strain (McGrawHill Professional) 7th edition
[20] Hughes E B 1996 Measurement of the linear thermal expansion coefficient of gauge blocks by interferometry Proc. SPIE 2088 179–189 [21] Edle´n B 1966 The refractive index of air Metrologia 2 71–80 [22] Birch K P, Downs M J 1994 Correction to the updated Edle´n equation for the refractive index of air Metrologia 31 315–316 [23] Cebon D, Ashby M F 1994 Materials selection for precision instruments Meas. Sci. Technol. 5 296–306 [24] Chetwynd D G 1987 Selection of structural materials for precision devices Precision Engineering 9 3–7 [25] Lindsey K 1992 Tetrafrom grinding Proc. SPIE 1573 129–135 [26] McKeown P A, Corbett J, Shore P, Morantz P 2008 Ultraprecision machine tools - design and development Nanotechnology Perceptions 4 5–14 [27] Reilly S P, Leach R K 2006 Critical review of seismic vibration isolation techniques NPL Report DEPC-EM 007 [28] Goldman S 1999 Vibration spectrum analysis: a practical approach (Industrial Press: New York) 2nd edition [29] Newell D B, Richman S J, Nelson P G, Stebbins R T, Bender P L, Mason J 1997 An ultra-low-noise, low-frequency, six degrees of freedom active vibration isolator Rev. Sci. Instrum. 68 3211–3219 [30] Araya A 2002 Ground noise studies using the TAMA300 gravitational-wave detector and related highly sensitive instruments Proc. 7th Int. Workshop on Accelerometer Alignment 367–378 [31] Weaver W, Timoshenko S P, Young D H 1990 Vibration problems in engineering (Wiley-IEEE) 5th edition [32] Beranek L L, Ve´r I L 1993 Noise and vibration control engineering: principles and applications (Wiley Interscience) [33] Filinski I, Gordon R A 1974 The minimization of ac phase noise in interferometric systems Rev. Sci. Instrum. 65 576–58 [34] Brenan C J H, Charette P G, Hunter I W 1992 Environmental isolation platform for microrobot system development Rev. Sci. Instrum. 63 3492–3498
CHAPTER 4
Length traceability using interferometry
Dr. Han Haitjema, Mitutoyo Research Centre Europe
4.1 Traceability in length

A short historical overview of length measurement was given in chapter 2. This chapter will take one small branch of length measurement, that of static length standards, and discuss in detail how the most accurate length measurements are made on macro-scale length standards using the technique of interferometry. These macro-scale length standards and the specialist equipment used for their measurement may not appear, at first sight, to have much relevance to MNT. However, macro-scale length standards are measured to nanometre uncertainties and many of the concepts discussed in this chapter will have relevance in later chapters. For example, much of the information here that relates to static surface-based interferometry will be developed further or modified in chapter 5, which discusses the development of displacement interferometry. It is also important to discuss traditional macro-scale length standards, both specification standards and artefact standards, because the subject of this book is engineering nanometrology. In other words, this book is concerned with the tools, theory and practical application of nanometrology in an engineering context, rather than as an academic study. It is anticipated that the development of standards for engineering nanometrology will very much follow the route taken for macro-scale engineering in that problems concerning the interoperability of devices, interconnections, tolerancing and standardization will lead to the requirement for testing and calibration, and this in turn will lead to the writing of specification standards and the preparation of nanoscale artefact standards and the metrology tools with which to calibrate them. It may well be that a MNT version of the ISO Geometrical
CONTENTS
Traceability in length
Gauge blocks – both a practical and traceable artefact
Introduction to interferometry
Interferometer designs
Gauge block interferometry
References
Product Specification (GPS) matrix [1] will evolve to serve the needs for dimensional metrology at these small scales. A discussion on this subject is as presented in [2]. There is a large range of macro-scale length standards and length measuring instruments that are used throughout engineering, for example simple rulers, callipers, gauge blocks, setting rods, micrometers, step gauges, coordinate measuring machines, linescales, ring and plug gauges, verniers, stage micrometers, depth gauges, ball bars, laser trackers, ball plates, thread gauges, angle blocks, autocollimators, etc.; the list is quite extensive [3]. For any of these standards or equipment to be of any practical application to engineers, end users or metrologists, the measurements have to be traceable. Chapter 2 explained the concept of traceability and described the comparison chain for some quantities. In this chapter we will examine in detail the traceable measurement of some of the length standards with the most basic concepts known as gauge blocks and, in doing so, we will show many of the basic principles of interferometry – perhaps the most directly traceable measurement technique for length metrology.
4.2 Gauge blocks – both a practical and traceable artefact

As discussed in section 2.3, the end standard is one of the basic forms of material length artefact (a line standard being the alternative form of artefact). It is not only the basic form of an end standard that makes them so popular, but also the fact that Johansson greatly enhanced the practical usability of end standards by defining gauge block sizes so that they could be used in sets and be combined to give any length with micrometre accuracy [3,4]. For these reasons the end standard found its way from the NMIs through to the shop floor. In summary, the combination of direct traceability to the level of primary standards, the flexibility of combining them to produce any length with a minimal loss of accuracy, their availability in a range of accuracy classes and materials and the standardization of sizes and accuracies make end standards widespread, and their traceability well established and respected. The most commonly used gauge blocks have a standardized cross-section of 9 mm by 35 mm for a nominal length ln > 10 mm and 9 mm by 30 mm for nominal length 0.5 mm < ln < 10 mm. The flatness of the surfaces (less than 0.1 µm) is such that gauge blocks can be wrung on top of each other without causing a significant additional uncertainty in length.¹ This is due to the
¹ Wringing is the process of attaching two flat surfaces together by a sliding action [6].
definition of a gauge block, which states that the length is defined as the distance from the measurement (reference) point on the top surface to the plane of a platen (a flat plate) adjacent to the wrung gauge block [5]. This platen should be manufactured from the same material as the gauge block and have the same surface properties (surface roughness and refractive index). Figure 4.1 is a schematic, and Figure 4.2 is a photograph, of a gauge block wrung to a platen. The definition of the length of a gauge block enables the possibility of relating the length to optical wavelengths by interferometry. Also, there is no additional uncertainty due to the wringing as the auxiliary platen could be replaced by another gauge block, where the wringing would have the same effect as the wringing to the platen, which is included in the length definition. Gauge blocks are classified into accuracy classes. The less accurate classes are intended to be used in the workshop. Using mechanical comparators, these gauge blocks can be compared to reference gauge blocks that are related to wavelengths using gauge block interferometers. Table 4.1 gives the tolerances for gauge block classes K, 0, 1 and 2 according to ISO 3650 [5]. For those to be calibrated by interferometry (class K) the absolute length is not so critical as this length is explicitly measured. However, the demands on parallelism needed for good wringing, and an accurate length definition, are highest. ISO 3650 gives the basis of demands, tolerances and definitions related to gauge blocks. The method of gauge block calibration by interferometry is a basic example of how the bridge between the metre definition by wavelength and a material reference artefact can be made. It will be the main subject of the rest of this chapter.
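To illustrate how a small set of end standards can be combined to make up an arbitrary nominal length, the sketch below follows the usual workshop approach of fixing the smallest decimal places first; the block sizes used are a simplified, invented subset and do not correspond to a standardized series.

    def select_blocks(target_mm):
        """Pick blocks from a simplified metric set: fix the 0.001 mm digit, then the
        0.01 mm digit, then the 0.1 mm digit, then make up the remainder with whole blocks."""
        chosen = []
        remaining = round(target_mm, 3)
        # Blocks of 1.001-1.009 mm fix the thousandths digit, 1.01-1.09 mm the hundredths, etc.
        for step in (0.001, 0.01, 0.1):
            digit = round(remaining / step) % 10
            if digit:
                block = round(1 + digit * step, 3)
                chosen.append(block)
                remaining = round(remaining - block, 3)
        # The remainder is now a whole number of millimetres.
        for block in (100, 50, 25, 10, 5, 2, 1):
            while remaining >= block:
                chosen.append(float(block))
                remaining = round(remaining - block, 3)
        return chosen

    print(select_blocks(87.885))   # e.g. [1.005, 1.08, 1.8, 50.0, 25.0, 5.0, 2.0, 2.0]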
FIGURE 4.1 Definition of the length of a gauge block.
FIGURE 4.2 A typical gauge block wrung to a platen.
Table 4.1  Gauge block classes according to ISO 3650 [5]

Class    Tolerance on length, L         Tolerance on parallelism for length, L
K        0.20 µm + 4 × 10⁻⁶ L           0.05 µm + 2 × 10⁻⁷ L
0        0.12 µm + 2 × 10⁻⁶ L           0.10 µm + 3 × 10⁻⁷ L
1        0.20 µm + 4 × 10⁻⁶ L           0.16 µm + 5 × 10⁻⁷ L
2        0.45 µm + 8 × 10⁻⁶ L           0.30 µm + 7 × 10⁻⁷ L
4.3 Introduction to interferometry

4.3.1 Light as a wave

This chapter will introduce the aspects of optics that are required to understand interferometry. For a more thorough treatment of optics the reader is referred to [7].
For the treatment of light we will restrict ourselves to electromagnetic waves of optical frequencies, usually called ‘visible light’. From Maxwell's equations it follows that the electric field of a plane wave, with speed, c, frequency, f, and wavelength, λ, travelling in the z-direction, is given by

E(z, t) = \begin{pmatrix} E_x \\ E_y \end{pmatrix} e^{i(kz - \omega t)}    (4.1)

where ω = 2πf = 2πc/λ is the circular frequency and k is the circular wavenumber, k = 2π/λ. Here we use the convention that a measurable quantity, for example the amplitude, E_x, can be obtained by taking the real part of equation (4.1) and we assume that E_y = 0, i.e. the light is linearly polarized in the x direction. At the location z = 0, the electric field E = E_x cos ωt. This means that the momentary electric field oscillates with a frequency f. For visible light, for example green light (λ = 500 nm), this gives, with the speed of light defined as c = 299 792 458 m·s⁻¹, a frequency of f = 6 × 10¹⁴ Hz. No electric circuit can directly follow such a high frequency; therefore light properties are generally measured by averaging the cosine function over time. The intensity is given by the square of the amplitude, thus

I(z) = \langle E \cdot E \rangle = (E_x^2) \langle \cos^2 \omega t \rangle.    (4.2)

A distortion at t = 0, z = 0, for example of the amplitude E_x in equation (4.1), will be the same as at time, t, at location z = ωt/k = ct, so the propagation velocity is indeed c. In equation (4.1), the amplitudes E_x and E_y can both be complex. In that general case we speak of elliptical polarization; the E-vector describes an ellipse in space. If E_x and E_y are both real, the light is called linearly polarized. Another special case is when E_y = iE_x, in which case the vector describes a circle in space; for that reason this case is called circular polarization. When light beams from different sources, or from the same source but via different paths, act on the same location, their electric fields can be added. This is called the principle of superposition, and causes interference. Visible, stable interference can appear when the wavelengths are the same and there is a determined phase relationship between the superimposed waves. If the wavelengths are not the same, or the phase relationship is not constant, the effect is called beating, which means that the intensity may vary with a certain frequency. A fixed phase relationship can be achieved by splitting light, coming from one source, into two beams and recombining the light again. An instrument that accomplishes this is called an interferometer. An example of an interferometer is shown in Figure 4.3.
FIGURE 4.3 Amplitude division in a Michelson/Twyman-Green interferometer where S is the source, A and B are lenses to collimate and focus the light respectively, C is a beam-splitter, D is a detector and M1 and M2 are plane mirrors.
Consider the fields E₁(t) and E₂(t) in the interferometer in Figure 4.3 which travel paths to and from M1 and M2 respectively and combine at the detector, D. According to the principle of superposition we can write

E(t) = E_1(t) + E_2(t).    (4.3)

Combining equations (4.1), (4.2) and (4.3), with some additional assumptions, gives finally

I = I_1 + I_2 + 2\sqrt{I_1 I_2} \cos\frac{4\pi \Delta L}{\lambda}    (4.4)

where ΔL is the path difference between the two beams and I are intensities, i.e. the squares of the amplitudes. Equation (4.4) is the essential equation of interference. Depending on the term 4πΔL/λ, the resultant intensity on a detector can have a minimum or a maximum, and it depends with a (co)sine function on the path difference or the wavelength. From equation (4.4) it is evident that the intensity has maxima for 4πΔL/λ = 2pπ, with p = 0, 1, 2, …, so that ΔL = pλ/2 and minima for ΔL = (p + 0.5)λ/2.
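A short numerical sketch of equation (4.4) (the intensities and path differences are arbitrary example values, not from the text) showing how the detected intensity moves from a maximum through a minimum as the path difference grows by a quarter of a wavelength:

    import math

    def fringe_intensity(I1, I2, path_difference, wavelength):
        """Two-beam interference intensity, equation (4.4)."""
        return I1 + I2 + 2.0 * math.sqrt(I1 * I2) * math.cos(4.0 * math.pi * path_difference / wavelength)

    wavelength = 633e-9                       # red He-Ne wavelength, as an example
    for dL in (0.0, wavelength / 8.0, wavelength / 4.0):
        print(f"dL = {dL * 1e9:5.1f} nm  I = {fringe_intensity(0.5, 0.5, dL, wavelength):.2f}")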
4.3.2 Beat measurement when ω₁ ≠ ω₂

If either E₁ or E₂ are shifted in frequency, or if E₁ and E₂ originate from sources with a different frequency, we can write, analogous to equation (4.4),

I = I_1 + I_2 + 2\sqrt{I_1 I_2} \cos\left(\frac{4\pi L}{\lambda_2} + (\omega_2 - \omega_1)t\right).    (4.5)

We obtain an interference signal that oscillates with the difference frequency, which can readily be measured by a photodetector if ω₁ and ω₂ are not significantly different.
4.3.3 Visibility and contrast

If the intensities I1 and I2 are equal, equation (4.4) reduces to

I = 2I1 (1 + cos(4πΔL/λ)) = 4I1 cos²(2πΔL/λ).     (4.6)
This means that the minimum intensity is zero and the maximum intensity is 4I1. Also it is clear that if I1 or I2 is zero, the interference term in equation (4.4) vanishes and a constant intensity remains. The relative visibility, V, of the interference can be defined as

V = (Imax − Imin)/(Imax + Imin) = 2√(I1 I2)/(I1 + I2).     (4.7)

The effect of visibility is illustrated in Figure 4.4, for the cases I1 = I2 = 0.5 (V = 1); I1 = 0.95, I2 = 0.05 (V = 0.44) and I1 = 0.995, I2 = 0.005 (V = 0.07). Figure 4.4 illustrates that, even with very different intensities of the two beams, the fringes can still be easily distinguished. Also note that increasing a single intensity whilst leaving the other constant diminishes the contrast but increases the absolute modulation depth.
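As a quick check of equation (4.7), the short sketch below (values assumed for illustration) evaluates the visibility for balanced and unbalanced beam intensities:

    import math

    def visibility(I1, I2):
        """Fringe visibility of two-beam interference, equation (4.7)."""
        return 2 * math.sqrt(I1 * I2) / (I1 + I2)

    print(visibility(0.5, 0.5))      # 1.0, equal intensities give full contrast
    print(visibility(0.95, 0.05))    # approximately 0.44, fringes still clearly visible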
FIGURE 4.4 Intensity as a function of phase for different visibility.
FIGURE 4.5 Intensity distribution for a real light source.
4.3.4 White light interference and coherence length

Equation (4.4) suggests that the interference term will continue to oscillate up to infinite ΔL. However, there is no light source that emits a single wavelength λ; in fact every light source has a finite bandwidth, Δλ. Figure 4.5 shows the general case; if Δλ/λ < 0.01 we can speak of a monochromatic light source. However, for interferometry over a macroscopic distance, light sources with a very small bandwidth are needed. From equation (4.4) it is evident that an interference maximum appears for ΔL = 0, independent of the wavelength, λ. This phenomenon is called white light interference. If the light source emits a range of wavelengths, in fact for each wavelength a different interference pattern is formed and, as the photodetector measures the sum of all of these patterns, the visibility, V, may deteriorate with increasing path difference, ΔL. In Figure 4.6 the effect of a limited coherence length is illustrated for a number of different light sources:
1. a white light source with the wavelength uniformly distributed over the visible spectrum, i.e. between λ = 350 nm and λ = 700 nm;
2. a green light source with the bandwidth uniformly distributed between λ = 500 nm and λ = 550 nm;
3. a monochromatic light source with λ = 525 nm.
Note that for each wavelength (colour) a different pattern is formed. In practical white light interferometry these colours can be visibly distinguished over a few wavelengths. White light interference is only possible in interferometers where the path difference can be made approximately zero.
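The loss of visibility with increasing path difference can be simulated by summing monochromatic interference patterns over the source bandwidth. The sketch below is a simplified illustration, assuming uniform spectra with equal unit intensities for the three sources listed above; it shows that the summed fringe signal loses contrast for the broadband sources while remaining fully modulated for the monochromatic one:

    import numpy as np

    def summed_intensity(delta_L, lambda_min, lambda_max, n_lines=200):
        """Average of equation (4.4) patterns over a uniform spectral band (unit intensities)."""
        wavelengths = np.linspace(lambda_min, lambda_max, n_lines)
        return np.mean(2 + 2 * np.cos(4 * np.pi * delta_L / wavelengths))

    for dL in (0.0, 1e-6, 5e-6):                           # path differences in metres
        white = summed_intensity(dL, 350e-9, 700e-9)       # broadband white source
        green = summed_intensity(dL, 500e-9, 550e-9)       # green source, 50 nm bandwidth
        mono = 2 + 2 * np.cos(4 * np.pi * dL / 525e-9)     # single wavelength
        print(f"dL = {dL:g} m: white {white:.2f}, green {green:.2f}, mono {mono:.2f}")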
FIGURE 4.6 Illustration of the effect of a limited coherence length for different sources.
The path length, ΔL, over which the interference remains visible, i.e. the visibility decreases by less than 50 %, is called the coherence length and is given by

ΔL = λ0²/Δλ = Qλ0     (4.8)

where λ0 is the wavelength of the light source and Q = λ0/Δλ is the quality factor, which determines over how many wavelengths interference is easily visible. Table 4.2 gives a few characteristics of known light sources. In the early twentieth century, the cadmium spectral lamp was used for interference over macroscopic distances. Michelson's determination of the cadmium lamp wavelength relative to the metre standard was a breakthrough towards a metre definition based on physical constants. The orange-red line of the ⁸⁶Kr spectral lamp was used as the metre definition from 1960 until 1983. This definition was possible because, with some effort, interference over a metre of path difference could be achieved, so that lengths up to one metre could be measured using interferometry.
Table 4.2 The quality factor and coherence length of some light sources

Light source                    Q           ΔL/m         λ0/nm   Colour
Bulb                            1.8         0.8 × 10⁻⁶   525     white
Hg lamp                         1800        1 × 10⁻³     546     green
Cd lamp                         3.1 × 10⁵   0.2          644     red
⁸⁶Kr lamp                       1.4 × 10⁶   0.8          606     orange-red
He-Ne laser (multiple mode)     8 × 10⁴     0.05         633     red
He-Ne laser (single mode)       10⁸         60           633     red
4.4 Interferometer designs

For precision measurements, many interferometer types are used. Importantly, the principles outlined in section 4.3 are valid for almost all of these interferometer types.
4.4.1 The Michelson and Twyman-Green interferometer

Although Michelson was a major pioneer in interferometry and carried out experiments that achieved major breakthroughs in physics, one often refers to a Michelson interferometer where in fact a Twyman-Green interferometer is intended. The original Michelson interferometer does not operate with collimated light, but with a point source, S, as shown in Figure 4.7. A beam-splitter, A, with a 50 % coating splits the input beam. The interference fringes are detected from B. The compensator, C, is a glass plate with the same thickness as A, which makes the optical path length through glass equal for both beams. This ensures that chromatic effects in glass plate A are compensated and white light interferometry is possible. Optically, the system as viewed from B consists of two images of the source, formed by M1 and M2, located behind each other. If the two image planes, M1 and M2, are parallel, this is equivalent to sources in line behind each other and one detects circular fringes. If M1 and M2 intersect, the crossover is the position of zero path difference and, as this region is a straight line of intersection, white light fringes will appear on the straight line of the intersection. The fringes appear to be localised at the front mirror, M1, i.e. the detector must be focused on this
FIGURE 4.7 Schema of the original Michelson interferometer.
surface in order to obtain the sharpest fringes. With increasing displacement the fringes become spherical because of the divergent light source.
4.4.1.1 The Twyman-Green modification

In the Twyman-Green modification to the Michelson interferometer, the source is replaced by a point source, S, at the focus of a well-corrected collimating lens (see Figure 4.8). The lens B collects the emerging light and the detector observes the interference pattern at the focal plane, D. Consider the case where the mirror and its image are parallel. Now the collimated beam from the point source leads to a field of uniform intensity. Variations of this interferometer are the Kösters gauge block interferometer [8], displacement measuring interferometers (see section 5.2) and the Linnik- and Mirau-type interference microscopes (see section 6.7.3.2). An important characteristic of the Twyman-Green interferometer is that the paths in both beams can be made equal so that white light interference occurs. A disadvantage is that both beams have a macroscopic path length and can be sensitive to turbulence and vibration. The reflectivity of both mirrors can be up to 100 %. If the reflectivity of the mirrors is different, the visibility decreases, as is illustrated in Figure 4.4. In the interferogram, the difference between the two mirrors is observed. For example, if both mirrors are slightly convex, and one mirror is slightly tilted, the interferogram will consist of straight lines (the same as with perfectly flat mirrors).
FIGURE 4.8 Schema of a Twyman-Green interferometer.
4.4.2 The Fizeau interferometer

In Fizeau interferometry, the reference surface and the surface to be measured are brought close together. Compared to Figure 4.3, mirror M1 is transparent and partially reflecting, and the partially reflecting side is positioned close and almost parallel to mirror M2. This gives a configuration as shown in Figure 4.9. For a wedge angle, α, and perfectly flat mirrors, the intensity of the interference pattern between the mirrors is given by

I(x) = I1 + I2 + 2√(I1 I2) cos[2k(ΔL + xα)]     (4.9)

where x is the position of the interference pattern from the left edge of the mirrors. In two dimensions, with circular mirrors, this gives a characteristic interference pattern consisting of straight lines (see Figure 4.10). The Fizeau interferometer gives a direct way of observing geometrical features in an interferogram. If the distance ΔL is increased, the fringes will move from left to right (or right to left). If the tilt angle is changed, the distance between the fringes changes. If either of the mirrors is not flat, this is observed as distortions in the straightness of the fringes. If the interference term in equation (4.9) can be changed in some controlled manner, the phase φ = 2kΔL can be determined by making
FIGURE 4.9 The Fizeau interferometer.
FIGURE 4.10 Typical interference pattern of a flat surface in a Fizeau interferometer.
intensity measurements in one location (x, y). The phase can be changed by a small displacement, ΔL, or by a wavelength change. If ΔL is changed in four steps of λ/8 each, and the intensities are labelled as IA, IB, IC and ID, then it can be shown that

φ(x, y) = arctan[(IB − ID)/(IA − IC)].     (4.10)

This is an example of deriving the phase, and ΔL, by phase stepping. This can only give an estimate of ΔL within an unknown integer number, N, of half wavelengths. Considered over the surface, the distance between the surfaces S1 and S2 can be expressed as

ΔL(x, y) = [N + (φS2(x, y) − φS1(x, y))/2π] λ/2.     (4.11)

If the upper surface deviations of both S1 and S2 are to be considered positive in the glass–air interface direction then, apart from a constant term and a constant tilt, the deviations can be expressed as

S2(x, y) = φ2(x, y)λ/4π and S1(x, y) = φ1(x, y)λ/4π.     (4.12)

In S1 the coordinates can be (x, y) or (−x, y), depending on the definition and the (flipping) orientation of the (optical) surface. However, in a Michelson interferometer for S1 the equivalent of S2 holds. If S1 is perfectly flat, or has a known flatness deviation, the form of the other surface can be derived, either by visually observing the interference pattern or by analysing the phase using equation (4.11). This method of surface interferometry is a research field of its own and is covered in several textbooks (see [9,10]).
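A minimal sketch of the four-step phase-stepping evaluation of equation (4.10) is given below, using synthetic data and an assumed sign convention for the steps; a real instrument would record four camera frames. It recovers the phase at one pixel from four intensity samples taken λ/8 apart, using arctan2 so the full 2π range is resolved:

    import numpy as np

    def phase_from_four_steps(IA, IB, IC, ID):
        """Recover the interference phase from four frames stepped by lambda/8 (pi/2 in phase).

        Uses arctan2 so that the phase is returned over the full (-pi, pi] range;
        equation (4.10) gives the same result up to this quadrant bookkeeping.
        """
        return np.arctan2(IB - ID, IA - IC)

    # Synthetic example: a true phase of 1.0 rad, unit modulation on a constant background.
    # The sign convention of the steps is assumed such that equation (4.10) returns +phase.
    true_phase = 1.0
    steps = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi       # phase removed at each step
    I_A, I_B, I_C, I_D = 2 + np.cos(true_phase - steps)   # four sampled intensities

    print(phase_from_four_steps(I_A, I_B, I_C, I_D))       # approximately 1.0 rad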
Because the lateral resolution is usually limited, this is a form measurement rather than a surface texture measurement. Uncertainties can be in the nanometre region in the direction perpendicular to the surface. Limitations are the roughness and the maximum surface angle that can be measured. For engineered surfaces this method is applicable to polished, precision-turned, lapped and ground surfaces. For such surfaces, Fizeau interferometry is a very powerful tool for obtaining the complete geometry of the surface very rapidly. Some characteristics of Fizeau interferometers should be mentioned, also in comparison to Michelson set-ups:
- white light interference is not possible; one always needs a light source with a coherence length of a few millimetres or more;
- the reference mirror must be partially transmitting, and the back side of this reference mirror should not interfere with its front side. This can be achieved by, for example, an anti-reflection coating or by a wedge;
- if mirror S2 has a reflectivity of around 100 %, it is difficult to achieve good visibility, as the reference mirror must be transmitting;
- the ambiguity of N can be a problem if it varies over the surface in a complicated way (i.e. the fringe pattern is complex and/or noisy). The determination of the proper variation in N over the surface can be complicated; this process is called phase unwrapping;
- as mirror S1 is held upside-down, the interferometer measures the sum of the surface deviations of both surfaces. This enables an absolute flatness calibration when a third flat is used. However, because of the coordinate flipping, the measurements in all three combinations must be combined with additional rotations of one of the flats [11]. In a Michelson set-up an absolute calibration is not possible;
- instead of flats, spheres can be measured and, with some modifications, even parabolas can be measured. This is outside the scope of this book (but see [12]).
4.4.3 The Jamin and Mach-Zehnder interferometers

The Jamin interferometer is depicted in Figure 4.11. The beams are split in A and recombine at D. A first important application of the Jamin interferometer was the measurement of the refractive index of gases (T1 and T2 represent gas cells in Figure 4.11). The Jamin arrangement can also be used to make an image interfere with itself, but slightly displaced, for example by tilting one mirror relative to the other. This is called shearing interferometry.
FIGURE 4.11 Schema of a Jamin interferometer.
A modification of the Jamin arrangement is known as the Mach-Zehnder interferometer and is depicted in Figure 4.12. As in the Michelson interferometer, white light interference is possible and there is no limitation to the reflectance at, for example, points C and F. The Mach-Zehnder interferometer can be used for refractometry, i.e. for measurement of the refractive index of a medium in either arm. It can also be modified in order to enable displacement measurement.
FIGURE 4.12 Schema of a Mach-Zehnder interferometer.
4.4.4 The Fabry-Pérot interferometer

If in the Fizeau interferometer in Figure 4.9 both mirrors are placed almost parallel and the reflectance of both mirrors is increased, a particular type of interferometer is obtained, called the Fabry-Pérot interferometer (see Figure 4.13). Light enters from the left, and B and B′ are the reflecting faces between which the interference occurs. P and P′ are spacers to keep flats B and B′ as parallel as possible. Between B and B′ multiple reflections occur. Equation (4.4) no longer holds if the reflectance, R, of both plates becomes significantly large, for example, R > 0.1. Summation of all reflected and transmitted components leads to an infinite series, and the resulting transmittance can be expressed as

T = 1 / [1 + F sin²(2πL/λ)]     (4.13)

where F is defined as

F = 4R/(1 − R)².     (4.14)

The reflectance of the whole system is given by 1 − T, where T is given by equation (4.13) and where it is assumed that no absorption takes place. The transmittance as a function of the distance, L, between the plates, for a wavelength λ = 600 nm, is shown in Figure 4.14. Figure 4.14 shows (co)sine-like behaviour similar to that described in equation (4.4) for low reflectances, but for high reflectance of the mirrors there are sharp transmittance peaks. This has the disadvantage that in between the peaks the position is hard to estimate, but it has the advantage
FIGURE 4.13 Schematic of the Fabry-Pérot interferometer.
FIGURE 4.14 Transmittance as a function of distance, L, for various reflectances.
that once a transmittance peak is reached, one is very sure that a displacement of exactly an integer number of half wavelengths has taken place. The reciprocal of the full width of a fringe at half of the maximum intensity, expressed as a fraction of the distance between two maxima, is given by

NR = π√R/(1 − R) = (π/2)√F.     (4.15)

The term NR is called the finesse of the interferometer. For example, for R = 0.9, NR ≈ 30. This means that 1/30th of a half wavelength can readily be resolved by this interferometer; compare this to half of a half wavelength using the same criterion for the cosine function in equation (4.4). At a fixed distance, L, the possible frequencies that fit in the cavity can be calculated as follows

L = mλ/2 = mc/(2nf)  ⟹  fm = mc/(2nL)  ⟹  Δf = fm+1 − fm = c/(2nL)     (4.16)

where m = 0, 1, 2, ... and n is the air refractive index, which is approximately 1. The frequency difference between two successive possible frequencies is called the free spectral range. For example, for a cavity length L = 100 mm, Δf = 1.5 GHz. Clearly, in a Fabry-Pérot interferometer white light interferometry is not possible. The interferometer can also be made with spherical mirrors. In this case the equation for the finesse changes somewhat. This and other details of the Fabry-Pérot interferometer are extensively treated in [13]. Fabry-Pérot interferometers have many applications in spectroscopy. However, in engineering nanometrology they are used as the cavity in lasers and they can be used to generate very small, very well-defined displacements, either as part of a laser (the so-called 'measuring laser') or as an external cavity. This is treated in more detail in section 5.7.1.2.
4.5 Gauge block interferometry

4.5.1 Gauge blocks and interferometry

As discussed in section 4.2 the length of a gauge block wrung to a platen can be measured using interferometry. The ISO definition of a gauge block length has a two-fold purpose: (1) to ensure that the length can be measured by interferometry, and (2) to ensure that there is no additional length due to wringing. An issue that is not obvious from the definition is whether the two-sided length of a gauge block after calibration by interferometry coincides with the mechanical length, for example as measured by mechanical probes approaching from both sides. Up to now no discrepancies have been found that exceed the measurement uncertainty, which is in the 10 nm to 20 nm range. Figure 4.15 shows a possible definition of a mechanical gauge block length. A gauge block of length L is probed from both sides with a perfectly round probe of diameter d, typically a few millimetres. The mechanical gauge block length, L, is the probe displacement, D, in the limit of zero force, minus the probe diameter, or L = D − d.
4.5.2 Gauge block interferometry

In order to measure gauge blocks in an interferometer, a first requirement for the light source is to have a coherence length that exceeds the gauge block length. Gauge block interferometers can be designed as a Twyman-Green or a Fizeau configuration, where the former is more common. For the majority of the issues discussed in this section either configuration can be considered. Figure 4.16 is a schema of a gauge block interferometer containing a gauge block. The observer sees the fringe pattern that comes from the platen as shown in Figure 4.10. If the platen has a small tilt this will be a set of straight,
FIGURE 4.15 Possible definition of a mechanical gauge block length.
FIGURE 4.16 Schema of a gauge block interferometer containing a gauge block.
parallel interference fringes. However, at the location of the gauge block, a parallel plate can also be observed, but the fringe pattern may be displaced (see Figure 4.17). If the fringes are not distorted, then an integer number of half wavelengths will fit in the length of the gauge block. In general this will not be the case, and the shift of the fringes gives the fractional length of the gauge block. The length of the gauge block is given by

L = {N + ([φblock(top) − φref(top area)] − [φplaten(base) − φref(base area)])/2π} λ/(2n(λ)) = (N + f) λ/(2n(λ))     (4.17)

where N is the number of half wavelengths between the gauge block top and the position on the platen for wavelength λ, n is the air refractive index and f is the fraction f = a/b in Figure 4.17. φblock(top) is the phase on top of the gauge block, φref(top area) is the phase at the reference plate at the location of the top area, φplaten(base) is the phase on the platen next to the gauge block and φref(base area) is the phase at the reference plate at the
FIGURE 4.17 Theoretical interference pattern of a gauge block on a platen.
location next to the image of the gauge block. For a flat reference surface, the phase for the areas corresponding to the base and the top of the gauge block is the same (φref(top area) = φref(base area)) and equation (4.17) simplifies accordingly. Equation (4.17) is the basic equation that links an electromagnetic wavelength, λ, to a physical, mechanical length, L. Some practical issues that are met when applying equation (4.17) are treated in the next sections.
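As an illustration of the simplified form of equation (4.17), the sketch below converts a fringe order and a measured fraction into a length. The numbers are purely illustrative assumptions; a real calibration would also apply the corrections discussed later in this section.

    def gauge_block_length(N, f, vacuum_wavelength, n_air=1.00027):
        """Length from fringe order N and fringe fraction f, simplified equation (4.17)."""
        return (N + f) * vacuum_wavelength / (2 * n_air)

    # Example: a nominally 10 mm gauge block measured with a red He-Ne laser
    lam = 632.9908e-9      # assumed calibrated vacuum wavelength in metres
    N = 31605              # integer number of half wavelengths (assumed known, see section 4.5.3.2)
    f = 0.50               # measured fringe fraction (assumed)

    print(gauge_block_length(N, f, lam))   # length in metres, approximately 0.010 m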
4.5.3 Operation of a gauge block interferometer

4.5.3.1 Fringe fraction measurement – phase stepping

As indicated in Figure 4.17, the fringe fraction can be estimated visually. For this purpose, some fiducial dots or lines can be applied to the reference mirror as a visual aid. Experienced observers can obtain an accuracy of 5 %, corresponding to approximately 15 nm. However, more objective and accurate methods for determining the fringe fraction are possible by phase shifting; this means that the optical distance of either the reference mirror or the platen–gauge block combination is changed in a controlled way [14]. Established methods for phase shifting include:
- displacing the reference mirror or the gauge block with platen using piezoelectric displacement actuators;
- positioning an optical parallel in the beam; giving the optical parallel a small rotation generates a small, controllable phase shift.
With the possibility of shifting the phase, the fraction can be derived in a semi-manual way. For example, the fringes on the platen can be adjusted to a reference line, then those on the gauge block, and then the next fringe on the platen can be adjusted to this reference line. Reading the actuator signal or
a rotary position of the optical parallel at these three settings gives the possibility of deriving a fringe fraction, f. Recording complete images and applying equation (4.9) is probably the most objective and accurate method to determine f. This is similar to Fizeau interferometry, although in this case it is usually done with multiple spectral or laser lines in a Michelson configuration.
4.5.3.2 Multiple wavelength interferometry analysis

If just a single fraction of a single wavelength is known, the gauge block length must be known beforehand within an uncertainty of 0.15 µm in order to define N in equation (4.17) within one integer unit. For gauge blocks to be calibrated this level of prior knowledge is usually not the case – see Table 4.1 – and it is common practice to solve this problem by using multiple wavelengths. In the original gauge block interferometers this was usually possible as spectral lamps were used that emitted several lines with an appropriate coherence length. In modern interferometers laser sources are normally used, and the demand for multiple wavelength operation is met with multiple laser sources. For multiple wavelengths, λi (i = 1, 2, ...), equation (4.17) can be rewritten as

L = λ1(N1 + f1)/(2n(λ1)) = λ2(N2 + f2)/(2n(λ2)) = λ3(N3 + f3)/(2n(λ3)) = ...     (4.18)

However, because of a limited uncertainty in the fringe fraction determinations there is not a single length that can meet the requirements of equation (4.18) for all wavelengths and fractions. There are several strategies for finding an optimal solution for the length; for example, for the longest wavelength a set of possible solutions around the nominal length can be taken and for each of these lengths the closest solution for the possible lengths for the measurements at the other wavelengths can be calculated. The average of the set of lengths with the least dispersion is then taken as the final value. This method is known as the method of exact fractions and has similarities with reading a vernier on a ruler. More generally, equation (4.18) can be written as a least-squares problem. The error function is given by

χ² = Σᵢ₌₁ᴷ [λi(Ni + fi)/(2n(λi)) − Le]²     (4.19)

where K is the number of wavelengths used and Le is the estimated length. For ideal measurements, χ² = 0 for Le = L. For real measurements, the best
estimate for L is the value of Le where χ² is minimal. For any length, Le, first the value of Ni that gives a solution closest to Le has to be calculated for each wavelength before calculating χ². As equation (4.19) has many local minima (every 0.3 µm), it must be solved by a broad search around the nominal value, for example as was implicitly done in the procedure described above. To distinguish between two adjacent solutions the fringe fractions must be determined accurately enough. This demand is higher if the wavelengths are closer together. For example, for wavelengths λ1 = 633 nm (red) and λ2 = 543 nm (green), the fractions must be determined within 15 % in order to ensure that a solution that is 0.3 µm in error is not found. Multiple wavelengths still give periodic solutions where χ² is minimal, but instead of 0.3 µm these are further apart; in the example of the two wavelengths just given, this period becomes 2.5 µm. If two wavelengths are closer together, the demand on the accuracy of the fringe fraction determination is increased accordingly, and the period between solutions increases. Using more than two wavelengths further increases the period of the solutions; the wavelength range determines the demand on the accuracy of the fraction determination. A common strategy for obtaining an approximate value for Le, with an uncertainty at least within the larger periodicity, is to carry out a mechanical comparison with a calibrated gauge block.
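A minimal sketch of the search described above is given below. It is illustrative only: the wavelengths, refractive index, search range and measured fractions are assumed values. It scans candidate lengths around the nominal value and, for each candidate, picks the nearest integer order for every wavelength before evaluating the χ² of equation (4.19):

    import numpy as np

    def exact_fractions(nominal, wavelengths, fractions, n_air=1.00027,
                        search=2e-6, step=5e-9):
        """Return the candidate length that minimises the chi-squared of equation (4.19)."""
        half_waves = np.array(wavelengths) / (2 * n_air)       # lambda_i / (2 n)
        fractions = np.array(fractions)
        best_L, best_chi2 = None, np.inf
        for Le in np.arange(nominal - search, nominal + search, step):
            N = np.round(Le / half_waves - fractions)           # nearest integer orders
            lengths = (N + fractions) * half_waves              # per-wavelength lengths
            chi2 = np.sum((lengths - Le) ** 2)
            if chi2 < best_chi2:
                best_L, best_chi2 = np.mean(lengths), chi2      # average of the best set
        return best_L

    # Assumed example: two laser lines and measured fractions for a ~10 mm gauge block
    L = exact_fractions(10.0e-3, [632.99e-9, 543.52e-9], [0.42, 0.17])
    print(L)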
4.5.3.3 Vacuum wavelength

The uncertainty in the length of a gauge block measured by interferometry directly depends on the accuracy of the determination of the vacuum wavelength. In the case of spectral lamps these are more or less natural constants, and the lines of krypton and cadmium are (still) even defined as primary standards. Stabilised lasers must be calibrated using a beat measurement against a primary standard, as described in section 2.9.5. When using multiple wavelengths, especially for larger lengths up to 1 m, a small deviation of the vacuum wavelength can cause large errors because a solution is found one or more fringe numbers in error. For example, for a 1 m gauge block an error of 4 × 10⁻⁸ in wavelength will result in the wrong calculated value for N such that the error is 3 × 10⁻⁷ (one fringe in a metre). This limits the maximum length that can be determined, depending on the accuracy of the wavelengths.
4.5.3.4 Thermal effects

The reference temperature for gauge block measurements is defined in the specification standard ISO 1 [15] to be 20 °C, exactly. The reason that it is necessary to specify a temperature is that all gauge blocks will change size when their temperature changes, due to thermal expansion. The amount by
which the material changes length per degree of temperature change is the coefficient of thermal expansion, α. For a typical steel gauge block, the coefficient of thermal expansion is about 11.5 × 10⁻⁶ K⁻¹ and for a tungsten carbide gauge block it is nearer 4.23 × 10⁻⁶ K⁻¹. In order to correct for the change in length due to thermal expansion, it is necessary to measure the temperature of the gauge block at the same time as the length is being measured. The correction can be derived from

L(T) = L(20)(1 + α[T − 20])     (4.20)

where L(T) is the length at temperature, T (in degrees Celsius), and L(20) is the length at 20 °C. Equation (4.20) indicates that an accurate temperature measurement is more critical when α is large and that knowledge of α is more critical if the temperature deviates from 20 °C. For example, for a one part per million per degree Celsius error, Δα, in the expansion coefficient, the error is 100 nm for a 100 mm gauge block at 21 °C. For α = 10 × 10⁻⁶ K⁻¹, a 0.1 °C uncertainty in the temperature gives 100 nm uncertainty in a 100 mm gauge block.
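A small sketch of the thermal correction of equation (4.20), using assumed values, reproduces the sensitivities quoted above:

    def length_at_20(measured_length, temperature, alpha):
        """Reduce a length measured at 'temperature' (deg C) to 20 deg C, inverting equation (4.20)."""
        return measured_length / (1 + alpha * (temperature - 20.0))

    # Steel gauge block, nominally 100 mm, measured at 20.5 deg C (assumed values)
    alpha_steel = 11.5e-6                  # coefficient of thermal expansion per kelvin
    print(length_at_20(0.100000575, 20.5, alpha_steel))   # back to approximately 0.100 m

    # A 0.1 deg C temperature error on a 100 mm block with alpha = 10e-6/K gives about 100 nm
    print(0.1 * 10e-6 * 0.1)               # 1e-7 m = 100 nm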
4.5.3.5 Refractive index measurement

The actual wavelength depends on the frequency and the refractive index of the air in the path adjacent to the gauge block. In very accurate interferometers, for long gauge blocks, the refractive index is measured directly by a refractometer that may effectively be described as a transparent gauge block containing a vacuum. The refractive index of air is directly related to the air density, which itself is influenced by:
- air temperature;
- air pressure;
- air humidity;
- other gases in the air (for example, carbon dioxide).
The last of these influences, other gases, has a negligible effect and can usually be ignored. So we need to measure the air temperature, air pressure and humidity. We then use well-known equations to calculate the air refractive index from these measured parameters. These equations go by several names, depending on the exact equations used, and are known by the names of the scientists who derived them. Examples include Edlén [16], Birch and Downs (also known as the modified
Table 4.3 Effect of parameters on refractive index: RH is relative humidity

Effect             Sensitivity          Variation needed for a change of 10 nm in 100 mm
Air pressure       2.7 × 10⁻⁷ L/mbar    0.37 mbar
Air temperature    9.3 × 10⁻⁷ L/°C      0.11 °C
Air humidity       1.0 × 10⁻⁸ L/%RH     10 % RH
Wavelength         2.0 × 10⁻⁸ L/nm      –
Edlén equation) [17,18], Ciddor [19] and Bönsch [20]. NIST has published all the equations and considerations on their website, including an on-line calculator: see emtoolbox.nist.gov/Wavelength/Abstract.asp. It may be useful to note the sensitivity of the refractive index to these various parameters, as shown in Table 4.3. From Table 4.3 it can be seen that if one wishes to reduce the contribution of these potential error sources to below 1 × 10⁻⁷ L (i.e. 10 nm in 100 mm length) then one needs to make the air pressure measurement with an uncertainty below 0.4 mbar, the air temperature measurement to better than 0.1 °C and the air humidity measurement to better than 10 % RH (relative humidity). Such measurements are not trivial, but are well achievable with commercial instruments. The wavelength also needs to be known accurately enough – within small fractions of a nanometre (it is mentioned here for completeness, as the refractive index is also wavelength-dependent).
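The sensitivities in Table 4.3 can be combined into a quick, linearised estimate of the length error caused by errors in the measured air parameters. The sketch below uses the table values and treats magnitudes only; it is not a replacement for the full Edlén or Ciddor equations:

    def refractive_index_length_error(L, dP_mbar=0.0, dT_degC=0.0, dRH_percent=0.0):
        """Linearised length error from air-parameter errors, using the Table 4.3 sensitivities."""
        dn = (2.7e-7 * dP_mbar          # pressure sensitivity per mbar
              + 9.3e-7 * dT_degC        # temperature sensitivity per deg C
              + 1.0e-8 * dRH_percent)   # humidity sensitivity per %RH
        return L * dn

    # A 0.37 mbar pressure error on a 100 mm gauge block gives about 10 nm
    print(refractive_index_length_error(0.100, dP_mbar=0.37))   # approximately 1e-8 m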
4.5.3.6 Aperture correction

A subtle optical effect that is less obvious than the previous uncertainty influences is the so-called aperture correction. The figures in section 4.4 show the light sources as point sources, but in reality a light source has a finite aperture. This means that light does not strike the gauge block and reference plane only exactly perpendicularly, but also at small angles. This makes the gauge block appear shorter than it really is. The correction for this effect for a circular aperture is given by

ΔL = LD²/(16f²)     (4.21)

where D is the aperture diameter and f is the focal length of the collimating lens. Taking some typical numbers, D = 0.5 mm, f = 200 mm, L = 100 mm, we find ΔL = 0.04 µm. This correction is much larger in the case of an interference microscope, where it may amount to up to 10 % of the measured height (see section 6.7.1).
In some interferometer designs there is a small angle between the impinging light and the observed light that gives rise to a similar correction known as the obliquity correction.
4.5.3.7 Surface and phase change effects

As indicated in the gauge block definition, the gauge block has to be wrung on to a platen having the same material and surface roughness properties as the gauge block. In practice this can be approached, but never guaranteed. Sometimes glass or quartz is preferred as a platen because the wringing condition can be checked through the platen. Because of the complex refractive index of metals, light effectively penetrates into the material before being reflected, so a metal gauge block on a glass platen will be measured as too short. Additional to this is the gauge block roughness effect. Typical total correction values are 0.05 µm for a steel gauge block on a glass platen and 0.01 µm for a tungsten carbide gauge block on a glass platen. A very practical way of determining the surface effects is to wring a stack of two (or more) gauge blocks together on a platen and compare the length of the stack with the sum of the individually measured gauge blocks. This is illustrated for two gauge blocks in Figure 4.18, where g and p are the apparent displacements of the optical surface from the mechanical surface, f is the wringing film thickness and Li are the defined (mechanical) lengths of the individual gauges. It can be shown that the measured length of the combined stack minus the sum of the individually measured lengths is the correction per gauge. This method can be extended to multiple gauge blocks to reduce the uncertainties. Here it is assumed that the gauge blocks are of the same material and have nominally the same surface texture. Other methods for measuring corrections for surface effects of the gauge block and platen have been proposed and are used in some NMIs (see for example [21]). Such methods can offer a slightly reduced uncertainty for the
FIGURE 4.18 Method for determining a surface and phase change correction.
phase correction, but are often difficult to set up and can give results that may be difficult to interpret.
4.5.4 Sources of error in gauge block interferometry

In this section, some more detailed considerations are given on the errors generated by the different factors mentioned in section 4.5.3 (and see [22] for a more thorough treatment).
4.5.4.1 Fringe fraction determination uncertainty

The accuracy of the fringe fraction determination is governed by the repeatability of the measurement process, the quality of the gauge block, and the flatness and parallelism of the end faces. With visual fringe fraction determination, an uncertainty of 5 %, corresponding to approximately 15 nm, is considered as a limit. With photoelectric determination this limit can be reduced to a few nanometres; however, the reproducibility of the wringing process is of the same order of magnitude.
4.5.4.2 Multi-wavelength interferometry uncertainty

As previously mentioned, the determination of the correct interference order is the main issue when using multiple wavelength interferometry. For this purpose it is absolutely necessary that the fringe fractions are determined to within 10 % to 15 %. If the fringe fractions are less certain than this, the measurement becomes meaningless. Also, a correct pre-determination of the gauge block length, for example by mechanical comparison, is essential if the gauge block is being calibrated for the first time.
4.5.4.3 Vacuum wavelength uncertainty

The uncertainty in the wavelength used is directly reflected in the calculated length, so long as the fringe order is uniquely defined. Stabilised lasers need periodic re-calibration, preferably against a primary standard. If one laser is calibrated, other lasers can be calibrated using gauge blocks – from a known length and a measured fraction, the real wavelength of a light source can be measured. An unknown fringe order now leads to a number of possible wavelengths. By repeating the procedure for different gauge block lengths, a wavelength can be uniquely determined [23].
4.5.4.4 Temperature uncertainty

The temperature measurement is essential, and, if the temperature is different from 20 °C, the expansion coefficient must also be known. Most temperature sensors can be calibrated to low uncertainties – calibration of
a platinum-resistance thermometer to 0.01 °C is not a significant problem for a good calibration laboratory. The problem with temperature measurement of a material is that the temperature of the material must be transferred to the sensor. This depends on thermal conductivity, thermal equilibrium with the environment, self-heating of the sensor and other factors. For this reason long waiting times and multiple sensors attached to longer gauge blocks (L > 100 mm) are common. An approach that was already used in the first gauge block interferometers is to have a larger thermally conductive block near the gauge block. This block is measured with an accurate absolute sensor and the temperature difference between the gauge block and this reference block is determined by a thermocouple. At the lowest uncertainties the uncertainty in the temperature scale itself becomes relevant; for example, when ITS-90 was introduced in 1990 [24], the longest gauge blocks made a small but significant jump in their length.
4.5.4.5 Refractive index uncertainty

If the refractive index is established by indirect measurement of the air parameters, it depends on the determination of these parameters and, in addition, a small uncertainty of typically 2 × 10⁻⁸ in the equation itself must be taken into account. The air temperature measurement may be the most problematic because of possible self-heating of the sensors that measure the air temperature. Also, when the air temperature is different from the gauge block temperature, it is questionable exactly which air temperature near the gauge block is being measured.
4.5.4.6 Aperture correction uncertainty

As the aperture correction is usually small, an error in this correction does not necessarily have dramatic consequences. If possible, the aperture can be enlarged or reduced to check whether the estimate is reasonable. The same applies for the obliquity effect if there is a small angle between the beams.
4.5.4.7 Phase change uncertainty

The phase change correction, once determined, is transferred to many measurements, so it is important to carry out multiple measurements with multiple gauge blocks in order to avoid making a systematic error when correcting large numbers of gauge blocks with the same value. For this determination it is customary to take small gauge blocks that can be wrung well (for example, 5 mm) so that length-dependent effects (refractive index, temperature) are minimal, and the fringe fraction determination and wringing repeatability are the determining factors.
4.5.4.8 Cosine error

Cosine error is mainly mentioned here as an illustration of how closely the Abbe principle is followed by gauge block interferometry (see section 5.2.8.3 for a description of the cosine error). The gauge block has to be slightly tilted in order to generate a number of fringes over the surface (with phase stepping this is not required). Even if ten fringes are used over the gauge block length, this gives a cosine error of only 5 × 10⁻⁹ L, well within the effects of common temperature uncertainties.
4.6 References

[1] ISO/TR 14638: 1995 Geometrical product specification (GPS) - Masterplan (International Organization for Standardization)
[2] Hansen H N, Carneiro K, Haitjema H, De Chiffre L 2006 Dimensional micro and nano metrology Ann. CIRP 55 721–743
[3] Flack D R, Hannaford J 2005 Fundamental good practice in dimensional metrology NPL Good practice guide No 80 (National Physical Laboratory)
[4] Doiron T 1995 The gage block handbook (National Institute of Standards and Technology)
[5] ISO 3650: 1998 Geometrical Product Specifications (GPS) - Length standards - Gauge blocks (International Organization for Standardization)
[6] Leach R K, Hart A, Jackson K 1999 Measurement of gauge blocks by interferometry: an investigation into the variability in wringing film thickness NPL Report CLM 3
[7] Born M, Wolf E 1984 Principles of optics (Pergamon Press)
[8] Decker J E, Schödel R, Bönsch G 2003 Next generation Kösters interferometer Proc. SPIE 5190 14–23
[9] Malacara D 1992 Optical shop testing (Wiley)
[10] Gåsvik K J 2002 Optical metrology (Wiley)
[11] Evans C J, Kestner 1996 Test optics error removal Appl. Opt. 35 1015–1021
[12] Malacara D, Servin M, Malacara Z 1998 Interferogram analysis for optical testing (Marcel Dekker)
[13] Vaughan J M 1989 The Fabry-Pérot interferometer (IOP Publishing Ltd: Bristol)
[14] Decker J E, Schödel R, Bönsch G 2004 Considerations for the evaluation of measurement uncertainty in interferometric gauge block calibration applying methods of phase stepping interferometry Metrologia 41 L11–L17
[15] ISO 1: 2002 Geometrical Product Specifications (GPS) - Standard reference temperature for geometrical product specification and verification (International Organization for Standardization)
[16] Edlén B 1966 The refractive index of air Metrologia 2 71–80
[17] Birch K P, Downs M J 1993 An updated Edlén equation for the refractive index of air Metrologia 30 155–162
[18] Birch K P, Downs M J 1993 Correction to the updated Edlén equation for the refractive index of air Metrologia 31 315–316
[19] Ciddor P E 1996 Refractive index of air: new equations for the visible and near infrared Appl. Opt. 35 1566–1573
[20] Bönsch G, Potulski E 1998 Measurement of the refractive index of air and comparison with modified Edlén's formulae Metrologia 35 133–139
[21] Leach R K, Jackson K, Hart A 1997 Measurement of gauge blocks by interferometry: measurement of the phase change at reflection NPL Report MOT 11
[22] Decker J E, Pekelsky J R 1997 Uncertainty evaluation for the measurement of gauge blocks by optical interferometry Metrologia 34 479–493
[23] Haitjema H, Kotte G 1998 Long gauge block measurements based on a Twyman-Green interferometer and three stabilized lasers Proc. SPIE 3477 25–34
[24] Preston-Thomas H 1990 The International Temperature Scale of 1990 (ITS-90) Metrologia 27 3–10
CHAPTER 5
Displacement measurement

5.1 Introduction to displacement measurement

At the heart of all instruments that measure a change in length, or coordinates, are displacement sensors. Displacement sensors measure the distance between a start position and an end position, for example the vertical distance moved by a surface measurement probe as it responds to surface features. Displacement sensors can be contacting or non-contacting, and often can be configured to measure velocity and acceleration. Displacement sensors can be used to measure a whole range of measurands such as deformation, distortion, thermal expansion, thickness (usually by using two sensors in a differential mode), vibration, spindle motion, fluid level, strain, mechanical shock and many more. Many length sensors are relative in their operation, i.e. they have no zero or datum. For this type of sensor the zero of the system is some arbitrary position at power-up. An example of a relative system is a laser interferometer. Many encoder-based systems have a defined datum mark that defines the zero position or have absolute position information encoded on the track. An example of an absolute sensor is a laser time-of-flight system or certain types of angular encoder. There are many types of displacement sensor that can achieve resolutions of the order of nanometres and less, and only the most common types are discussed here. The reader can consult several modern reviews and books that discuss many more forms of displacement sensor (see for example [1–3]). Displacement sensors are made up of several components, including the actual sensing device, a transduction mechanism to convert the measurement signal to an electrical signal, and signal-processing electronics. Only the measurement mechanisms will be covered here, but there are several comprehensive texts that can be consulted on the transduction and signal processing systems (see for example [4]).
CONTENTS

Introduction to displacement measurement
Displacement interferometry
Capacitive displacement sensors
Inductive displacement sensors
Optical encoders
Optical fibre sensors
Calibration of displacement sensors
References
5.2 Displacement interferometry

5.2.1 Basics of displacement interferometry

Displacement interferometry is usually based on the Michelson configuration or some variant of that basic design. In chapter 4 we introduced the Michelson and Twyman-Green interferometers for the measurement of static length and most of the practicalities in using such interferometers apply to displacement measurement. Displacement measurement, being simply a change in length, is usually carried out by counting the number of fringes as the object being measured (or reference surface) is displaced. Just as with gauge block interferometry the displacement is measured as an integer number of whole fringes and a fringe fraction. Most displacement interferometers require two fringe patterns that are 90° out of phase (referred to as phase quadrature) to allow bi-directional fringe counting and to simplify the fringe analysis. Photodetectors and digital electronics are used to count the fringes and the fraction is determined by electronically sub-dividing the fringe [5]. With this method, fringe sub-divisions of λ/1000 are common, giving sub-nanometre resolutions. There are many homodyne and heterodyne interferometers commercially available and the realization of sub-nanometre accuracies in a practical set-up is an active area of research [6]. Many of the modern advances in high-accuracy interferometry come from the community searching for the effects of gravitational waves [7].
5.2.2 Homodyne interferometry

Figure 5.1 shows a homodyne interferometer configuration. The homodyne interferometer uses a single frequency, f1, laser beam. Often this frequency is one of the modes of a two-mode stabilized laser (see section 2.9.3.1). The beam from the stationary reference is returned to the beam-splitter with a frequency f1, but the beam from the moving measurement path is returned with a Doppler-shifted frequency of f1 ± δf. These beams interfere in the beam-splitter and enter the photodetector. The Doppler-shifted frequency gives rise to a count rate, dN/dt, which is equal to f1(2v/c), where v is the velocity of the retro-reflector and c is the velocity of light. Integration of the count over time, t, leads to a fringe count, N = 2d/λ, where d is the displacement being measured. In a typical homodyne interferometer using a polarized beam, the measurement arm contains a quarter-wave plate, which results in the measurement and reference beams having a phase separation of 90° (for bi-directional fringe counting). In some cases, where an un-polarized beam is used [8], a coating is applied to the beam-splitter to give the required phase
FIGURE 5.1 Homodyne interferometer configuration.
shift [9]. After traversing their respective paths, the two beams re-combine in the beam-splitter to produce an interference pattern. Homodyne interferometers have an advantage over heterodyne interferometers (see section 5.2.3) because the reference and measurement beams are split at the interferometer and not inside the laser (or at an acousto-optic modulator). This means that the light can be delivered to the interferometer via a standard fibre optic cable. In the heterodyne interferometer a polarization-preserving (birefringent) optical fibre has to be employed [10]. Therefore, fibre temperature or stress changes alter the relative path lengths of the interferometer's reference and measurement beams, causing drift. A solution to this problem is to employ a further photo-detector that is positioned after the fibre optic cable [11]. Homodyne interferometers can have sub-nanometre resolutions and nanometre-level accuracies, usually limited by their non-linearity (see section 5.2.8.4). Their speed limit depends on the electronics and the detector photon noise; see also section 5.2.4. For a speed of 1 m·s⁻¹ and four counts per 0.3 µm cycle, a 3 MHz signal must be measured within 1 Hz. Maximum speeds of 4 m·s⁻¹ with nanometre resolutions are claimed by some instrument manufacturers.
5.2.3 Heterodyne interferometry

Figure 5.2 shows a heterodyne interferometer configuration. The output beam from a dual-frequency laser source contains two orthogonal polarizations, one
FIGURE 5.2 Heterodyne interferometer configuration.
with a frequency of f1 and the other with a frequency of f2 (separated by about 3 MHz using the Zeeman effect [12] or some other means – see section 2.9.4). A polarizing beam-splitter reflects the light with frequency f1 into the reference path. Light with frequency f2 passes through the beam-splitter into the measurement path where it strikes the moving retro-reflector, causing the frequency of the reflected beam to be Doppler shifted by δf. This reflected beam is then combined with the reference light in the beam-splitter and returned to a photodetector with a beat frequency of f2 − f1 ± δf. This signal is mixed with the reference signal that continuously monitors the frequency difference, f2 − f1. The beat difference, δf, gives rise to a count rate, dN/dt, which is equal to f2(2v/c), where v is the velocity of the retro-reflector and c is the velocity of light. Integration of the count over time, t, leads to a fringe count, N = 2d/λ, where d is the displacement being measured. With a typical reference beat of around 3 MHz, it is possible to monitor δf values up to 3 MHz before introducing ambiguities due to the beat crossing through zero. This limits the target speed possible in this case to less than 1 m·s⁻¹, which could be a constraint in some applications. An alternative method of producing a two-frequency laser beam is to use an acousto-optic frequency shifter. This method has the advantage that the frequency difference can be much higher, so that higher count rates can be handled [13]. Many variations on the theme in Figure 5.2 have been developed which improve the speed of response, the measurement accuracy and the resolution. Modern commercial heterodyne interferometers can be configured to measure both displacement and angle (see for example the xy interferometers in [14]).
5.2.4 Fringe counting and sub-division

There are two main types of optical fringe counting method: hardware fringe counting and software fringe counting [15]. Hardware fringe counting [5] utilises hardware circuits to subdivide and count interference fringes. Its principle of operation is as follows. Two interference signals (sine and cosine) with a π/2 phase difference are converted into two square waves by means of a trigger circuit. Activated by the rising edge of the sine-equivalent square wave, a reversible counter adds or subtracts counts according to the moving direction of the measured object, which is determined by the level of the cosine-equivalent square wave at the rising edge of the sine-equivalent square wave. The advantages of the hardware fringe counting method are good real-time performance and relatively simple realization. However, the electronically countable shift of π/2 corresponds to a phase shift of λ/4 (or λ/8 in a double-pass interferometer – see section 5.2.5), which defines the resolution limit for most existing hardware fringe counting systems. Software fringe counting mainly uses software to subdivide and count interference fringes [16]. Its basic principle is that the sine and cosine interference signals, when properly amplified, can be converted by an analogue-to-digital converter (ADC) and then processed by a digital computer to give the number of counts. Compared with hardware fringe counting, software fringe counting can overcome counting errors caused by random oscillations of the interference signal, and is better at discriminating the direction of movement. However, a measurement system that uses software fringe counting can deal only with low-frequency interference signals owing to the relatively slow conversion rate of ADCs.
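The software approach can be illustrated with a short sketch using synthetic quadrature signals and assumed parameters: the sine and cosine signals are combined with arctan2, the phase is unwrapped to keep track of whole fringes, and the result is scaled to displacement using λ/2 per fringe for a single-pass interferometer.

    import numpy as np

    def displacement_from_quadrature(sin_sig, cos_sig, wavelength):
        """Convert quadrature interference signals to displacement (single-pass Michelson)."""
        phase = np.unwrap(np.arctan2(sin_sig, cos_sig))    # continuous phase in radians
        return phase * wavelength / (4 * np.pi)             # 2*pi of phase = lambda/2 of displacement

    # Synthetic test: a stage moving linearly by 2 micrometres
    lam = 633e-9
    true_d = np.linspace(0, 2e-6, 5000)
    phase = 4 * np.pi * true_d / lam
    d = displacement_from_quadrature(np.sin(phase), np.cos(phase), lam)

    print(d[-1])    # approximately 2e-6 m, recovered from the fringe signals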
5.2.5 Double-pass interferometry

The simple Michelson interferometer requires a high degree of alignment and requires that alignment to be maintained. The use of retro-reflectors relaxes the alignment requirements, but it may not always be possible to attach a retro-reflector (usually a cube-corner or a cat's eye) to the target. The Michelson interferometer may be rendered insensitive to mirror misalignment by double-passing each arm of the interferometer and inverting the wavefronts between passes. An arrangement is shown in Figure 5.3, where double passing is achieved with a polarizing beam-splitter and two quarter-wave plates, and wavefront inversion by a cube-corner retro-reflector. Note that the beams are shown as laterally separated in Figure 5.3. This separation is not necessary but may be advantageous to stop light travelling back to the source. Setting up the components appropriately [17] allows a high degree of
FIGURE 5.3 Optical arrangement to double pass a Michelson interferometer.
alignment insensitivity. Note that such an arrangement has been used in the differential interferometer in section 5.2.6.
5.2.6 Differential interferometry

Figure 5.4 is a schema of a differential plane mirror interferometer developed at NPL [18]. The beam from the laser is split by a Jamin beam-splitter, creating two beams that are displaced laterally and parallel to each other. Figure 5.4 shows how polarization optics can be used to convert the Michelson part of the interferometer into a plane mirror configuration, but a retro-reflecting configuration could just as easily be employed. After a double passage through the wave-plate, the beams are transmitted back to the Jamin beam-splitter where they recombine and interfere. The design of the Jamin beam-splitter coating is such that the two signals captured by the photo-detectors are in phase quadrature and so give the optimum signal-to-noise conditions for fringe counting and sub-dividing. In this configuration only the differential motion of the mirrors is detected. The differential nature of this interferometer means that many sources of uncertainty are common to both the reference and measurement paths, essentially allowing for common noise rejection. For example, with a conventional Michelson configuration, where the reference and measurement paths
FIGURE 5.4 Schema of a differential plane mirror interferometer.
are orthogonal, changes in the air refractive index in one path can be different from those in the other path. Differential interferometers can have sub-nanometre accuracies, as has been confirmed using X-ray interferometry [19]. When a Heydemann correction is applied (see section 5.2.8.5), such interferometers can have non-linearities of a few tens of picometres.
5.2.7 Swept-frequency absolute distance interferometry

Swept-frequency interferometry using laser diodes or other solid-state lasers is becoming popular due to the versatility of its sources and its ability to measure length absolutely. Currently such interferometers achieve high resolution but relatively low accuracies, and tend to be used for applications over metres. Consider the case of a laser diode aligned to an interferometer of free spectral range, νR. If the output of the laser is scanned through a frequency range νs, N fringes are generated at the output of the
interferometer [20]. Provided the frequency scan range is accurately known, the free spectral range and hence the optical path length, L, may be determined from counting the number of fringes. For a Michelson or Fabry-Pérot interferometer in vacuum, the optical path length is given by

L = c/(2νR) = Nc/(2νs).     (5.1)
It is generally convenient to use feedback control techniques to lock the laser to particular fringes at the start and finish of the scan and so make N integral. For scans of up to several gigahertz, two lasers are typically used, which are initially tuned to the same frequency. One laser is then scanned by νs, and the difference frequency is counted directly as a beat by means of a fast detector with several gigahertz of frequency response. This, together with the number of fringes scanned, enables the optical path length to be determined. The number and size of the sweeps can be used to improve the accuracy and range of the interferometer [21].
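A short sketch of equation (5.1) with assumed values shows how the absolute optical path length follows from the fringe count and the frequency scan range:

    c = 299_792_458.0      # speed of light in m/s

    def absolute_length(N, scan_range_hz):
        """Optical path length from N fringes counted over a frequency scan, equation (5.1)."""
        return N * c / (2 * scan_range_hz)

    # Assumed example: 1000 fringes counted over a 100 GHz scan
    print(absolute_length(1000, 100e9))   # approximately 1.5 m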
5.2.8 Sources of error in displacement interferometry Many of the sources of uncertainty discussed in section 4.5.4 also apply to displacement interferometry. There will be two types of error sources that will lead to uncertainties. Firstly, there will be error sources that are proportional to the displacement being measured, L, commonly referred to as cumulative errors. Secondly, there will be error sources that are independent of the displacement being measured, commonly referred to as non-cumulative errors. When calculating the measurement uncertainty, the standard uncertainties due to the cumulative and non-cumulative error sources need to be combined in an appropriate manner (see section 2.8.3), and an expanded uncertainty calculated. An example of an uncertainty calculation for the homodyne displacement interferometers on a traceable surface texture measuring instrument is given elsewhere [22] and the most prominent error sources are discussed here. The effects of the variation in the vacuum wavelength and the refractive index of the air will be the same as described in section 4.5.4, and the effect of the Abbe error is described in section 3.4.
5.2.8.1 Thermal expansion of the metrology frame

All measuring instruments have thermal and metrology loops (see section 3.6). In the case of a Michelson interferometer, with reference to Figure 4.7, both loops run from the laser, follow the optical beam paths through the optics, and travel back to the laser via whatever mechanical base the optics
are mounted on. Any thermal expansion in these components, due to changes in the ambient temperature, will cause an error in the length measured by the interferometer. Such errors can be corrected for as described in section 3.7.1 and need to be considered in the instrument uncertainty analysis. Thermal expansion errors are cumulative. The change in length due to thermal expansion, Δl, of a part of length, l, is given by

$$\Delta l = \alpha l \Delta\theta \qquad (5.2)$$

where α is the coefficient of linear thermal expansion and Δθ is the change in temperature.
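A minimal worked example of equation (5.2) is given below; the material, length and temperature change are assumed values chosen only to show the size of the effect.

```python
# Thermal expansion error, equation (5.2): delta_l = alpha * l * delta_theta.
# Example values (assumed): a 100 mm aluminium part and a 0.1 K temperature change.

alpha = 23e-6        # coefficient of linear thermal expansion / K^-1 (aluminium, approx.)
l = 0.1              # length of the part / m
delta_theta = 0.1    # change in temperature / K

delta_l = alpha * l * delta_theta
print(f"thermal expansion error = {delta_l * 1e9:.0f} nm")   # 230 nm for these values
```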
5.2.8.2 Deadpath length

Deadpath length, d, is defined as the difference in distance in air between the reference and measurement reflectors and the beam-splitter when the interferometer measurement is initiated. Deadpath error occurs when there is a non-zero deadpath and environmental conditions change during a measurement. Equation (5.3) yields the displacement, D, for a single-pass interferometer

$$D = \frac{N\lambda_{\mathrm{vac}}}{n_2} - \frac{\Delta n \, d}{n_2} \qquad (5.3)$$

where N is half the number of fringes counted during the displacement, n2 is the refractive index at the end of the measurement, Δn is the change in refractive index over the measurement time (that is, n2 = n1 + Δn) and n1 is the refractive index at the start of the measurement. The second term on the right-hand side of equation (5.3) is the deadpath error, which is non-cumulative (although it is dependent on the deadpath length). Deadpath error can be eliminated by presetting counts at the initial position to a value equivalent to d.
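The following sketch evaluates the two terms of equation (5.3); the fringe count, deadpath length and refractive index change are invented example values.

```python
# Deadpath error, equation (5.3): D = N*lambda_vac/n2 - delta_n*d/n2.
# All input values are examples chosen for illustration.

lambda_vac = 632.8e-9    # He-Ne vacuum wavelength / m
n1 = 1.000271            # air refractive index at the start of the measurement (example)
delta_n = 1e-7           # change in refractive index during the measurement (example)
n2 = n1 + delta_n        # refractive index at the end of the measurement
d = 0.05                 # deadpath length / m (50 mm, example)
N = 31_606               # half the number of fringes counted (example)

displacement_term = N * lambda_vac / n2
deadpath_error = -delta_n * d / n2       # non-cumulative, but depends on d

print(f"displacement term = {displacement_term * 1e3:.6f} mm")
print(f"deadpath error    = {deadpath_error * 1e9:.2f} nm")
```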
5.2.8.3 Cosine error

Figure 5.5 shows the effect of cosine error on an interferometer. The moving stage is at an angle to the laser beam (the scale) and the measurement will have a cosine error, Δl, given by

$$\Delta l = l(1 - \cos\theta) \qquad (5.4)$$
where l and θ are defined in Figure 5.5. Cosine error always causes a measurement system to measure short and is a cumulative effect. The obvious way to minimize the effect of cosine error is to align the interferometer correctly. However, no matter how well aligned the system appears to be, there will always be a small residual cosine error. This residual error
FIGURE 5.5 Cosine error with an interferometer.
needs to be taken into account in the uncertainty analysis of the system. For small angles, equation (5.4) can be approximated by

$$\Delta l = \frac{l\theta^2}{2}. \qquad (5.5)$$
Because of the form of equation (5.5), cosine error is often referred to as a second-order effect, in contrast to Abbe error, which is a first-order effect. The second-order nature means that the error diminishes quickly as the alignment is improved, but has the disadvantage that its magnitude is difficult to estimate once it becomes significant.
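A short numerical comparison of equations (5.4) and (5.5), with an assumed travel and misalignment angle, illustrates how small the residual error becomes for a reasonably well-aligned system:

```python
# Cosine error, equations (5.4) and (5.5): delta_l = l*(1 - cos(theta)) ~ l*theta^2/2.
# The travel and misalignment angle are assumed example values.

import math

l = 0.1          # measured displacement / m (100 mm)
theta = 1e-4     # residual misalignment / rad (roughly 20 arcseconds)

exact = l * (1 - math.cos(theta))
approx = l * theta ** 2 / 2

print(f"exact cosine error          = {exact * 1e9:.3f} nm")
print(f"second-order approximation  = {approx * 1e9:.3f} nm")
```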
5.2.8.4 Non-linearity

Both homodyne and heterodyne interferometers are subject to non-linearities in the relationship between the measured phase difference and the displacement. The many sources of non-linearity in heterodyne interferometers are discussed in [23], and further discussed, measured and extended in [24,25]. These sources include misalignment of the laser polarization axes with respect to the beam-splitter, ellipticity of the light from the laser source, differential transmission between the two arms of the interferometer, rotation of the plane of polarization by the retro-reflectors, leakage of light with the unwanted polarization through the beam-splitter, and lack of geometrical perfection of the wave plates used. For homodyne interferometers, the main source of non-linearity [26] is attributed to polarization mixing caused by imperfections in the polarizing beam-splitters, although there are several other sources [6]. The various sources of non-linearity give rise to periodic errors: a first-order harmonic with one cycle per fringe and a second-order harmonic with two cycles per fringe. Errors due to non-linearities are usually
of the order of a few nanometres but can be reduced to below a nanometre with careful alignment and high-quality optics. There have been many attempts to correct for non-linearity in interferometers with varying degrees of success (see for example [27,28]). Recently researchers have developed a heterodyne interferometer for which a zero periodic non-linearity is claimed [29].
5.2.8.5 Heydemann correction

When making displacement measurements at the nanometre level, the sine and cosine signals from interferometers need to be corrected for dc offsets, differential gains and a quadrature angle that is not exactly 90°. The method described here is that due to Birch [5] and is a modified version of that originally developed by Heydemann [30]. There are many ways to implement such a correction in both software and hardware, but the basic mathematics is that presented here. The full derivation is given, as this is an essential correction in many MNT applications of interferometry. This method only requires a single-frequency laser source (homodyne) and does not require polarization optics. Birch [5] used computer simulations of the correction method to predict a fringe-fractioning accuracy of 0.1 nm. Other methods, which also claim to obtain sub-nanometre uncertainties, use heterodyne techniques [31] and polarization optics [32]. Heydemann used two equations that describe an ellipse

$$U_{1d} = U_1 + p \qquad (5.6)$$

and

$$U_{2d} = \frac{U_2\cos\alpha - U_1\sin\alpha}{G} + q \qquad (5.7)$$

where U1d and U2d represent the noisy signals from the interferometer containing the correction terms p, q and α as defined by equations (5.14), (5.15) and (5.12) respectively, G is the ratio of the gains of the two detector systems, and U1 and U2 are given by

$$U_1 = R_D\cos\delta \qquad (5.8)$$

$$U_2 = R_D\sin\delta \qquad (5.9)$$

where δ is the instantaneous phase of the interferograms. If equations (5.6) and (5.7) are combined they describe an ellipse given by

$$R_D^2 = (U_{1d} - p)^2 + \frac{\left[(U_{2d} - q)G + (U_{1d} - p)\sin\alpha\right]^2}{\cos^2\alpha}. \qquad (5.10)$$
If equation (5.10) is now expanded out and the terms are collected together, an equation of the following form is obtained

$$AU_{1d}^2 + BU_{2d}^2 + CU_{1d}U_{2d} + DU_{1d} + EU_{2d} = 1 \qquad (5.11)$$

with

$$A = \left[R_D^2\cos^2\alpha - p^2 - G^2q^2 - 2Gpq\sin\alpha\right]^{-1}$$
$$B = AG^2$$
$$C = 2AG\sin\alpha$$
$$D = -2A\left[p + Gq\sin\alpha\right]$$
$$E = -2AG\left[Gq + p\sin\alpha\right].$$

Equation (5.11) is in a form suitable for using a linearized least-squares fitting routine [33] to derive the values of A through E, from which the correction terms can be derived using the following set of transforms

$$\alpha = \sin^{-1}\left[\frac{C}{(4AB)^{1/2}}\right] \qquad (5.12)$$

$$G = \left(\frac{B}{A}\right)^{1/2} \qquad (5.13)$$

$$p = \frac{2BD - EC}{C^2 - 4AB} \qquad (5.14)$$

$$q = \frac{2AE - DC}{C^2 - 4AB} \qquad (5.15)$$

$$R_D = \frac{\left[4B(1 + Ap^2 + Bq^2 + Cpq)\right]^{1/2}}{(4AB - C^2)^{1/2}}. \qquad (5.16)$$
Consequently, the interferometer signals are corrected by using the two inversions

$$U_1' = U_{1d} - p \qquad (5.17)$$

and

$$U_2' = \frac{(U_{1d} - p)\sin\alpha + G(U_{2d} - q)}{\cos\alpha} \qquad (5.18)$$

where U1' and U2' are now the corrected phase quadrature signals and, therefore, the phase of the interferometer signal is derived from the arctangent of (U2'/U1').
The arctangent function varies from −π/2 to +π/2, whereas for ease of fringe-fractioning a phase range, θ, of 0 to 2π is preferable. This is satisfied by using the following equation

$$\theta = \tan^{-1}\left(U_1'/U_2'\right) + \pi/2 + L \qquad (5.19)$$

where L = 0 when U1d > p and L = π when U1d < p. The strength and weakness of a Heydemann-corrected system is that it is self-referencing: it uses its own result (for example, the deviations from the fitted ellipse) to predict the residual deviations. However, there are uncertainty sources that still give deviations even when the Heydemann correction is applied perfectly, for example, so-called ghost reflections.
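A minimal numerical sketch of the correction is given below (assuming the numpy library; the offsets, gain ratio and quadrature error used to simulate the distorted signals are invented test values). It fits the ellipse of equation (5.11) by linear least squares, recovers p, q, G and α from equations (5.12) to (5.15), and applies the inversions of equations (5.17) and (5.18) before taking the arctangent of the corrected signals.

```python
# Sketch of the Heydemann/Birch quadrature correction (equations 5.6 to 5.19).
# The signal distortions simulated here are invented test values, not measured data.

import numpy as np

# --- simulate distorted quadrature signals over a few fringes --------------
delta = np.linspace(0.0, 6 * np.pi, 2000)              # true interferometer phase
R, p, q, G, alpha = 1.0, 0.05, -0.03, 1.1, np.deg2rad(2.0)
U1d = R * np.cos(delta) + p                             # equation (5.6)
U2d = (R * np.sin(delta) * np.cos(alpha)
       - R * np.cos(delta) * np.sin(alpha)) / G + q     # equation (5.7)

# --- fit the ellipse A*U1^2 + B*U2^2 + C*U1*U2 + D*U1 + E*U2 = 1 (eq. 5.11) -
M = np.column_stack([U1d ** 2, U2d ** 2, U1d * U2d, U1d, U2d])
A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(U1d), rcond=None)[0]

# --- recover the correction terms (equations 5.12 to 5.15) -----------------
alpha_fit = np.arcsin(C / np.sqrt(4 * A * B))
G_fit = np.sqrt(B / A)
p_fit = (2 * B * D - E * C) / (C ** 2 - 4 * A * B)
q_fit = (2 * A * E - D * C) / (C ** 2 - 4 * A * B)

# --- correct the signals (equations 5.17 and 5.18) and extract the phase ---
U1 = U1d - p_fit
U2 = ((U1d - p_fit) * np.sin(alpha_fit) + G_fit * (U2d - q_fit)) / np.cos(alpha_fit)
phase = np.unwrap(np.arctan2(U2, U1))                   # corrected phase / rad

lam = 632.8e-9                                          # He-Ne wavelength / m
displacement = phase * lam / (4 * np.pi)                # Michelson: 2*pi per lambda/2
print(f"recovered p, q, G, alpha: {p_fit:.3f}, {q_fit:.3f}, {G_fit:.3f}, "
      f"{np.degrees(alpha_fit):.2f} deg")
print(f"total displacement = {displacement[-1] * 1e9:.1f} nm")
```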
5.2.8.6 Random error sources

There are many sources of random error that can affect an interferometer. Anything that can change the optical path, or the mechanical part of the metrology loop, can give rise to errors in the measured displacement. Examples include seismic and acoustic vibration (see section 3.9), air turbulence (causing random fluctuations of the air refractive index) and electronic noise in the detectors and amplifier electronics. Random errors are usually non-cumulative and can be quantified using repeated measurements. Homodyne systems measure phase by comparing the intensities of two sinusoidal signals (sine and cosine). By contrast, modern heterodyne systems measure phase by timing the arrival of zero crossings on a sinusoidal signal. Because the signal slope at the zero crossings is nominally 45°, phase noise is approximately equal to intensity noise. Therefore, the influence of noise on both systems is effectively the same.
5.2.8.7 Other sources of error in displacement interferometers

There are many sources of error that only have a significant effect when trying to measure to accuracies of nanometres or less using interferometry. Due to the very high spatial and temporal coherence of the laser source, stray light can interfere with the beams reflected from the surfaces in the reference and measurement arms of the interferometer. The dominant effects are usually due to unwanted reflections and isolated strong point scatterers, both leading to random and non-random spatial variations in the scattered phase and amplitude [13]. These effects can be of the order of a nanometre (see for example [22]). To minimize the effects of stray reflections, all the optical components should be thoroughly cleaned, the retro-reflectors (or mirrors) should be mounted at a non-orthogonal angle to the beam propagation direction (to avoid reflections off the front surfaces) and all
the non-critical optical surfaces should be anti-reflection coated. It is extremely difficult, if not impossible, to measure the amplitude of the stray light, simply because it propagates in the same direction as the main beams. The laser source also gives rise to systematic errors from phase shifts and changes in the curvature of the wavefronts, and to diffraction effects [34]. There will also be quantum effects [35] and even photon bounce [36]. These effects are very difficult to quantify or measure but are usually significantly less than a nanometre.
5.2.9 Angular interferometers

Section 2.6 discussed the possibility of determining an angle from the ratio of two lengths. This method is also applicable in interferometry. Figure 5.6 shows a typical optical arrangement of an interferometer set up for angular measurements. The angular optics are used to create two parallel beam paths between the angular interferometer and the angular reflector. The distance between the two beam paths is found by measuring the separation of the retro-reflectors in the angular reflector. This measurement is made either directly or by calibrating a scale factor against a known angular standard. The beam that illuminates the angular optics contains two frequencies, f1 and f2 (heterodyne). A polarizing beam-splitter in the angular interferometer splits the two frequencies, f1 and f2, so that they travel along separate paths. At the start position the angular reflector is assumed to be approximately at a zero position (i.e. the angular measurements are relative). At this position the two paths have a small difference in length. As the angular reflector is
FIGURE 5.6 Schema of an angular interferometer.
rotated relative to the angular interferometer, the relative lengths of the two paths will change. This rotation causes a Doppler-shifted frequency change in the beam returned from the angular interferometer to the photodetector. The photodetector measures a fringe difference given by (f1 ± Δf1) − (f2 ± Δf2). The returned difference is compared with the reference signal, (f1 − f2). This difference is related to velocity and then to distance. The distance is then converted to an angle using the known separation of the reflectors in the angular reflector. Other arrangements of angular interferometer are possible using plane mirrors, but the basic principle is the same. Angular interferometers are generally used for measuring small angles (less than 10°) and are commonly used for measuring guideway errors in machine tools and measuring instruments.
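The conversion from a path-length difference to an angle is a simple ratio of two lengths; the sketch below uses assumed values for the reflector separation and the measured path difference.

```python
# Angle from the ratio of two lengths, as used by an angular interferometer.
# The path-length difference and the reflector separation are example values.

import math

reflector_separation = 0.03    # separation of the two retro-reflectors / m (example)
path_difference = 1.5e-6       # measured difference between the two paths / m (example)

angle_rad = math.asin(path_difference / reflector_separation)
angle_arcsec = math.degrees(angle_rad) * 3600

print(f"angle = {angle_rad * 1e6:.1f} microradians ({angle_arcsec:.1f} arcseconds)")
```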
5.3 Capacitive displacement sensors

Capacitive sensors are widely used for non-contact displacement measurement. Capacitive sensors can have very high dynamic responses (up to 50 kHz), sub-nanometre resolution, ranges up to 10 mm, good thermal stability and very low hysteresis (mainly due to their non-contact nature). Capacitive sensors measure the change in capacitance as a conducting target is displaced with respect to the sensor. Figure 5.7 shows a capacitive sensor and measurement target. In this parallel-plate capacitor arrangement, the capacitance, C, is given by

$$C = \frac{\varepsilon A}{d} \qquad (5.20)$$

where ε is the permittivity of the medium between the sensor and the target, A is the effective surface area of the sensor and d is the distance between the sensor and the target surface. This relationship is not highly dependent on the target conductivity and hence capacitance sensors can be used with
FIGURE 5.7 A typical capacitance sensor set-up.
a range of materials. Note that capacitance sensors can also be used to measure dielectric thickness and density by varying ε and keeping d constant. Due to the need to measure very low values of capacitance (typically from 0.01 pF to 1 pF), capacitance sensors usually require a guard electrode to minimise the effect of stray capacitance. Capacitance sensors are used in the semiconductor, disk drive and precision manufacturing industries, often to control the motion of a rotating shaft. Modern MEMS devices also employ thin membranes and comb-like structures to act as capacitance sensors (and actuators) for pressure, acceleration and angular rate (gyroscopic) measurement [37,38]. High-accuracy capacitance sensors are used for control of MNT motion devices [39] and form the basis for a type of near-field microscope (the scanning capacitance microscope) [40]. The non-linear dependence of capacitance on displacement can be overcome by using a cylindrical capacitor or by moving a flat dielectric plate laterally between the plates of a parallel-plate capacitor [41]. These configurations give a linear change of capacitance with displacement. The environment in which it operates will affect the performance of a capacitance sensor. As well as thermal expansion effects, the permittivity of the dielectric material (including air) will change with temperature and humidity [42]. Misalignment of the sensor and measurement surface will also give rise to a cosine effect. Capacitance sensors are very similar to some inductive or eddy current sensors (i.e. sensors that use the electromagnetic as opposed to the electrostatic field). Many of the points raised above relate to both types of sensor. See [42] for a fuller account of the theory and practice behind capacitive sensors.
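A hedged sketch of equation (5.20) in use, inverting a measured capacitance to give the sensor-to-target gap (the electrode size and capacitance value are assumed example figures):

```python
# Parallel-plate capacitance gauge, equation (5.20): C = epsilon * A / d,
# inverted to give the gap from a measured capacitance. Example values only.

import math

epsilon_0 = 8.854e-12        # permittivity of free space / F m^-1
epsilon_r = 1.0006           # relative permittivity of air (approximate)
A = math.pi * (2e-3) ** 2    # effective electrode area / m^2 (2 mm radius, example)

def gap_from_capacitance(C):
    """Return the sensor-to-target gap d (m) for a measured capacitance C (F)."""
    return epsilon_0 * epsilon_r * A / C

C_measured = 0.5e-12         # 0.5 pF, within the typical 0.01 pF to 1 pF range
print(f"gap = {gap_from_capacitance(C_measured) * 1e6:.0f} um")
```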
5.4 Inductive displacement sensors

As discussed above, inductive sensors are very similar to capacitive sensors. However, inductive sensors are not dependent upon the material in the sensor/target gap so they are well adapted to hostile environments where fluids may be present in the gap. They are sensitive to the target material and must be calibrated for each material that they are used with. They also require a certain thickness of target material to operate (usually fractions of a millimetre, dependent on the operating frequency). Whilst they may have nanometre resolutions, their range of operation is usually some millimetres. Their operating frequencies can be 100 kHz and above. Another form of contacting sensor, based on inductive transduction, is the linear variable differential transformer (LVDT). An LVDT probe consists
of three coils wound on a tubular former. A centre-tapped primary coil is excited by an oscillating signal of between 50 Hz and 30 kHz and a non-magnetic rod, usually with an iron core, moves in and out of the tube. Figure 5.8 illustrates this design. As the rod moves, the mutual inductance between the primary coil and the two secondary coils changes. A voltage opposition circuit gives an output potential difference that is directly proportional to the difference in mutual inductance of the two secondary coils, which is in turn proportional to the displacement of the rod within the tube. When the core is central between the two secondary coils, the LVDT probe is at its null position and the output potential difference is zero. LVDTs have a wide variety of ranges, typically ±100 μm to ±500 mm, and linearities of 0.5 % or better. LVDTs have a number of attractive features. First, there is no physical contact between the movable core and the coil structure, which results in frictionless measurement. The zero output at the null position means that the signal can be amplified by an unlimited amount, and this essentially gives an LVDT probe infinite resolution, the only limitation being caused by the external signal-conditioning electronics. There is complete isolation between the input and output, which eliminates the need for buffering when interfacing to signal-conditioning electronics. The repeatability of the null position is inherently very stable, making an LVDT
FIGURE 5.8 Schematic of an LVDT probe.
FIGURE 5.9 Error characteristic of an LVDT probe.
probe a good null-position indicator. Insensitivity to radial core motion allows an LVDT probe to be used in applications where the core does not move in an exactly straight line. Lastly, an LVDT probe is extremely rugged and can be used in relatively harsh industrial environments (although it is sensitive to magnetic fields). Figure 5.9 shows the 'bow-tie' error characteristic of a typical LVDT probe over its linear or measuring range. Probes are usually operated around the null position, for obvious reasons, although, depending on the displacement accuracy required, a much larger region of the probe's range can be used. LVDTs find uses in advanced machine tools, robotics, construction, avionics and computerised manufacturing. Air-bearing LVDTs are now available with improved linearities and less damping. Modern LVDTs can have multiple axes [43] and use digital signal processing [44] to correct for non-linearities and to compensate for environmental conditions and fluctuations in the control electronics [45].
5.5 Optical encoders

Optical encoders operate by counting scale lines with the use of a light source and a photodetector. They usually transform the light distribution into two sinusoidal electrical signals that are used to determine the relative position between a scanning head and a linear scale. The grating pitch (resolution) of the scales varies from less than 1 μm to several hundred micrometres. As with interferometers, electronic interpolation of the signals can be used to
produce sub-nanometre resolution [5] and some of the more advanced optical encoders can have accuracies at this level [46–48]. The most common configuration of an optical encoder is based upon a double grating system; one grating acts as the scale and the other is placed in the reading head. The grating pair produces a fringe pattern at a certain distance from the second grating (usually a Lau or moiré pattern). The reading head has a photodetector that transforms the optical signal into an electrical signal. When a relative displacement between the reading head and the scale is produced, the total light intensity at the photodetector varies periodically. The electronic signals from the photodetector are analysed in the same manner as the quadrature signals from an interferometer (see section 5.2.4). Figure 5.10 is a schema of a commercial optical encoder system capable of sub-nanometre resolution. The period of the grating is 512 nm. The reading head contains a laser diode, collimating optics and an index grating with a period of 1024 nm (i.e. twice the period of the scale). The signals collected by the detectors are transformed into quadrature signals with a period of 128 nm (i.e. a quarter of the scale period). There are a number of errors that can affect the performance of an optical encoder, which can be mechanical, electrical or optical [49]. Mechanical
FIGURE 5.10 Schema of an optical encoder.
errors arise from deformation of the parts, thermal expansion and vibration. There may also be errors in the production of the gratings or dust particles on the gratings. Variations in the light intensity, mechanical rotations between the two gratings or variations in the amplification of the optical signals may also occur. Correct design of the scanning head so that the encoder is robust to variations in the distances between the parts, rotations, variations in illumination conditions, etc. can minimize many of the error sources. Optical encoders can be linear or rotary in nature. The rotary version simply has the moving grating encoded along a circumference. The linear and angular versions often have integral bearings due to the difficulty of aligning the parts and the necessity for a constant light intensity. Optical encoders are often used for machine tools, CMMs, robotics, assembly devices and precision slideways. A high-accuracy CMM that uses optical encoders is discussed in section 9.4.1.1. Some optical encoders can operate in more than one axis by using patterned gratings [50].
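The quadrature signals of an encoder are converted to displacement in the same way as interferometer signals: whole signal periods are counted and the fraction of a period is interpolated from the arctangent. The sketch below assumes the 128 nm signal period quoted above; the count and signal values are invented example readings.

```python
# Displacement from encoder quadrature signals: whole periods counted plus an
# interpolated fraction of a period. Count and signal values are example readings.

import math

signal_period = 128e-9       # period of the quadrature signals / m (from the text)

def encoder_displacement(whole_periods, sin_signal, cos_signal):
    """Counted whole signal periods plus the interpolated fraction of a period."""
    fraction = (math.atan2(sin_signal, cos_signal) % (2 * math.pi)) / (2 * math.pi)
    return (whole_periods + fraction) * signal_period

# example reading: 1250 whole periods plus a quarter of a period
print(f"{encoder_displacement(1250, 1.0, 0.0) * 1e6:.3f} um")   # 160.032 um
```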
5.6 Optical fibre sensors

Optical fibre displacement sensors are non-contact, relatively cheap and can have sub-nanometre resolution and millimetre ranges at very high operating frequencies (up to 500 kHz). Optical fibres transmit light using the property of total internal reflection; light that is incident on the interface between two media will be totally reflected if the angle of incidence is greater than the critical angle [51]. This condition is satisfied when the refractive indices of the fibre core and its cladding are in the proper proportion (see Figure 5.11). The numerical aperture, NA, of an optical fibre is given by

$$NA = \left(n_1^2 - n_2^2\right)^{1/2} \qquad (5.21)$$

FIGURE 5.11 Total internal reflectance in an optical fibre.
where n1 and n2 are the refractive indices of the fibre core and cladding respectively. This refractive index ratio also governs the efficiency at which light from the source will be captured by the fibre; the more collimated the light from the source, the more light that will be transmitted by the fibre. A multimode optical fibre cable (i.e. one that transmits a number of electromagnetic modes) has a multilayered structure including the fibre, the cladding, a buffer layer, a hard braid and a plastic outer jacket. There are three types of reflective optical fibre sensors, known as bifurcated sensors: hemispherical, fibre pair and random [52]. These three configurations refer to fibre bundles at one end of the sensor (see Figure 5.12). The bundles have one common end (for sensing) and the other end is split evenly into two (for the source and detector) (see Figure 5.13). As the target is moved towards the sensing end, the intensity of the reflected light follows the curve shown in Figure 5.14. Close to the fibre end the response is linear, but it follows a 1/d² curve as the distance from the fibre end increases (d is the distance from the fibre end to the target). The performance of a bifurcated fibre optic sensor is a function of the cross-sectional geometry of the bundle, the illumination exit angle and the distance to the target surface. Tilt of the target surface with respect to the fibre end significantly degrades the performance of a sensor. Optical fibre sensors are immune to electromagnetic interference, very tolerant of temperature changes, and bending or vibration of the fibre does not significantly affect their performance. As a consequence optical fibre sensors are often used in difficult or hazardous environments. Note that only bifurcated fibre optic displacement sensors have been considered here. However, fibre optic sensors can be used to measure a wide range of measurands [53] and can be the basis of very environment-tolerant displacement measuring interferometers [54], often used where there is not sufficient space for bulk
FIGURE 5.12 End view of bifurcated optical fibre sensors, (a) hemispherical, (b) random and (c) fibre pair.
FIGURE 5.13 Bifurcated fibre optic sensor components.
FIGURE 5.14 Bifurcated fibre optic sensor response curve.
optics. Fibre sensing and delivery have been used in some surface topography measuring instruments [55], and fibre sensors are used to measure the displacement of atomic force microscope cantilevers [56].
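As a small numerical aside (assuming typical silica-fibre refractive indices, used here only as an example), equation (5.21) gives the numerical aperture, and the far-field response of a bifurcated sensor falls off as 1/d²:

```python
# Numerical aperture of a step-index fibre, equation (5.21), and the far-field
# 1/d^2 response of a bifurcated sensor. Refractive indices are example values.

import math

n_core, n_cladding = 1.462, 1.447
NA = math.sqrt(n_core ** 2 - n_cladding ** 2)
acceptance_half_angle = math.degrees(math.asin(NA))

print(f"NA = {NA:.3f}, acceptance half-angle = {acceptance_half_angle:.1f} degrees")

def far_field_intensity(d, k=1.0):
    """Reflected intensity in the far-field region, proportional to 1/d^2."""
    return k / d ** 2

print(far_field_intensity(2.0) / far_field_intensity(1.0))   # doubling d quarters the signal
```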
5.7 Calibration of displacement sensors

There are many more forms of displacement sensor than those described in this chapter (see [1,2]). Examples include sensors that use the Hall effect, the piezoelectric effect, ultrasonics, electrical resistance, magnetism and the simple use of a knife-edge in a laser beam [57]. Also, some MNT devices, including MEMS and NEMS sensors, use quantum mechanical effects such as tunnelling and quantum interference [58]. It is often claimed that a sensor has a resolution below a nanometre but it is far
from trivial to prove such a statement. Accuracies of nanometres are even more difficult to prove and often there are non-linear effects or sensor/target interactions that make the measurement result very difficult to predict or interpret. For these reasons, traceable calibration of displacement sensors is essential, especially in the MNT regime.
5.7.1 Calibration using optical interferometry

In order to characterise the performance of a displacement sensor a number of interferometers can be used (provided the laser source has been traceably calibrated; see section 2.9.5). A homodyne or heterodyne set-up (see sections 5.2.2 and 5.2.3 respectively) can be used by rigidly attaching or kinematically mounting an appropriate reflector so that it moves collinearly with the displacement sensor. One must be careful to minimize the effects of Abbe offset (see section 3.4) and cosine error (see section 5.2.8.3), and to reduce any external disturbances. A differential interferometer (see section 5.2.6) can also be used, but over a reduced range. As displacement sensor characteristics are very sensitive over short distances, the limits and limiting factors of interferometric systems for very small displacements become important. For the most common interferometers it is the non-linearity within one wavelength that becomes critical. Even with the Heydemann correction applied, this can be the major error source.
5.7.1.1 Calibration using a Fabry-Pérot interferometer

The Fabry-Pérot interferometer, as described in section 4.4.4, can be used for an accurate calibration at discrete positions. If one mirror in the cavity is displaced, successive interference extrema appear at steps of half a wavelength. If the sensor to be calibrated measures the mirror displacement at the same time, a calibration can be carried out. Such a system is described in [59], where it was used to calibrate a displacement generator with a capacitive feedback system with 0.2 nm uncertainty. As a capacitive system can be assumed to have a smoothly varying non-linear behaviour, discrete calibration steps can feasibly be used. However, fringe-periodic deviations, such as may appear in interferometric systems, cannot be detected in this way. A continuous calibration system is possible if the wavelength can be tuned and accurately measured simultaneously (see section 2.9.5).
5.7.1.2 Calibration using a measuring laser

The stability of an iodine-stabilized He-Ne laser is considered to be one part in 10¹¹ (see section 2.9.3). Relating this stability to the typical length of
a laser cavity (a Fabry-Pérot cavity) of, say, 15 cm, one could conclude that the cavity length is fixed with an uncertainty of 1.5 pm. Of course there are many disturbing factors, such as temperature effects in the air, that make such a small uncertainty in a true displacement measurement hard to achieve. In the set-up described in [60], the iodine standard is stabilized on its successive iodine peaks, and a sensor can be calibrated at a number of discrete points. Thermal drift effects mainly determine the uncertainty; the frequency stability itself contributes only 1.5 pm to the uncertainty. This is probably one of the most obvious traceable displacement measurements possible, although it is difficult to realize in practice. Separate measuring lasers can be used to give a continuous measurement [61,62]. Here the laser frequency can be tuned by displacing one of its mirrors, while the laser frequency is continuously monitored by a beat measurement. Mounting the laser outside the cavity removes the major thermal (error) source, but further complicates the set-up. In [63] a piezoelectric controller applies a displacement to a mirror, and this displacement is measured by both a sensor and a Fabry-Pérot system. The slave laser is stabilized to the Fabry-Pérot cavity, i.e. its frequency is tuned such that it gives a maximum when transmitted through the cavity. At the same time the slave laser frequency is calibrated by a beat measurement against the iodine-stabilized laser. Here too the uncertainties from the frequency measurement are in the picometre range, and thermal and drift effects still dominate [63]. Design considerations include the cavity length, the tuning range of the slave laser, the requirement that the slave laser operates in a single mode, and the range that the frequency counter can measure. Typical values are a 100 mm cavity length and 1 GHz for both the tuning range of the slave laser and the detection range of the photodiode and frequency counter. For a larger frequency range the cavity length can be reduced, but this increases the demands on the frequency measurement. With tuneable diode lasers the cavity length can be reduced to the millimetre level, but this requires different wavelength measurement methods [59].
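The link between laser frequency stability and an equivalent length uncertainty is simply ΔL/L = Δν/ν; the sketch below reproduces the 15 cm cavity example given at the start of this section.

```python
# Equivalent length uncertainty of a Fabry-Perot cavity from the fractional
# frequency stability of the laser: delta_L / L = delta_nu / nu.

cavity_length = 0.15             # m (15 cm, as in the example above)
fractional_stability = 1e-11     # iodine-stabilised He-Ne laser, approximately

delta_L = cavity_length * fractional_stability
print(f"equivalent length uncertainty = {delta_L * 1e12:.1f} pm")   # 1.5 pm
```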
5.7.2 Calibration using X-ray interferometry

The fringe spacing for a single-pass two-beam optical interferometer is equal to half the wavelength of the source radiation and this is its basic resolution before fringe sub-division is necessary. The fringe spacing in an X-ray interferometer is independent of the wavelength of the source; it is determined by the spacing of diffraction planes in the crystal from which X-rays are diffracted [64]. Due to its ready availability and purity, silicon is the most common material used for X-ray interferometers. The atomic lattice
FIGURE 5.15 Schema of an X-ray interferometer.
parameter of silicon can be accurately measured (by diffraction) and is regarded as a traceable standard of length. Therefore, X-ray interferometry allows a traceable measurement of displacement with a basic resolution of approximately 0.2 nm (0.192 nm for the (220) planes in silicon). Figure 5.15 shows a schema of a monolithically manufactured X-ray interferometer made from a single crystal of silicon. Three thin, vertical and equally spaced lamellae are machined, with a flexure stage around the third lamella (A). The flexure stage has a range of a few micrometres and is driven by a piezoelectric actuator (PZT). X-rays are incident at the Bragg angle [10] on lamella B and two diffracted beams are transmitted. Lamella B is analogous to a beam-splitter in an optical interferometer. The transmitted beams are incident on lamella M, which is analogous to the mirrors in a Michelson interferometer. Two more pairs of diffracted beams are transmitted and one beam from each pair is incident on lamella A, giving rise to a fringe pattern. The spacing of this fringe pattern is too small to resolve individual fringes, but when lamella A is translated parallel to B and M, a moiré fringe pattern between the coincident beams and lamella A is produced. Consequently the intensity of the beams transmitted through lamella A varies sinusoidally as lamella A is translated. The displacements measured by an X-ray interferometer are free from the non-linearity of an optical interferometer (see section 5.2.8.4). To calibrate an optical interferometer (and, therefore, measure its non-linearity), the X-ray interferometer is used to make a known displacement that is compared against the optical interferometer under calibration. By servo-controlling the PZT it is possible to hold lamella A in a fixed position or move it in discrete
steps equal to one fringe period [65]. Examples of the calibration of a differential plane mirror interferometer and an optical encoder can be found in [19] and [46] respectively. In both cases periodic errors with amplitudes of less than 0.1 nm were measured once a Heydemann correction (see section 5.2.8.5) had been applied. X-ray interferometry can also be used to calibrate the characteristics of translation stages in two orthogonal axes [66] and to measure nanoradian angles [67]. One limitation of X-ray interferometry is its short range. To overcome this limitation, NPL, PTB and the Istituto di Metrologia 'G. Colonnetti' (now known as the Istituto Nazionale di Ricerca Metrologica – the Italian NMI) collaborated on a project to develop the Combined Optical and X-ray Interferometer (COXI) [68] as a facility for the calibration of displacement sensors and actuators up to 1 mm. The X-ray interferometer has an optical mirror on the side of its moving mirror that is used in the optical interferometer (see Figure 5.16). The optical interferometer is a double-path differential system with one path measuring the displacement of the moving mirror on the X-ray interferometer with respect to the two fixed mirrors above the translation stage. The other path measures the displacement of the mirror (M)
FIGURE 5.16 Schema of a combined optical and X-ray interferometer.
moved by the translation stage with respect to the two fixed mirrors either side of the moving mirror in the X-ray interferometer. Both the optical and X-ray interferometers are servo-controlled. The X-ray interferometer moves in discrete X-ray fringe steps; the servo system for the optical interferometer registers this displacement and compensates by initiating a movement of the translation stage. The displacement sensor being calibrated is referenced to the translation stage and its measured displacement is compared with the known displacements of the optical and X-ray interferometers.
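Displacement follows directly from the number of X-ray fringes counted, each fringe corresponding to one lattice spacing; the fringe count in the sketch below is an invented example, while the 0.192 nm spacing is the silicon (220) value quoted above.

```python
# Displacement from X-ray interferometer fringe counting: one fringe per
# silicon (220) lattice spacing. The fringe count is an example value.

d_220 = 0.192e-9      # silicon (220) lattice spacing / m (approximate)
fringes = 520         # X-ray fringes counted (example)

displacement = fringes * d_220
print(f"displacement = {displacement * 1e9:.2f} nm")   # 99.84 nm
```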
5.8 References

[1] Wilson J S 2005 Sensor technology handbook (Elsevier: Oxford) [2] Fraden J 2003 Handbook of modern sensors: physics, designs and applications (Springer) 3rd edition [3] Bell D J, Lu T J, Fleck N A, Spearing S M 2005 MEMS actuators and sensors: observations of their performance and selection for purpose J. Micromech. Microeng. 15 S153–S154 [4] de Silva C W 2007 Sensors and actuators: control system instrumentation (CRC Press) [5] Birch K P 1990 Optical fringe sub-division with nanometric accuracy Precision Engineering 12 195–198 [6] Peggs G N, Yacoot A 2002 A review of recent work in sub-nanometre displacement measurement using optical and x-ray interferometry Phil. Trans. R. Soc. Lond. A 260 953–968 [7] Winkler W, Danzmann K, Grote H, Hewitson M, Hild S, Hough J, Lück H, Malec M, Freise A, Mossavi K, Rowan S, Rüdiger A, Schilling R, Smith J R, Strain K A, Ward H, Willke B 2007 The GEO 600 core optics Opt. Comms. 280 492–499 [8] Downs M J, Birch K P, Cox M G, Nunn J W 1995 Verification of a polarization-insensitive optical interferometer system with subnanometric capability Precision Engineering 17 1–6 [9] Raine K W, Downs M J 1978 Beam-splitter coatings for producing phase quadrature interferometer outputs Optica Acta 25 549–558 [10] Hecht E 2003 Optics (Pearson Education) 4th edition [11] Knarren B A W H, Cosijns S J A G, Haitjema H, Schellekens P H J 2005 Validation of a single fibre-fed heterodyne laser interferometer with nanometre uncertainty Precision Engineering 29 229–236 [12] Williams D C 1992 Optical methods in engineering metrology (Kluwer Academic Publishers) [13] Hariharan P 2006 Basics of interferometry (Academic Press) 2nd edition [14] Leach R K, Flack D R, Hughes E B, Jones C W 2008 Development of a new traceable areal surface texture measuring instrument Wear 266 552–554
[15] Chen B, Luo J, Li D 2005 Code counting of optical fringes: methodology and realisation Appl. Opt. 44 217–223 [16] Su S, Lu H, Zhou W, Wang G 2000 A software solution to counting and subdivision of moiré fringes with wide dynamic range Proc. SPIE 4222 308–312 [17] Bennett S J 1972 A double-passed Michelson interferometer Opt. Commun. 4 428–430 [18] Downs M J, Nunn J W 1998 Verification of the sub-nanometric capability of an NPL differential plane mirror interferometer with a capacitance probe Meas. Sci. Technol. 9 1437–1440 [19] Yacoot A, Downs M J 2000 The use of x-ray interferometry to investigate the linearity of the NPL Plane Mirror Differential Interferometer Meas. Sci. Technol. 11 1126–1130 [20] Barwood G P, Gill P, Rowley W R C 1998 High-accuracy length metrology using multiple-stage swept-frequency interferometry with laser diodes Meas. Sci. Technol. 9 1036–1041 [21] Bechstein K-H, Fuchs W 1998 Absolute interferometric distance measurements applying a variable synthetic wavelength J. Opt. 29 179–182 [22] Leach R K 1999 Calibration, traceability and uncertainty issues in surface texture metrology NPL Report CLM7 [23] Rosenbluth A E, Bobroff N 1990 Optical sources of non-linearity in heterodyne interferometers Precision Engineering 12 7–11 [24] Bobroff N 1993 Recent advances in displacement measuring interferometry Meas. Sci. Technol. 4 907–926 [25] Cosijns S J A G, Haitjema H, Schellekens P H J 2002 Modelling and verifying non-linearities in heterodyne displacement interferometry Precision Engineering 26 448–455 [26] Augustyn W, Davis P 1990 An analysis of polarization mixing in distance measuring interferometers J. Vac. Sci. Technol. B8 2032–2036 [27] Xie Y, Wu Y 1992 Zeeman laser interferometer errors for high precision measurements Appl. Opt. 31 881–884 [28] Eom T, Kim J, Joeng K 2001 The dynamic compensation of nonlinearity in a homodyne laser interferometer Meas. Sci. Technol. 12 1734–1738 [29] Kim H S, Schmitz T L, Beckwith J F, Rueff M C 2008 A new heterodyne interferometer with zero periodic error and tuneable beat frequency Proc. ASPE, Portland, Oregon, USA, Oct. 136–139 [30] Heydemann P L M 1981 Determination and correction of quadrature fringe measurement errors in interferometers Appl. Opt. 20 3382–3384 [31] Link A, von Martens H-J 1998 Amplitude and phase measurement of the sinusoidal vibration in the nanometer range using laser interferometry Measurement 24 55–67 [32] Usada T, Dobonsz M, Kurosawa T 1998 Evaluation method for frequency characteristics of linear actuators in the sub-mm stroke range using a modified Michelson-type interferometer Nanotechnology 9 77–84
[33] Forbes A B 1987 Fitting an ellipse to data NPL Report DITC 95/87 [34] Mana G 1989 Diffraction effects in optical interferometers illuminated by laser sources Metrologia 26 87–93 [35] Meers B J, Strain K A 1991 Modulation, signal and quantum noise in optical interferometers Phys. Rev. A44 4693–4703 [36] Fujimoto H, Mana G, Nakayama K 2000 Light bounces in two-beam scanning laser interferometers Jpn. J. Appl. Phys. 39 2870–2875 [37] Rai-Choudhury P 2001 MEMS and MOEMS technology and applications (The International Society of Optical Engineering: Washington) [38] Reilly S P, Leach R K, Cuenat A, Awan S A, Lowe M 2006 Overview of MEMS sensors and the metrology requirements for their manufacture NPL Report DEPC-EM 008 [39] Hicks T R, Atherton P D 1997 The nanopositioning book: moving and measuring to better than a nanometre (Queensgate Instruments) [40] Williams C C 1999 Two-dimensional dopant profiling by scanning capacitance microscopy Annual Review of Material Science 29 471–504 [41] Leach R K, Oldfield S, Awan S A, Blackburn J, Williams J M 2004 Design of a bi-directional electrostatic actuator for realising nanonewton to micronewton forces NPL Report DEPC-EM 001 [42] Baxter L K 1996 Capacitance sensors: design and applications (Wiley IEEE Press) [43] Kano Y, Hasebe S, Huang C, Yamada T 1989 New type of linear variable differential transformer position transducer IEEE Trans. Instrum. Meas. 38 407–409 [44] Ford R M, Weissbach R S, Loker D R 2001 A novel DSP-based LVDT signal conditioner IEEE Trans. Instrum. Meas. 50 768–773 [45] Saxena S C, Seksena S B 1989 A self-compensated smart LVDT transducer IEEE Trans. Instrum. Meas. 38 748–753 [46] Yacoot A, Cross N 2003 Measurement of picometre non-linearities in an optical grating encoder using x-ray interferometry Meas. Sci. Technol. 14 148–152 [47] Holzapfel W 2008 Advances in displacement metrology based on encoder systems Proc. ASPE, Portland, Oregon, USA, Oct. 71–74 [48] Heilmann T K, Chen C G, Konkola P T, Schattenburg M L 2004 Dimensional metrology for nanometre scale science and engineering: towards subnanometre accurate encoders Nanotechnology 15 S504–S511 [49] Sanchez-Brea L M, Morlanes T 2008 Metrological errors in optical encoders Meas. Sci. Technol. 19 115104 [50] Sandoz P 2005 Nanometric position and displacement measurement of six degrees of freedom by means of a patterned surface element Appl. Opt. 44 1449–1453 [51] Long H, Hecht J 2005 Understanding fiber optics (Pearson Higher Education)
[52] Slocum A H 1992 Precision machine design (Society of Manufacturing Engineers: USA) [53] Udd E 2006 Fiber optic sensors: an introduction for engineers and scientists (Wiley Blackwell) [54] Domanski A W, Wolinski T R, Bock W J 1995 Polarimetric fibre optic sensors: state of the art and future Proc. SPIE 2341 21–26 [55] Jiang X, Lin D, Blunt L, Zhang W, Zhang L 2006 Investigation of some critical aspects of on-line surface measurement by a wavelength-divisionmultiplexing technique Meas. Sci. Technol. 17 483–487 [56] Yacoot A, Koenders L, Wolff H 2007 An atomic force microscope for the study of the effects of tip-sample interactions on dimensional metrology Meas. Sci. Technol. 18 350–359 [57] Puppin E 2005 Displacement measurements with resolution in the 15 pm range Rev. Sci. Instrum. 76 105107 [58] Kalantar-zadeh K, Fry B 2007 Nanotechnology-enabled sensors (Springer) [59] Haitjema H, Rosielle N, Kotte G, Steijaert H 1998 Design and calibration of a parallel-moving displacement generator for nano-metrology Meas. Sci. Technol. 9 1098–1104 [60] Ottmann S, Sommer M 1989 Absolute length calibration of microindicators in the nanometre range VDU Berichte 761 371–376 [61] Wetzels S F C L, Schellekens P H J 1996 Calibration of displacement sensors with nanometer accuracy using a measuring laser Proc. IMEKO, Lyngby, Denmark, Oct. 91–100 [62] Brand U, Herrmann K 1996 A laser measurement system for the high-precision calibration of displacement transducers Meas. Sci. Technol. 7 911–917 [63] Cosijns S 2004 Displacement laser interferometry with sub-nanometer uncertainty (PhD Thesis: Eindhoven University of Technology) [64] Wilkening G, Koenders L 2005 Nanoscale calibration standards and methods: dimensional and related measurements in the micro- and nanometer range (Wiley VCH) [65] Bergamin A, Cavagnero G, Mana G 1997 Quantised positioning of x-ray interferometers Rev. Sci. Instrum. 68 17–22 [66] Chetwynd D G, Schwarzenberger D R, Bowen D K 1990 Two dimensional x-ray interferometry Nanotechnology 1 19–26 [67] Kuetgens U, Becker P 1998 X-ray angle interferometry: a practical set-up for calibration in the microrad range with nanorad resolution Meas. Sci. Technol. 12 1660–1665 [68] Basile G, Becker P, Bergamin G, Cavagnero G, Franks A, Jackson K, Keutgens U, Mana G, Palmer E W, Robbie C J, Stedman M, Stumpel J, Yacoot A, Zosi G 2000 Combined optical and x-ray interferometer for high precision dimensional metrology Proc. R. Soc. A 456 701–729
CHAPTER 6
Surface topography measurement instrumentation

6.1 Introduction to surface topography measurement

Most manufactured parts rely on some form of control of their surface features. The surface is usually the feature on a component or device that interacts with the environment in which the component is housed or the device operates. The surface topography (and of course the material characteristics) of a part can affect things such as how two bearing parts slide together, how light interacts with the part, or how the part looks and feels. The need to control and, hence, measure surface features becomes increasingly important as we move into a miniaturized world. The surface features can become the dominant functional features of a part and may become large in comparison to the overall size of an object. There is a veritable dictionary-sized list of terminology associated with the field of surface measurement. In this book I have tried to be consistent with ISO specification standards and the NPL good practice guides [1,2]. We define surface topography as the overall surface structure of a part (i.e. all the surface features treated as a continuum of spatial wavelengths), surface form as the underlying shape of a part (for example, a cylinder liner has cylindrical form) and surface texture as the features that remain once the form has been removed (for example, machining marks on the cylinder liner). The manner in which a surface governs the functionality of a part is also affected by the material characteristics and sub-surface physics, or surface integrity. Surface integrity is not covered in this book as it falls under materials science (see [3]). This book will concentrate on the measurement of surface texture, as this is the main feature that will affect MNT parts and processes. In many ways form becomes texture as the overall size of the part approaches that of its surface features, so this distinction is not always clear-cut. In the field of optics manufacturing the surface form and texture often both need to be controlled to nanometric accuracy. A recent example where the macro-world
meets the MNT world is the proposal for a 42 m diameter off-axis ellipsoidal primary mirror for the E-ELT optical telescope [4,5]. This will be made from several 1.42 m across-flats hexagonal mirror segments that need phenomenal control of their surface topography. Such mirrors are not usually thought of as MNT devices, but they clearly need engineering nanometrology. We will only consider surface texture in this book; the measurement of surface form in the optics industry is covered in many other textbooks and references (see for example [6]). Surface texture measurement has been under research for over a century and it was naturally taken up by most of the NMIs as their first MNT subject. However, it is still a hot area of research, especially as the new areal surface texture specification standards have now started to be introduced. The reader is referred elsewhere for more in-depth treatment of the area of surface measurement [7–10]. To rationalize the information content I have split the chapters on surface topography measurement in this book into three. Chapters 6 and 7 discuss the instrumentation used to measure surface topography (see section 6.2 for a discussion of why I have used two instrumentation chapters). Chapter 8 then discusses the characterization of surface topography – essentially how the data that are collected from a surface topography measuring instrument are analysed.
6.2 Spatial wavelength ranges

A chapter on surface topography, primarily surface texture, measurement could include a large range of instrumentation, with stylus and optical instruments at one end of the range and scanning probe and electron microscopes at the other end. However, this would make for a very large chapter that would include a huge range of measurement technologies. I have, therefore, split surface topography into instruments that measure spatial wavelength features that are 500 nm and larger, for example, stylus and most far-field optical methods, and instruments that measure features that are 500 nm and smaller, for example, scanning probe and electron microscopes. This division is not hard and fast, but will suffice to rationalize the information content per chapter. It is worth noting that the magnitude of 500 nm has not been chosen for purely arbitrary reasons; it is also a form of natural split. The stylus instrument is limited to spatial wavelengths that are greater than the stylus radius, typically 2 μm or more, and far-field optical instruments are diffraction limited, typically to around 300 nm or so. Scanning probe instruments are also limited by the radius of the tip, typically tens of nanometres, and electron
FIGURE 6.1 Amplitude-wavelength space depicting the operating regimes for common instruments.
microscopes tend to be used for spatial wavelengths that cannot be measured using far-field optical techniques. Figure 6.1 is an amplitude-wavelength (AW) space graph that shows the range of amplitudes and spatial wavelengths that can be measured using three common instruments. AW space is a useful method for depicting the operating regimes of surface measuring instruments that assumes a surface can be mathematically generated by a series of sinusoidal functions [11–13]. AW space has been extended recently to include the instrument measuring speed and probing force [14].
6.3 Historical background of classical surface texture measuring instrumentation

Before the turn of the nineteenth century the measurement of surface texture was primarily carried out by making use of our senses of sight and touch. By simply looking at a surface one can easily tell the difference between a freshly machined lump of glass and one that has been lapped and fine-polished. Touch was utilized by running a finger or fingernail along a surface to be measured and feeling any texture present on the surface. With a few technological modifications, these two methods for measuring surface texture are still the most widely used today. One of the earliest attempts at controlling surface texture was made in the USA by a company that mounted samples of textures produced by different
methods in cases [15] which were given to the machinist, who was expected to obtain a texture on his or her workpiece as near to that specified as possible. This was a suitable method for controlling the appearance of the workpiece but did not in any way indicate the magnitude of the surface texture. Perhaps the first stylus method was to drag a sapphire needle attached to a pick-up arm across the surface being tested [16]. As with a gramophone, the vibration so produced gave rise to sound in a speaker and variation in the electrical current reading on a voltmeter. The method was calibrated by comparing the measured results to those obtained with a sample having a texture that should have been given to the workpiece. This method did not give rise to many benefits over the visual appearance method, and it would be expected that the amplitude of the current reading would bear a greater relation to the pitch of the texture than to its depth. Few metrologists can doubt the influence on the world of surface texture measurement, and indeed on the entire field of engineering metrology, of two brothers named Thomas Smithies Taylor and William Taylor, plus their associate William S. Hobson. The three men went into business in Leicester, England, in 1886 manufacturing optical, electrical and scientific instruments [17]. In the 1880s, photography was developing rapidly and Taylor, Taylor and Hobson (TTH) started making photographic lenses. The present company still holds a leading position in the world for cinematograph and television lenses. The first metrology instrument manufactured by TTH was a screw diameter measuring machine (originally designed by Eden at NPL). This instrument was used extensively for armaments manufacture during the First World War. In 1945 J. Arthur Rank, the British flour miller and millionaire film magnate, purchased shares in the company. Until 1996, Rank Taylor Hobson was still part of the Rank Organisation. Richard Reason [18], who was employed by TTH, attributed the origin of surface stylus measurements to Gustav Schmaltz of Germany in 1929. Schmaltz [19] used a pivoted stylus drawn over the surface with a very lightweight mirror being attached to the stylus. A beam of light reflected in the mirror traced a graph on a moving photographic chart, providing a magnified, although distorted, outline of the surface profile. In 1934 William Taylor learned of the work of Abbott and Firestone [20] in developing methods for measuring surface texture. In their 1933 paper Abbott and Firestone discuss the use of a similar instrument to that of Schmaltz and name it a profilograph. Abbott's instrument was put on the market in 1936. Schmaltz later produced a microscope (known as the light-section microscope) that observed the surface at an angle of incidence of 45°. This gave
additional magnification (a factor of √2) to that of the microscope but was only suitable for quite coarse surface textures since the optical magnification was necessarily limited. In the mid-1930s the area where accurate surface measurement was required was mainly in finely finished bearing surfaces, such as those used in aircraft engines. The stylus and mirror arrangement was limited to about ×4000 magnification but an order of magnitude more was needed. Therefore, Reason rejected optical magnification and used the principle of a stylus drawn across the surface with a variable inductance pick-up and electronic amplification. Along the lines of Abbott, in 1940 Rolt (at NPL) was pressing for surface texture measurement to produce a single number that would define a surface and enable comparisons to be made. The number most readily obtainable from a profile graph was the average value, obtained using a planimeter. Eventually, TTH put the Talysurf onto the market. (Note that the name Talysurf comes from the Latin talea, which roughly translates to 'measurement', and not from the name Taylor.) This instrument provided a graph and the average surface roughness value read directly from a meter. Figure 6.2 is a photograph of the original Talysurf instrument. Another method for measuring surface texture was due to Linnik of the Mendeleev Institute in Leningrad (1930) and interferometers for this method were made by Hilger and Watts, and by Pitter Valve Engineering in Britain. These interferometric instruments were diffraction limited but paved the
FIGURE 6.2 The original Talysurf instrument (courtesy of Taylor Hobson).
way for a range of non-contacting instruments that is still being increased to date (see section 6.7). In 1947 Reason turned his attention to the measurement of roundness and in 1949 the first roundness testing machine, the Talyrond, was produced. The Talyrond used a stylus arm and electrical transducer operating on the same principle as the Talysurf. These two, plus other instruments, paved the way for the Talystep instrument, which uses the sensitive electronic transducer technique to measure very small steps or discontinuities in a surface and is thus able to measure thin-film steps of near-molecular thickness [21]. Further developments in surface texture measurement will be discussed in the following sections of this chapter.
6.4 Surface profile measurement Surface profile measurement is the measurement of a line across the surface that can be represented mathematically as a height function with lateral displacement, z(x). With a stylus or optical scanning instrument, profile measurement is carried out by traversing the stylus across a line on the surface. With an areal (see section 6.7.3) optical instrument, a profile is usually extracted in software after an areal measurement has been taken (see section 6.5). Figure 6.3 shows the result of a profile measurement extracted from an areal measurement. When using a stylus instrument, the traversing direction for assessment purposes is defined in ISO 4287 [22] as perpendicular to the direction of the lay unless otherwise indicated. The lay is the direction of the predominant surface pattern. Lay usually derives from the actual production process used to manufacture the surface and results in directional striations across the
FIGURE 6.3 Example of the result of a profile measurement.
surface. The appearance of the profile being assessed is affected by the direction of the view relative to the direction of the lay and it is important to take this into account when interpreting surface texture parameters [1].
6.5 Areal surface texture measurement Over the past three decades there has been an increased need to relate surface texture to surface function. Whilst a profile measurement may give some functional information about a surface, to really determine functional information, a three-dimensional, or ‘areal’, measurement of the surface is necessary. Control of the areal nature of a surface allows the manufacturer to alter how a surface interacts with its surroundings. In this way optical, tribological, biological, fluidic and many other properties can be altered [23,24]. For example, control of surface texture is important for:
- surface structuring to encourage the binding of biological molecules, for example proteins, cells or enzymes;
- micro-lens arrays for displays and photo-voltaics;
- prismatic arrays for safety clothing, signage and LED lighting;
- nanostructured surfaces that affect plasmonic interactions for antireflection coatings, waveguides and colour control;
- surfaces of microfluidic channels for flow control, mixing, lab-on-a-chip and biological filtering;
- deterministic patterning to control tribological characteristics such as friction, rheology and wear.
There are inherent limitations with 2D surface measurement and characterization. A fundamental problem is that a 2D profile does not necessarily indicate functional aspects of the surface. For example, consider the most commonly used parameter for 2D surface characterisation, Ra (see section 8.2.7.1). Figure 6.4 shows the profiles of two surfaces, both of which return the same Ra value when filtered under the same conditions. It can be seen that the two surfaces have very different features and consequently very different functional properties. With profile measurement and characterization it is often difficult to determine the exact nature of a topographic feature. Figure 6.5 shows a 2D profile and a 3D surface map of the same component covering the same measurement area. With the 2D profile alone a discrete pit is measured on
FIGURE 6.4 Profiles showing the same Ra with differing height distributions.
FIGURE 6.5 A profile taken from a 3D measurement shows the possible ambiguity of 2D measurement and characterization.
the surface. However, when the 3D surface map is examined, it can be seen that the assumed pit is actually a valley and may have far more bearing on the function of the surface than a discrete pit. The measurement of areal surface texture has a number of benefits over profile measurement. Areal measurements give a more realistic representation of the whole surface and have more statistical significance. Also, there is less chance that significant features will be missed by an areal method and the manufacturer gains a better visual record of the overall structure of the surface.
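To make the Ra ambiguity discussed above concrete, the short Python sketch below evaluates Ra for two synthetic profiles of very different character; the profiles are illustrative inventions, not the data of Figure 6.4.

import numpy as np

# Two synthetic profiles with very different shapes (illustrative only; these
# are not the profiles of Figure 6.4).
x = np.linspace(0.0, 1.0, 2000, endpoint=False)          # normalised evaluation length
spiky = np.where((x * 20) % 1 < 0.1, 1.0, -1.0 / 9.0)    # narrow peaks, wide flats
wavy = np.sin(2 * np.pi * 10 * x)                        # smooth sinusoid

def Ra(z):
    """Arithmetic mean deviation of the profile about its mean line."""
    z = z - np.mean(z)
    return np.mean(np.abs(z))

# Scale the sinusoid so that both profiles return exactly the same Ra value.
wavy *= Ra(spiky) / Ra(wavy)
print(f"Ra(spiky) = {Ra(spiky):.3f}, Ra(wavy) = {Ra(wavy):.3f}")
# Identical Ra, yet peak heights, slopes and bearing behaviour differ greatly.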
6.6 Surface topography measuring instrumentation Over the past one hundred years, and especially in the last thirty years, there has been an explosion in the number of instruments that are available to measure surface texture. The instruments can be divided into three broad
classes: line profiling, areal topography measuring and area-integrating methods [25]. Line profiling methods produce a topographic profile, z(x). Areal topography methods produce topographic images, z(x, y). Often, z(x, y) is developed by juxtaposing a set of parallel profiles. Area-integrating methods measure a representative area of a surface and produce numerical results that depend on area-integrating properties of the surface. This chapter will highlight the most popular instruments available at the time of writing and more instruments are discussed in [7–10]. Scanning probe and electron beam instruments are described in chapter 7.
6.6.1 Stylus instruments Stylus instruments are by far the most common instruments for measuring surface texture today, although optical instruments and scanning probe microscopes are becoming more common in MNT manufacturing facilities. A typical stylus instrument consists of a stylus that physically contacts the surface being measured and a transducer to convert its vertical movement into an electrical signal. Other components can be seen in Figure 6.6 and include: a pickup, driven by a motor and gearbox, which draws the stylus over the surface at a constant speed; an electronic amplifier to boost the signal
FIGURE 6.6 Schema of a typical stylus instrument.
from the stylus transducer to a useful level; and a device, also driven at a constant speed, for recording the amplified signal [1,26,27]. The part of the stylus in contact with the surface is usually a diamond tip with a carefully manufactured shape. Commercial styli usually have tip radii of curvature ranging from 2 µm to 10 µm, but smaller or larger styli are available for specialist applications and form measurement respectively. Owing to their finite shape, some styli on some surfaces will not penetrate into valleys and will give a distorted or filtered measure of the surface texture. Consequently, certain parameters will be more affected by the stylus shape than others. The effect of the stylus shape has been extensively covered elsewhere (see for example [7,28–30]). The effect of the stylus force can have a significant influence on the measurement results and too high a force can cause damage to the surface being measured (see Figure 6.7). ISO 3274 [26] states that the stylus force should be 0.75 mN but this is rarely checked and can vary significantly from the value given by the instrument manufacturer. The value of 0.75 mN was chosen so as not to cause scratches in metals with a 2 µm radius stylus, but it does cause scratches in aluminium. Smaller forces limit the measurement speed due to the risk of ‘stylus flight’. Some researchers ([31,32] and, more recently, [33]) have developed constant-force stylus instruments to improve the fidelity between the surface and the stylus tip plus reduce surface damage and dynamic errors. To enable a true cross-section of the surface to be measured, the stylus, as it is traversed across the surface, must follow an accurate reference path that has the general profile of, and is parallel to, the nominal surface. Such
FIGURE 6.7 Damage to a brass surface due to a high stylus force.
a datum may be developed by a mechanical slideway; for examples see [34] and [35]. The need for accurate alignment of the object being measured is eliminated by the surface datum device, in which the surface acts as its own datum by supporting a spherical skid of large radius of curvature (or sometimes with different radii of curvature in two orthogonal directions) fixed to the end of the hinged pickup. At the front end of the pickup body the skid rests on the specimen surface (note that skids are rarely seen on modern instruments and are not covered by ISO specification standards). All the aspects of stylus instruments are discussed in great detail elsewhere [7]. The main sources of error associated with a stylus instrument are simply listed below:
- surface deformation;
- amplifier distortion;
- finite stylus dimensions;
- lateral deflection;
- effect of skid or other datum;
- relocation upon repeated measurements;
- effect of filters – electrical or mechanical;
- quantization and sampling effects;
- dynamic effects;
- environmental effects;
- effect of incorrect data-processing algorithms.
The lateral resolution of a stylus instrument, or the shortest wavelength, λ, of a sinusoidal signal where the probe can reach the bottom of the surface, is given by

λ = 2π√(ar)     (6.1)

where a is the amplitude of the surface and r is the radius of the stylus tip. Note that equation (6.1) only applies for a sinusoidal profile. Quantization effects and the noise floor of the instrument will determine the axial, or height, resolution. Modern stylus instruments regularly obtain measurements of surface texture with sub-nanometre resolution but struggle to obtain true traceability of these measurements in each of their axes. It is worth pointing out here that many of the pitfalls of mechanical stylus techniques are often highly
exaggerated [36]. For example, the wear on the surface caused by a stylus is often stated as its fundamental limit, but even if a stylus does cause some damage, this may not affect the functionality of the surface. There have been some proposals to speed up the performance of a stylus by vibrating it axially [37]. One drawback of a stylus instrument when operated in an areal scanning mode is the time to take a measurement. It is perfectly acceptable to take several minutes to make a profile measurement, but if the same number of points are required in the y direction (orthogonal to the scan direction) as are measured in the x direction, then measurement times can be up to several hours. For example, if the drive mechanism can scan at 0.1 mm·s⁻¹ and 1000 points are required for a profile of 1 mm, then the measurement will take 10 s. If a square grid of points is required for an areal measurement, then the measurement time will increase to 10⁴ s or approximately 2.7 hours. This sometimes precludes the use of a stylus instrument in a production or in-line application. This is one area where some of the optical instruments offer an advantage over the stylus instruments.
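As a simple illustration of equation (6.1) and of the scan-time arithmetic above, the following Python sketch can be run with the illustrative numbers quoted in the text; the chosen surface amplitude is an assumption made purely for the example.

import math

# Equation (6.1): shortest sinusoidal wavelength whose valleys a stylus tip of
# radius r can still follow, for a sinusoid of amplitude a.
def shortest_wavelength(a, r):
    return 2.0 * math.pi * math.sqrt(a * r)

a = 50e-9   # assumed 50 nm amplitude sinusoid
r = 2e-6    # 2 micrometre tip radius
print(f"lateral limit ~ {shortest_wavelength(a, r) * 1e6:.1f} um")

# Measurement-time estimate from the text: 0.1 mm/s traverse speed, 1 mm
# profile length and a square grid of 1000 by 1000 points.
profile_time = 1e-3 / 0.1e-3            # 10 s per profile
areal_time = 1000 * profile_time        # about 10^4 s
print(f"profile: {profile_time:.0f} s, areal: {areal_time:.0f} s "
      f"({areal_time / 3600:.1f} h)")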
6.7 Optical instruments There are many different types of optical instrument that can measure surface topography, both surface texture and surface form. The techniques can be broken down into two major areas – those that measure the actual surface topography by either scanning a beam or using the field of view (profile or areal methods), and those that measure a statistical parameter of the surface, usually by analysing the distribution of scattered light (area-integrating methods). Whilst both these methods operate in the optical far field, there is a third area of instruments that operate in the near field – these are discussed in chapter 7. The instruments that are discussed in sections 6.7.2 to 6.7.4 are the most common instruments that are available commercially. There are many more optical instruments, or variations on the instruments presented here, most of which are listed in [27] with appropriate references. At the time of writing, only the methods described in sections 6.7.2.2, 6.7.3.1, 6.7.3.2 and 6.7.3.4 are being actively standardized in the appropriate ISO committee (ISO 213 working group 16). Optical instruments have a number of advantages over stylus instruments. They do not physically contact the surface being measured and hence do not present a risk of damaging the surface. This non-contact nature can also lead to much faster measurement times for the optical scanning
instruments. The area-integrating and scattering methods can be faster still, sometimes only taking some seconds to measure a relatively large area. However, more care must be taken when interpreting the data from an optical instrument. Whereas it is relatively simple to predict the output of a stylus instrument by modelling it as a ball of finite diameter moving across the surface, it is not such a trivial matter to model the interaction of an electromagnetic field with the surface. Often many assumptions are made about the nature of the incident beam or the surface being measured that can be difficult to justify in practice [38]. The beam-to-surface interaction is so complex that one cannot decouple the geometry or material characteristics of the surface being measured from the measurement. For this reason, it is often necessary to have an a priori understanding of the nature of the surface before an optical measurement is attempted.
6.7.1 Limitations of optical instruments Optical instruments have a number of limitations, some of which are generic, and some that are specific to instrument types. This section briefly discusses some of these limitations and section 6.12 discusses a number of comparisons that show how the limitations may affect measurements and to what magnitude. Many optical instruments use a microscope objective to magnify the features on the surface being measured. Magnifications vary from 2.5× to 100× depending on the application and the type of surface being measured. Instruments employing a microscope objective will have two fundamental limitations. Firstly, the numerical (or angular) aperture (NA) determines the largest slope angle on the surface that can be measured and affects the optical resolution. The NA of an objective is given by

NA = n sin α     (6.2)

where n is the refractive index of the medium between the objective and the surface (usually air, so n can be approximated by unity) and α is the acceptance angle of the aperture (see Figure 6.8, where the objective is approximated by a single lens). The acceptance angle will determine the slopes on the surface that can physically reflect light back into the objective lens and hence be measured. For instruments based on interference microscopy it may be necessary to apply a correction to the interference pattern due to the effect of the NA. Effectively the finite NA means that the fringe distance is not equal to half the wavelength of the source radiation [39]. This effect also accounts for the aperture correction in gauge block interferometry (see section 4.5.4.6), but it has
FIGURE 6.8 Numerical aperture of a microscope objective lens.
a larger effect here; it may cause a step height to be measured up to 15 % short. This correction can usually be determined by measuring a step artefact with a calibrated height value and it can be directly determined using a grating [40]. The second limitation is the optical resolution of the objective. The resolution determines the minimum distance between two lateral features on a surface that can be measured. The resolution is approximately given by

r = λ / (2NA)     (6.3)
where λ is the wavelength of the incident radiation [41]. For a theoretically perfect optical system with a filled objective pupil, the optical resolution is given by the Rayleigh criterion, where the ½ in equation (6.3) is replaced by 0.61. Yet another measure of the optical resolution is the Sparrow criterion, or the spatial wavelength where the instrument response drops to zero and where the ½ in equation (6.3) is replaced by 0.82. Equation (6.3), and the Rayleigh and Sparrow criteria, can be used almost indiscriminately, so the user should always check which expression has been used where optical resolution is a limiting factor. Also, equation (6.3) sets a minimum value. If the objective is not optically perfect (i.e. aberration-free) or if a part of the beam is blocked (for example, in a Mirau interference objective, or when a steep edge is measured) the value becomes higher (worse).
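The following short sketch compares the three expressions just described for a few numerical apertures; the illustration assumes green light of 0.55 µm wavelength and uses the Sparrow factor of 0.82 quoted above.

# Lateral optical resolution under the three criteria discussed above.
wavelength = 0.55   # micrometres (assumed illustrative value)

def resolution(NA, factor=0.5):
    """factor = 0.5 for equation (6.3), 0.61 for Rayleigh, 0.82 for Sparrow (as quoted)."""
    return factor * wavelength / NA

for NA in (0.3, 0.4, 0.5, 0.95):
    print(f"NA = {NA:4.2f}: eq.(6.3) {resolution(NA):.2f} um, "
          f"Rayleigh {resolution(NA, 0.61):.2f} um, "
          f"Sparrow {resolution(NA, 0.82):.2f} um")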
Table 6.1 Minimum distance between features for different objectives

Magnification    NA     Resolution/µm    Pixel spacing/µm
10×              0.3    1.00             1.75
20×              0.4    0.75             0.88
50×              0.5    0.60             0.35
For some instruments, it may be the distance between the pixels (determined by the image size and the number of pixels in the camera array) in the microscope camera array that determines the lateral resolution. Table 6.1 gives an example for a commercial microscope – for the 50× objective, it is the optical resolution that determines the minimum distance between features, but with the 10× objective it is the pixel spacing. The optical resolution of the objective is an important characteristic of an optical instrument, but its usefulness can be misleading. When measuring surface texture, one must consider the ability to measure the spacing of points in an image along with the ability to accurately determine the heights of features. We need an optical equivalent of equation (6.1) for stylus instruments. This is not a simple task and, at the time of writing, the exact definitions have not been decided on. Also, there may not be a common expression that can be used for all optical instruments. One such definition is the lateral (50 %) resolution or the wavelength at 50 % depth modulation. This is defined as one half the spatial period of a sinusoidal profile for which the instrument response (measured feature height compared to actual feature height) falls to 50 %. The instrument response can be found by direct measurement of the instrument transfer function (see [42] and annex C in [43]). Note that this definition is not without its faults – the value of the lateral (50 %) resolution will vary with the height of the features being measured (as with equation (6.1) for a stylus instrument). Another important factor for optical instruments that magnify the surface being measured is the optical spot size. For scanning type instruments the spot size will determine the area of the surface measured as the instrument scans. To a first approximation, the spot size mimics the action of the tip radius on a stylus instrument, i.e. it acts as a low-pass filter [44]. The optical spot size is given by

d₀ = fλ / w₀     (6.4)

where f is the focal length of the objective lens and w₀ is the beam waist (the radius of the 1/e² irradiance contour at the plane where the wavefront is flat [41]).
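A minimal sketch of the comparison made in Table 6.1 is given below: for each objective the effective lateral limit is whichever is larger of the optical resolution from equation (6.3) and the pixel spacing. A source wavelength of 0.6 µm is assumed, which reproduces the resolution column of the table.

# Decide whether the optical resolution or the camera pixel spacing limits the
# minimum distance between measurable features (cf. Table 6.1).
wavelength = 0.6   # micrometres, assumed

objectives = [     # (magnification, NA, pixel spacing on the surface / um)
    (10, 0.3, 1.75),
    (20, 0.4, 0.88),
    (50, 0.5, 0.35),
]

for mag, na, pixel in objectives:
    optical = wavelength / (2 * na)          # equation (6.3)
    limiter = "pixel spacing" if pixel > optical else "optical resolution"
    print(f"{mag:3d}x: optical {optical:.2f} um, pixel {pixel:.2f} um "
          f"-> limited by {limiter} ({max(optical, pixel):.2f} um)")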
In a non-scanning areal instrument it will be the field of view that determines the lateral area that is measured. In the example given in Table 6.1 the areas measured are 0.3 mm × 0.3 mm and 1.2 mm × 1.2 mm for the 50× and 10× objectives respectively. Many optical instruments, especially those utilizing interference, can be affected by the surface having areas that are made from different materials [45,46]. For a dielectric surface there is a π phase change on reflection (at normal incidence), i.e. a π phase difference between the incident and reflected beams. For materials with free electrons at their surfaces (i.e. metals and semiconductors) there will be a (π − δ) phase change on reflection, where δ is given by

δ = 2n₁k₂ / (1 − n₂² − k₂²)     (6.5)

where n and k are the refractive and absorption indexes of the surrounding air (medium 1) and the surface being measured (medium 2) respectively. For the example of a chrome step on a glass substrate, the difference in phase change on reflection gives rise to an error in the measured height of approximately 20 nm (at a wavelength of approximately 633 nm) when measured using an optical interferometer. A stylus instrument would not make this error in height. In the example of a simple step, it may be possible to correct for the phase change on reflection (if one has prior knowledge of the optical constants of the two materials) but, when measuring a multi-material engineered surface, this may not be so easy to achieve.
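The apparent height offset produced by the phase change on reflection can be estimated from equation (6.5), as in the sketch below. The optical constants used are rough illustrative values for chromium at about 633 nm and a particular sign convention is assumed, so the result should be read as indicative only.

import math

# Apparent height offset between a metal and a dielectric region in an
# interferometric measurement, estimated from the phase term of equation (6.5).
n1 = 1.0             # refractive index of the surrounding air
n2, k2 = 3.1, 3.3    # assumed optical constants of the metal (illustrative)

# magnitude of the extra phase term delta in the (pi - delta) phase change
delta = math.atan(2 * n1 * k2 / (n2**2 + k2**2 - 1))   # radians

wavelength = 633e-9
# A reflected-beam phase error of delta maps to delta * wavelength / (4 pi) of
# apparent height, since the measurement path is traversed twice.
offset = delta * wavelength / (4 * math.pi)
print(f"apparent height offset ~ {offset * 1e9:.0f} nm")  # of the order of the ~20 nm quoted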
FIGURE 6.9 Example of the batwing effect when measuring a step using a coherence scanning interferometer.
enough diffuse scatter. It is also possible to extend the slope limitation with some surfaces using controlled tilting of the sample and specialist image processing [49]. Many optical instruments for measuring surface topography utilize a source that has an extended bandwidth (for example, coherence scanning interferometers and confocal chromatic microscopy). Such instruments can be affected by dispersion in the delivery optics or due to thin films at the sample surface. For example, due to dispersion, coherence scanning interferometers can miscalculate the fringe order, giving rise to what are referred to as 2π discontinuities or ghost steps [50]. Dispersion effects can also be field or surface gradient dependent [51]. Also, all optical instruments will be affected by aberrations caused by imperfections in the optical components and these will affect the measurement accuracy and optical resolution (such systems will not be diffraction limited). Finally it is important to note that surface roughness plays a significant role in measurement quality when using optical instrumentation. Many researchers have found that estimates of surface roughness derived from optical measurements differ significantly from other measurement techniques [52–55]. The surface roughness is generally over-estimated by optical instrumentation (this is not necessarily true when considering area-integrating instruments) and this can be attributed to multiple scattering. Although it may be argued that the local gradients of rough surfaces exceed the limit dictated by the NA of the objective and, therefore, would be classified as beyond the capability of optical instrumentation, measured values with high signal-to-noise ratio are often returned in practice. If, for example, a silicon vee-groove (with an internal angle of approximately 70°) is
FIGURE 6.10 Over-estimation of surface roughness due to multiple scattering in vee-grooves.
measured using coherence scanning interferometry, a clear peak is observed at the bottom of the profile due to multiple reflections (scattering) [56]. Although this example is specific to a highly polished vee-groove fabricated in silicon, it is believed to be the cause of the over-estimation of surface roughness, since a roughened surface can be considered to be made up of many randomly oriented grooves with random angles (see Figure 6.10). Note that recent work has shown that, whilst multiple scattering may cause problems in most cases for optical instruments, it is possible to extend the dynamic range of the instrument by using the multiple scatter information and effectively solving an inverse problem. For example, the authors of [57] have recently discussed the measurement of vertical sidewalls and even undercut features using this method.
6.7.2 Scanning optical techniques Scanning optical techniques measure surface topography by physically scanning a light spot across the surface, akin to the operation of a stylus instrument. For this reason scanning optical instruments suffer from the same measurement-time limitations discussed for stylus instruments (although in many cases the optical instruments can have higher scanning speeds due to their non-contact nature). The measurement will also be affected by the dynamic characteristics of the scanning instrumentation and by the need to combine, or stitch, the optical images together. Stitching can be a significant source of error in optical measurements [58,59] and it is important that the process is well characterized for a given application.
6.7.2.1 Triangulation instruments Laser triangulation instruments measure the relative distance to an object or surface. Light from a laser source is projected, usually via fibre optics, onto the surface, where it scatters. The detector/camera is fitted with optics that focus the scattered light to a spot on a CCD line array or position-sensitive detector. As the topography of the surface changes this
causes the spot to be displaced from one side of the array to the other (see Figure 6.11). The line array is electronically scanned by a digital signal-processor device to determine which of the pixels the laser spot illuminates and to determine where the centre of the electromagnetic energy is located on the array. This process results in what is known as sub-pixel resolution and modern sensors claim to have between five and ten times higher resolution than that of the line array. Triangulation sensors came to the market at the beginning of the 1980s but initially had many problems. For example, they gave very different measurement results for surfaces with different coefficients of reflectance. So, historically laser triangulation sensors were used in applications where a contact method was not practical or perhaps possible, for example, hot, soft or highly polished surfaces. Many of these early problems have now been
FIGURE 6.11 Principle of a laser triangulation sensor.
minimized and modern triangulation sensors are used to measure a large array of different surfaces, often on a production line. Triangulation instruments usually use an xy scanning stage with linear motor drives giving a flatness of travel over the typically 150 mm by 100 mm range of a few micrometres. Over 25 mm the flatness specification is usually better than 0.5 µm. These instruments are not designed to have the high resolution and accuracy of the interferometric, confocal or variable focus methods, having typical height resolutions of 100 nm over several millimetres of vertical range. For these reasons, triangulation instruments are used for measuring surfaces with relatively large structure such as paper, fabric, structured plastics and even road surfaces. The main benefit of triangulation sensors is the speed with which the measurement can be taken and their robustness for in-process applications. Typical instruments are usually much cheaper than their higher-resolution brethren. Triangulation instruments do suffer from a number of disadvantages that need to be borne in mind for a given application. Firstly, the laser beam is focused through the measuring range, which means that the diameter of the laser beam varies throughout the vertical range. This can be important when measuring relatively small features as the size of the spot will act as an averaging filter near the beginning and end of the measuring range as the beam will have a larger diameter here. Also, the measurement depends on an uninterrupted line of sight between laser, surface and camera/detector. Therefore, if a step is to be measured the sensor must be in the correct orientation so that the laser spot is not essentially hidden by the edge [60]. Note that triangulation is one form of what is referred to as structured light projection in ISO 25178 part 6 [25]. Structured light projection is a surface topography measurement method whereby a light image with a known structure or pattern is projected on to a surface and the pattern of reflected light together with knowledge of the incident structured light allows one to determine the surface topography. When the structured light is a single focused spot or a fine line, the technique is commonly known as triangulation.
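The sub-pixel spot location mentioned above is often obtained by centroiding the intensity distribution on the line array; the following sketch shows the idea with entirely hypothetical numbers for the pixel pitch and the height sensitivity of the triangulation geometry.

import numpy as np

pixel_pitch = 10e-6    # detector pixel pitch (assumed)
sensitivity = 0.2      # surface height change per unit spot displacement (assumed)

def spot_centroid(intensity, threshold=0.2):
    """Sub-pixel spot position: centroid of the pixels above a background threshold."""
    intensity = np.where(intensity > threshold * intensity.max(), intensity, 0.0)
    idx = np.arange(intensity.size)
    return np.sum(idx * intensity) / np.sum(intensity)

# synthetic spot image: a Gaussian centred at pixel 52.3 plus a little background
rng = np.random.default_rng(0)
pixels = np.arange(128)
image = np.exp(-0.5 * ((pixels - 52.3) / 3.0) ** 2) + 0.02 * rng.random(128)

shift = spot_centroid(image) - 64.0                      # offset from array centre
print(f"spot centre at pixel {spot_centroid(image):.2f}, "
      f"height change ~ {shift * pixel_pitch * sensitivity * 1e6:.1f} um")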
6.7.2.2 Confocal instruments Confocal instruments, the principle of which is shown in Figure 6.12, differ from a conventional microscope in that they have two additional pinhole apertures; one in front of the light source and one in front of the detector [61]. The pinholes help to increase the lateral optical resolution over the limits defined by equation (6.2) or the Abbe criterion. This so-called super resolution is possible because Abbe assumed an infinitely large field of view. The
FIGURE 6.12 Confocal set-up with (a) object in focus and (b) object out of focus.
optical resolution can be increased further by narrowing down the field of view with the pinholes to an area smaller than the Abbe limit. A second effect of the confocal set-up is the depth discrimination. In a normal bright field microscope set-up the total energy of the image stays constant while changing the focus. In a confocal system the total image energy rapidly decreases when the object is moved out of focus [62] as shown in Figure 6.12b. Only surface points in focus are bright, while out of focus points remain dark. Figure 6.13 shows an example illustrating the difference between normal bright field imaging and confocal imaging. When using a confocal instrument to measure a surface profile, a focus scan is needed [63]. An intensity profile whilst scanning through the focus position is shown in Figure 6.14. The location of the maximum intensity is said to be the height of the surface at this point. The full width at half maximum (FWHM) of the confocal curve determines the depth discrimination [64] and is mainly influenced by the objective’s numerical aperture.
FIGURE 6.13 Demonstration of the confocal effect on a piece of paper: (a) microscopic bright field image; (b) confocal image. The contrast of both images has been enhanced for a better visualization.
FIGURE 6.14 Schematic representation of a confocal curve. If the surface is in focus (position 0) the intensity has a maximum.
Since the confocal principle measures only one point at a time, lateral scanning is needed. The first systems, for example [65], used a scanning stage moving the sample under the confocal light spot, which is very slow. Modern systems use either a pair of scanning mirrors or a Nipkow disk [66] to guide the spot over the measurement area. The Nipkow disk is well known from mechanical television cameras invented in the 1930s. Figure 6.15 shows a classical design of a Nipkow disk. As shown in Figure 6.16 the Nipkow disk is placed at an intermediate image in the optical path of a normal microscope. This avoids the need for two pinholes moving synchronously. Scanning mirrors are mainly used in confocal laser scanning microscopes, because they can effectively concentrate the whole laser energy on one spot.
FIGURE 6.15 Schema of a Nipkow disk. The pinholes rotate through the intermediate image and sample the whole area within one revolution.
FIGURE 6.16 Schema of a confocal microscope using a Nipkow disk.
Their disadvantage is a rather slow scanning speed of typically a few frames per second. The Nipkow disk is best suited for white light systems, because it can guide multiple light spots simultaneously through the intermediate image of the field of view. It does integrate the whole area within one revolution. Current commercial systems have scanning rates of about 100 frames per second, making a full 3D scan with typically 200 to 300 frames in a few seconds. Confocal microscopes suffer from the same limitations as all microscopic instruments as discussed in section 6.7.1. The typical working distance of a confocal microscope depends on the objective used. Microscope objectives are available with working distances from about 100 µm to a few millimetres. With increasing working distance the numerical aperture normally decreases. This results in reduced lateral and axial resolution. Depending on the application the objective parameters have to be chosen carefully. Low values of NA below 0.4 are in general not suitable for roughness analysis. Low apertures can be used for geometric analysis if the slope angle, β, is lower than the aperture angle, α, from equation (6.2). For an NA of 0.4, β is approximately 23°. The vertical measurement range is mainly limited by the working distance of the objective and thus by the NA. Therefore, it is not possible to make high-resolution measurements in deep holes. The field of view is limited by the objective magnification. Lower magnifying objectives with about 10× to 20× magnification provide a larger field of view of approximately one square millimetre. High magnifying objectives with 100× magnification have a field of view of about 150 µm by 150 µm. The lateral resolution is normally proportional to the value given by equation (6.3), if it is not limited by the pixel resolution of the camera. It ranges from above 0.3 µm to about 1.5 µm. The depth resolution can be given by the repeatability of axial measurements and at best has a standard deviation of a few nanometres on smooth surfaces and in suitable environments.
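The dependence of measurable slope and field of view on the objective can be made explicit with a small sketch; the sensor (intermediate image) size used to estimate the field of view is an assumed illustrative value.

import math

sensor_size_mm = 15.0   # assumed intermediate-image size used to estimate the field of view

for NA, mag in ((0.3, 10), (0.4, 20), (0.5, 50), (0.95, 100)):
    alpha = math.degrees(math.asin(NA))   # acceptance angle from equation (6.2), n ~ 1
    fov_um = sensor_size_mm / mag * 1000  # approximate field of view
    print(f"{mag:3d}x, NA {NA:.2f}: max slope ~ {alpha:4.1f} deg, "
          f"field of view ~ {fov_um:.0f} um")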
6.7.2.2.1 Confocal chromatic probe instrument The confocal chromatic probe instrument [67] avoids the rather time-consuming depth scan by using a non-colour-corrected lens and white light illumination. Due to dispersion, light of different wavelengths is focused at different distances from the objective, as shown in Figure 6.17. By analysing the reflected light with a spectrometer, the confocal curve can be recovered from the spectrum. Closer points are imaged to the blue end of the spectrum, while farther points are imaged to the red end [68]. The spectrometer
FIGURE 6.17 Chromatic confocal depth discrimination.
comprises a prism or an optical grating and a CCD line sensor to analyse the spectral distribution. The chromatic principle allows the design of remote sensor heads, coupled only with an optical fibre to the illumination and analysis optics. This is a significant advantage when using chromatic sensors in dirty or dangerous environments. Another advantage of chromatic sensors is the freedom to design the strength of depth discrimination, not only by changing the aperture, but also by choosing a lens glass type with appropriate dispersion. Pinhole confocal systems tend to have a smaller working distance with increasing aperture and better depth discrimination. Chromatic systems can be designed to have a large working distance up to a few centimetres while still being able to resolve micrometres in depth. Chromatic systems seem to be very elegant and flexible in design and application, so why are other principles used in practice? The biggest drawback of chromatic sensors is their limitation to a single measurement point. There has been no success yet in creating a rapidly scanning area sensor. Multi-point sensors with an array of some ten by ten points are available but are still far from providing a rapid areal scan.
6.7.2.3 Point autofocus profiling A point autofocus instrument measures surface texture by automatically focusing a laser beam on a point on the specimen surface, moving the specimen surface in a fixed measurement pitch using an xy scanning stage, and measuring the specimen surface height at each focused point.
FIGURE 6.18 Schema of a point autofocus instrument.
Figure 6.18 illustrates a typical point autofocus instrument operating in beam offset autofocus mode. A laser beam with high focusing properties is generally used as the light source. The input beam passes through one side of the objective, and the reflected beam passes through the opposite side of the objective after focusing on a specimen surface at the centre of the optical axis. This forms an image on the autofocus sensor after passing through an imaging lens. Figure 6.18 shows the in-focus state. The coordinate value of the focus point is determined by the xy scanning stage position and the height is determined from the Z positioning sensor. Figure 6.19 shows the principle of point autofocus operation. Figure 6.19a shows the in-focus state where the specimen is in focus and Figure 6.19b shows the defocus state where the specimen is out of focus. The surface being measured is displaced downward (Z), and the laser beam position on the autofocus sensor changes accordingly (W). Figure 6.19c shows the autofocus state where the autofocus sensor detects the laser spot displacement and
FIGURE 6.19 Principle of point autofocus operation.
feeds back the information to the autofocus mechanism in order to adjust the objective back to the in-focus position. The specimen displacement, Z1, is equal to the moving distance of the objective, Z2, and the vertical position sensor (typically a linear scale is used) obtains the height information of the specimen [70]. The disadvantage of the point autofocus is that it requires a longer measuring time than other non-contact measuring methods since it must obtain the coordinate values of each point by moving the mechanism of the instrument (as with chromatic confocal – see section 6.7.2.2.1). Also, the accuracy of the instrument will be determined by the laser spot size (see section 6.7.1) because of the uneven optical intensity within the laser spot (speckle) that generates focal shift errors [71]. Point autofocus instruments can have relatively high resolution. The lateral resolution is potentially diffraction limited but the axial resolution is determined by the resolution of the master scale, which can be down to 1 nm. The range is determined by the xy and z scanner, and can be typically 150 mm by 150 mm by 10 mm. The method is almost immune to the surface
reflectance properties since the autofocus sensor detects the position of the laser spot (the limit is typically a reflectivity of 1 %). The point autofocus instrument irradiates the specimen surface with the laser beam, which is scattered in various directions by the surface roughness of the specimen. This enables the measurement of surface slope angles that are greater than the half aperture angle of the objective (but less than 90°) by capturing the scattered light that reaches the autofocus sensor.
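The closed-loop behaviour described above can be caricatured in a few lines: the sensor offset W is driven back to zero by moving the objective, and the objective travel Z2, read from the linear scale, then reproduces the surface displacement Z1. The gain and sensor sensitivity below are hypothetical.

# Conceptual sketch of the point autofocus servo (all parameters hypothetical).
def autofocus_height(surface_height, sensitivity=2.0, gain=0.5, tol=1e-10):
    """Return the height reported by the linear scale once the servo has settled."""
    objective_z = 0.0
    for _ in range(1000):
        defocus = surface_height - objective_z            # residual focus error Z
        spot_offset = sensitivity * defocus               # spot displacement W on the sensor
        if abs(spot_offset) < tol:
            break
        objective_z += gain * spot_offset / sensitivity   # feed back to the objective
    return objective_z                                    # Z2, read from the scale

print(autofocus_height(1.234e-6))   # ~1.234e-06 m: the reported height equals Z1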
6.7.3 Areal optical techniques 6.7.3.1 Focus variation instruments Focus variation combines the small depth of focus of an optical system with vertical scanning to provide topographical and colour information from the variation of focus [69]. Figure 6.20 shows a schematic diagram of a focus
FIGURE 6.20 Schema of a focus variation instrument.
variation instrument. The main component of the system is a precision optical arrangement that contains various lens systems that can be equipped with different objectives, allowing measurements with different lateral resolution. With a beam-splitting mirror, light emerging from a white light source is inserted into the optical path of the system and focused onto the specimen via the objective. Depending on the topography of the specimen, the light is reflected into several directions. If the topography shows diffuse reflective properties, the light is reflected equally strongly into each direction. In the case of specular reflections, the light is scattered mainly into one direction. All rays emerging from the specimen and hitting the objective lens are bundled in the optics and gathered by a light-sensitive sensor behind the beam-splitting mirror. Due to the small depth of field of the optics, only small regions of the object are sharply imaged. To perform a complete detection of the surface with full depth of field, the precision optical arrangement is moved vertically along the optical axis while continuously capturing data from the surface. This ensures that each region of the object is sharply focused. Algorithms convert the acquired sensor data into 3D information and a true colour image with full depth of field. This is achieved by analysing the variation of focus along the vertical axis. Various methods exist to analyse this variation of focus, usually based on the computation of the sharpness at a specific position. Typically, these methods rely on evaluating the sensor data in a small local area. In general, the sharper an object point is focused, the larger the variation of sensor values in a local neighbourhood. As an example, the standard deviation of the sensor values could be used as a simple measure for the sharpness. The vertical resolution of a focus variation instrument depends on the chosen objective and can be as low as 10 nm. The vertical scan range depends on the working distance of the objective and ranges from a few millimetres to approximately 20 mm or more. The vertical resolution is not dependent upon the scan height, which can lead to a high dynamic range. The xy range is determined by the objective and typically ranges from 0.14 mm by 0.1 mm to 5 mm by 4 mm for a single measurement. By using special algorithms and a motorised stage the xy range can be increased to around 100 mm by 100 mm. In contrast to other optical techniques that are limited to coaxial illumination, the maximum measurable slope angle is not dependent on the numerical aperture of the objective. Focus variation can be used with a large range of different illumination sources (such as a ringlight), which allows the measurement of slope angles exceeding 80°. Focus variation is applicable to surfaces with a large range of different optical reflectance values. Specimens can vary from shiny to diffuse
reflecting, from homogeneous to compound materials, and from smooth to rough surface properties (but see below). Focus variation overcomes limited measurement capability in terms of reflectance by using a combination of a modulated illumination source, control of the sensor parameters and integrated polarization. In addition to the scanned height data, focus variation also delivers a colour image with full depth of field that is registered to the 3D data points. Since focus variation relies on analysing the variation of focus, it is only applicable to surfaces where the focus varies sufficiently during the vertical scanning process. Surfaces not fulfilling this requirement, such as transparent specimens or components with only a small local roughness, are difficult and sometimes impossible to measure. Typically, focus variation gives repeatable measurement results for surfaces with a local Ra of 10 nm or greater at a λc of 2 µm (see section 8.2.3).
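A minimal sketch of the focus-variation principle, using the local standard deviation suggested above as the sharpness measure, is given below. The synthetic image stack (a textured, tilted surface blurred according to its defocus) is purely illustrative.

import numpy as np
from scipy.ndimage import uniform_filter

def focus_measure(image, half_window=2):
    """Local standard deviation over a (2*half_window + 1)^2 neighbourhood."""
    size = 2 * half_window + 1
    mean = uniform_filter(image, size=size)
    mean_sq = uniform_filter(image ** 2, size=size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def height_map(stack, z_positions):
    """stack: (n_z, ny, nx) images recorded at scan positions z_positions."""
    sharpness = np.stack([focus_measure(img) for img in stack])
    return z_positions[np.argmax(sharpness, axis=0)]   # z of maximum sharpness per pixel

# toy demonstration: a tilted textured surface, blurred more as it defocuses
z_positions = np.linspace(0.0, 10.0, 21)
true_height = np.tile(np.linspace(2.0, 8.0, 32), (32, 1))
texture = np.random.default_rng(1).random((32, 32))
stack = np.array([texture * np.exp(-((z - true_height) / 1.5) ** 2) for z in z_positions])

recovered = height_map(stack, z_positions)
print(f"mean height error = {np.abs(recovered - true_height).mean():.2f} (scan units)")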
6.7.3.2 Phase-shifting interferometry A phase-shifting interferometer (PSI) consists of an interferometer integrated with a microscope (see Figure 6.21) [72,43]. Within the interferometer, a beam-splitter directs one beam of light down a reference path, which has a number of optical elements including an ideally flat and smooth mirror from which the light is reflected. The beam-splitter directs a second beam of
FIGURE 6.21 Schema of a phase-shifting interferometer.
light to the sample where it is reflected. The two beams of light return to the beam-splitter and are combined forming an image of the measured surface superimposed by an interference pattern on the image sensor array (camera). Usually a PSI uses a co-axial alignment, i.e. the two beams propagate in the same direction, but off-axis arrangements can be used [73]. The image of the surface can be either focused onto the detector or not. In the latter case a digital propagation algorithm is employed allowing numerical focusing [74]. The optical path in the reference arm is adjusted to give the maximum interference contrast. During measurement, several known shifts between the optical path to the measured surface and the optical path to the reference mirror are introduced and produce changes in the fringe pattern. Phase maps are then constructed from each shifted interferogram. There are several ways to shift the difference in optical paths. For example, the objective and reference mirror of the system are translated with the use of a piezoelectric actuator. Finally, the vertical height data are deduced from the phase maps. For specimens with vertical heights greater than half the wavelength [72], the 2π ambiguity can be suppressed by phase-unwrapping algorithms or the use of dual-wavelength methods [73,75]. PSI instruments usually come in one of two configurations depending on the arrangement of the microscope objective. Figure 6.22 shows a Mirau
FIGURE 6.22 Schematic diagram of a Mirau objective.
FIGURE 6.23 Schematic diagram of a Linnik objective.
configuration, where the components A, B and C are translated with reference to D, and Figure 6.23 shows a Linnik configuration, where components B and C are translated with reference to D and E. The Mirau is more compact and needs less adjustment than the Linnik. For both objectives, there must be white light interference when both the reference mirror and the object are in focus. For the Mirau objective this is accomplished in one setting of the tilt and position of the reference mirror. For the Linnik objective, both the reference mirror and the object must be in focus, but in addition both arms of the Linnik objective must be made equal within a fringe. Also, a Linnik objective consists of two objectives that must be matched to each other, at least doubling the manufacturing costs. An advantage of the Linnik is that no central area of the objective is blocked and no space underneath the objective is needed for attaching an extra mirror and beam-splitter. Therefore, with the Linnik objective, magnifications and resolutions can be achieved that match those of the highest-resolution standard optical microscope objectives. A further objective is based on a Michelson interferometer (see section 4.4.1). These are produced by placing a cube beam-splitter under the objective lens, directing some of the beam to a reference surface. The advantage of the Michelson configuration is that the central part of the objective is not blocked. However, the cube beam-splitter is placed in a convergent part of the beam, which leads to aberrations and limits the instrument to small numerical apertures and large working distances.
The light source used for PSI measurements typically consists of a narrow band of optical wavelengths as provided by a laser, light-emitting diode (LED), narrow-band filtered white light source, or spectral lamp. The accuracy of the central wavelength and the bandwidth of the illumination are important to the overall accuracy of the PSI measurement. The measurement of a surface profile is accomplished by using an image sensor composed of a linear array of detection pixels. Areal measurements of the surface texture may be accomplished by using an image sensor composed of a matrix array of detection pixels. The spacing and width of the image sensor pixels are important characteristics, which determine attributes of instrument lateral resolution (see section 6.7.1). PSI instruments can have sub-nanometre resolution and repeatability but it is very difficult to determine their accuracy, as this will be highly dependent on the surface being measured. Most of their limitations were discussed in section 6.7.1. Most PSI instruments usually require that adjacent points on a surface have a height difference of less than λ/4. The range of PSI is limited to one fringe, or approximately half the central wavelength of the light source, so PSI instruments are usually only used for measuring approximately flat surfaces (a rule of thumb is that only surfaces with an Ra or Sa less than λ/10 would be measured using PSI). This limitation can be overcome by combining the PSI instrument with a CSI instrument (see section 6.7.3.4), usually referred to as a vertical scanning mode. The accuracy of a PSI instrument can be enhanced to allow highly flat surfaces to be measured (surfaces that are flatter than the reference surface) using a process known as reference surface averaging [76]. Alternatively, it may be possible to characterize the reference surface using a liquid surface [77]. The xy range will be determined by the field of view of the objective and the camera size. Camera pixel arrays range from 256 by 256 to 1024 by 1024 or more, and the xy range can be extended to several tens of centimetres using scanning stages and stitching software. PSI instruments can be used with samples that have very low optical reflectance values (below 5 %), although the signal-to-noise ratio is likely to fall as the reflectance is decreased. An optimal contrast is achieved when the reflectance values of the reference and the measured surface match (see section 4.3.3).
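One textbook way of constructing the phase map, the four-step algorithm with π/2 shifts, is sketched below; it is given only as an example of the principle, not as the algorithm of any particular commercial instrument.

import numpy as np

wavelength = 633e-9

def psi_height(I0, I1, I2, I3):
    """Four interferograms recorded with phase shifts of 0, pi/2, pi and 3*pi/2."""
    wrapped = np.arctan2(I3 - I1, I0 - I2)         # wrapped phase in (-pi, pi]
    unwrapped = np.unwrap(wrapped, axis=-1)        # simple row-wise unwrapping
    return unwrapped * wavelength / (4 * np.pi)    # reflection doubles the path: 4*pi

# synthetic example: a 40 nm high ramp across the field of view
x = np.linspace(0.0, 40e-9, 256)
phase = 4 * np.pi * x / wavelength
frames = [1.0 + np.cos(phase + k * np.pi / 2) for k in range(4)]
print(f"recovered ramp height ~ {psi_height(*frames)[-1] * 1e9:.1f} nm")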
6.7.3.3 Digital holographic microscopy A digital holographic microscope (DHM) is an interferometric microscope very similar to a PSI (see section 6.7.3.2), but with a small angle between the propagation directions of the measurement and reference beams as shown in Figure 6.24 [78]. The acquired digital hologram, therefore, consists of a spatial amplitude modulation with successive constructive and destructive
FIGURE 6.24 Schematic diagram of DHM with beam-splitter (BS), mirrors (M), condenser (C), microscope objective (MO) and lens in the reference arm (RL) used to
interference fringes. In the frequency domain, the difference between the coaxial geometry (PSI) and the off-axis geometry (DHM) is in the position of the frequency orders of the interference. In PSI, because the three orders (the zeroth-order or non-diffracted wavefront, and the ±1 orders, or the real and virtual images) are superimposed, several phase shifts are necessary. In contrast, in DHM the off-axis geometry spatially separates the different frequency orders, which allows simple spatial filtering to reconstruct the phase map from a single digital hologram [79]. DHM is, therefore, a real-time phase imaging technique less sensitive to external vibrations than PSI.
In most DHM instruments, in contrast to most PSI instruments, the image of the object formed by the microscope objective is not focused on the camera. Therefore, DHM needs to use a numerical wavefront propagation algorithm that can use numerical optics to increase the depth of field [80], or compensate for optical aberrations [81]. The choice of source for DHM is large but is dictated by the source coherence length. A source with a short coherence length is preferred to minimize parasitic interference, but the coherence length has to be sufficiently large to allow interference over the entire field of view of the detector. Typically, coherence lengths of several micrometres are necessary. DHM has a similar resolution to PSI [82] and is limited in range to half the central wavelength of the light source when a single wavelength is used. However, dual-wavelength [83] or multiple-wavelength DHM [84] allows the vertical range to be increased to several micrometres. For low magnification, the field of view and the lateral resolution depend on the microscope objective and the camera pixel size; but for high magnification, the resolution is diffraction limited down to 300 nm with a 100× objective. As with PSI, scanning stages and stitching software can be used to increase the field of view.
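The spatial filtering of a single off-axis hologram can be illustrated in one dimension as below: the interference term sits around the carrier frequency, so isolating that sideband in the Fourier domain and removing the carrier recovers the object phase. The carrier frequency and object phase are synthetic choices for the example.

import numpy as np

N = 256
x = np.arange(N) / N
carrier = 32                                           # carrier fringes per field (assumed)
object_phase = 2.0 * np.exp(-((x - 0.5) / 0.1) ** 2)   # smooth phase feature (radians)

hologram = 1.0 + np.cos(2 * np.pi * carrier * x + object_phase)

spectrum = np.fft.fft(hologram)
sideband = np.zeros_like(spectrum)
sideband[carrier - 16:carrier + 16] = spectrum[carrier - 16:carrier + 16]   # keep the +1 order

field = np.fft.ifft(sideband)                          # complex field of the +1 order
recovered = np.unwrap(np.angle(field)) - 2 * np.pi * carrier * x
recovered -= recovered[0]
error = np.max(np.abs(recovered - (object_phase - object_phase[0])))
print(f"peak phase reconstruction error = {error:.3f} rad")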
6.7.3.4 Coherence scanning interferometry The configuration of a coherence scanning interferometer (CSI) is similar to that of a phase-shifting interferometer but in CSI a broadband (white light) or extended (many independent point sources) source is utilized [2,85]. CSI is often referred to as vertical scanning white light interferometry or white light scanning interferometry. With reference to Figure 6.25 the light from the broadband light source is directed towards the objective lens. The beamsplitter in the objective lens splits the light into two separate beams. One beam is directed towards the sample and one beam is directed towards an internal reference mirror. The two beams recombine and the recombined light is sent to the detector. Due to the low coherence of the source, the optical path length to the sample and the reference must be almost identical, for interference to be observed. Note that coherence is the measure of the average correlation between the values of a wave at any pair of times, separated by a given delay [41]. Temporal coherence tells us how monochromatic a source is. In other words, it characterizes how well a wave can interfere with itself at a different time (coherence in relation to CSI is discussed in more detail in [86] and in general in section 4.3.4). The detector measures the intensity of the light as the optical path is varied in the vertical direction (z axis) and finds the interference maximum. Each pixel of the camera
FIGURE 6.25 Schema of a coherence scanning interferometer.
measures the intensity of the light and the fringe envelope obtained can be used to calculate the position of the surface. A low-coherence source is used rather than monochromatic light because it has a shorter coherence length and, therefore, avoids ambiguity in determining the fringe order. Different instruments use different techniques to control the variation of the optical path (by moving either the object being measured, the scanning head or the reference mirror) and some instruments have a displacement-measuring interferometer to measure its displacement [87]. As the objective lens is moved a change of intensity due to interference will be observed for each camera pixel when the distance from the sample to the beam-splitter is the same as the distance from the reference mirror to the beam-splitter (within the coherence length of the source). If the objective is moved downwards the highest points on the surface will cause interference first. This information can be used to build up a three-dimensional map of the surface. Figure 6.26 shows how the interference is built up at each pixel in the camera array. There are a number of options for extracting the surface data from the CSI optical phase data. Different fringe analysis methods give advantages with
FIGURE 6.26 Schematic of how to build up an interferogram on a surface using CSI.
different surface types, and many instruments offer more than one method. These are simply listed here but more information can be found in [85] and [86]. The fringe analysis methods include:
- envelope detection (see the sketch after this list);
- centroiding;
- envelope detection with phase estimation;
- scan domain convolution;
- frequency domain analysis.
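As an illustration of the first of these options, the sketch below demodulates the z-scan signal of a single pixel with a Hilbert transform and places the surface at the envelope maximum; the source parameters are illustrative assumptions, and in practice phase estimation would refine the result.

import numpy as np
from scipy.signal import hilbert

wavelength = 0.6e-6      # mean wavelength of the broadband source (assumed)
coherence_len = 1.5e-6   # sets the width of the fringe envelope (assumed)

z = np.arange(-5e-6, 5e-6, 20e-9)   # scan positions for one camera pixel
surface_height = 0.8e-6             # true height at this pixel

envelope = np.exp(-((z - surface_height) / coherence_len) ** 2)
signal = envelope * np.cos(4 * np.pi * (z - surface_height) / wavelength)

detected = np.abs(hilbert(signal))          # recovered fringe envelope
height = z[np.argmax(detected)]
print(f"estimated height = {height * 1e6:.3f} um")   # close to the true 0.8 um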
CSI instruments can have sub-nanometre resolution and repeatability but it is very difficult to determine their accuracy, as this will be highly dependent on the surface being measured. Most of their limitations were discussed in section 6.7.1 and are reviewed in [47]. The range of the optical path actuator, usually around 100 µm, will determine their axial range, although this can be increased to several millimetres using a long-range actuator and stitching software. The xy range will be determined by the field of view of the objective and the camera size. Camera pixel arrays range from 256 by 256 to 1024 by 1024 or more, and the xy range can be extended to several tens of centimetres using scanning stages and stitching software. CSI instruments can be used with samples that have very low optical reflectance values (below 5 %), although, as with PSI, the signal-to-noise ratio is likely to fall as the reflectance is decreased. To avoid the need to scan in the axial direction, some CSI instruments operate in a dispersive mode. Dispersive CSI generates the spectral distributions of the interferograms directly by means of dispersive optics without
the need for depth scanning [88]. This method is well suited to in-line applications with high immunity to external vibration and high measurement speed. Researchers have recently developed a CSI technique that can be used to measure relatively large areas (several centimetres) without the need for lateral scanning [89]. As such a full-field method does not use a microscope objective, the lateral resolution is necessarily limited. Some CSI instruments have been configured to measure the dynamic behaviour of oscillating structures by using a stroboscopic source to essentially freeze the oscillating structure [90]. (Note that confocal instruments have also been used to measure the motion of vibrating structures [91].) CSI (and PSI) is often used for the measurement of the thickness of optical films by making use of the interference between reflections from the top surface and the different film interfaces [92,93]. Recent advances can also measure the individual thickness of a small number of films in a multilayer stack and the interfacial surface roughness [94].
6.7.4 Scattering instruments There are various theories to describe the scattering of light from a surface (see [95] for a thorough introduction and review). The theories are based on both scalar and vector scattering models and many were developed to describe the scattering of radio waves from the ocean surface. Light scattered from a surface can be both specular, i.e. the reflection as predicted by geometrical optics, and diffuse, i.e. reflections where the angle of reflection is not equal to the angle of incidence. Diffuse reflection is caused by surface irregularities, local variations in refractive index and any particulates present at the surface (for this reason cleanliness is important). From the theoretical models, the distribution of light scattered from smooth surfaces is found to be proportional to a statistical parameter of the surface (often Rq or Sq), within a finite bandwidth of spatial wavelengths [96,97]. Hence, scattering instruments do not measure the actual peaks and valleys of the surface texture; rather they measure some aspect of the surface height distribution. There are various methods for measuring light scatter and there are many commercially available instruments [98,99]. As scattering instruments sample over an area (they are area-integrating methods) they can be very fast and relatively immune to environmental disturbance. For these reasons, scattering methods are used extensively in on-line or in-process situations, for example measuring the effects of tool wear during a cutting process or damage to optics during polishing. It can be difficult to associate an absolute value to a surface parameter measured using a scattering technique, so scattering is often used to investigate process change.
The function that describes the manner in which light is scattered from a surface is the bi-directional scatter distribution function (BSDF) [95]. The reflective properties of a surface are governed by the Fresnel equations [41]. Based upon the angle of incidence and material properties of a surface (optical constants), the Fresnel equations can be used to calculate the intensity and angular distribution of the reflected waves. The BSDF describes the angular distribution of scatter. The total integrated scatter (TIS) is equal to the light power scattered into the hemisphere above the surface divided by the power incident on the surface. The TIS is equal to the integral of the BSDF over the scattering hemisphere multiplied by a correction factor (known as the obliquity factor). Reference [100] derived a relationship between the TIS and Rq (or Sq) given by

$$ Rq \approx \frac{\lambda}{4\pi}\sqrt{TIS} \qquad (6.6) $$
where the TIS is often approximated by the quotient of the diffusely scattered power to the specularly reflected power (a short numerical sketch of this calculation is given after the list of assumptions below). The instrumentation for measuring TIS [101] consists of a light source (usually a laser), various filters to control the beam size, a device for collecting the scattered light, and detectors for measuring the scattered light and specularly reflected light. The scattered light is captured either using an integrating sphere or a mirrored hemisphere (a Coblentz sphere). Often phase-sensitive detection techniques are used to reduce the noise when measuring optical power. An integrating sphere is a sphere with a hole for the light to enter, another hole opposite where the sample is mounted and a third position inside the sphere where the detector is mounted (see Figure 6.27). The interior surface of the sphere is coated with a diffuse white material. Various corrections have to be applied to integrating sphere measurements due to effects such as stray light and the imperfect diffuse coating of the sphere [102]. With a Coblentz sphere the light enters through a hole in the hemisphere at an angle just off normal incidence, and the specularly reflected light exits through the same hole. The light scattered by the surface is collected by the inside of the hemisphere and focused onto a detector. A number of assumptions are made when using the TIS method. These include:
- the surface is relatively smooth (λ >> 4πRq);
- most of the light is scattered around the specular direction;
- scattering originates solely at the top surface, and is not attributable to material inhomogeneity or multilayer coatings;
- the surface is clean.
FIGURE 6.27 Integrating sphere for measuring TIS.
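As a rough numerical illustration of equation (6.6), the following sketch converts hypothetical diffuse and specular power readings into an Rq estimate; the power values and the wavelength are invented for the example.

```python
# Illustrative sketch of equation (6.6) with hypothetical detector readings.
import math

wavelength = 633e-9       # illumination wavelength (m), hypothetical He-Ne source
specular_power = 980e-6   # specularly reflected power (W), hypothetical
diffuse_power = 1.5e-6    # diffusely scattered power (W), hypothetical

# TIS approximated as the quotient of diffuse to specular power (see text)
tis = diffuse_power / specular_power

# Equation (6.6): Rq ~ (lambda / (4*pi)) * sqrt(TIS)
rq = (wavelength / (4 * math.pi)) * math.sqrt(tis)
print(f"TIS = {tis:.2e}, estimated Rq = {rq*1e9:.2f} nm")
```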
TIS instruments are calibrated by using a diffusing standard, usually made from a white diffusing material (a material with a Lambertian scattering distribution) [103]. When comparing the Rq value from a TIS instrument to that measured using a stylus instrument, or one of the optical instruments described in sections 6.7.2 and 6.7.3, it is important to understand the bandwidth limitations of the instruments. The bandwidth limitations of the TIS instrument will be determined by the geometry of the collection and detection optics (and ultimately by the wavelength of the source) [104]. TIS instruments can measure Rq values that range from a few nanometres to a few micrometres (depending on the source). Their lateral resolution is diffraction limited, but often the above bandwidth limits will determine the lower spatial wavelengths that can be sampled. Another scattering method that is commercially available is angle-resolved scatter (ARS) [97,105,106]. However, ARS methods tend to be more complicated than TIS and the theory relating the ARS to a surface roughness
parameter is not so clear. Basically, the angular distribution of the scattered light is measured using either a goniophotometer-type instrument or a dedicated scatterometer (see [98] for examples). The angular distribution of the scattered light can be expressed as the product of an optical factor and a surface factor. The optical factor can be calculated from the illuminating wavelength, the angles of incidence and scattering, the material properties of the surface, and the polarization of the incident and scattered beams. The surface factor is called the power spectral density (PSD) function and is a function of the surface roughness. From the PSD, quantitative values for the height and spatial wavelength distributions can be obtained, although a good a priori model of the surface is required for accurate measurements. It is also possible to extract the bi-directional reflectance distribution function (BRDF) from ARS data. The range and resolution of ARS instruments are very similar to those for TIS instruments. As with TIS instruments, ARS instruments do not measure the actual surface topography, but measure some aspect of the height and spatial wavelength distributions. For this reason ARS instruments are usually employed where process change needs to be monitored. TIS and ARS instruments are limited in the range of heights that they can measure. With visible illumination the heights are usually limited to 100 nm or less. The use of infrared illumination sources can increase this range limit. However, to employ scattering to measure larger surface heights, it is more common to use correlation methods, for example the use of laser speckle [107]. Such techniques will not be discussed here, as they are not common to surfaces encountered in MNT applications.
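Since ARS analysis rests on the power spectral density of the surface, the following sketch shows one way a PSD could be estimated from a sampled profile and checked against the profile Rq. The profile, sampling interval and FFT-based estimator are assumptions made for the example, not a prescribed procedure.

```python
# Illustrative sketch (assumed processing): estimating the power spectral
# density (PSD) of a sampled surface profile, from which height and
# spatial-wavelength information can be derived.
import numpy as np

dx = 0.5e-6                      # sampling interval (m), hypothetical
x = np.arange(0, 1e-3, dx)       # 1 mm evaluation length
# Hypothetical profile: two spatial wavelengths plus noise (heights in m)
z = (20e-9 * np.sin(2 * np.pi * x / 50e-6)
     + 5e-9 * np.sin(2 * np.pi * x / 8e-6)
     + 2e-9 * np.random.default_rng(0).standard_normal(x.size))

n = z.size
Z = np.fft.rfft(z - z.mean())
freqs = np.fft.rfftfreq(n, d=dx)          # spatial frequencies (1/m)
psd = (np.abs(Z) ** 2) * dx / n           # one-sided PSD estimate

# Parseval check: integrating the PSD recovers the profile variance (Rq^2)
rq_from_psd = np.sqrt(2 * np.sum(psd) * freqs[1])
print(f"Rq from profile: {np.std(z)*1e9:.2f} nm, from PSD: {rq_from_psd*1e9:.2f} nm")
```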
6.8 Capacitive instruments The use of capacitance [108,109] to measure surface texture has been around for about as long as stylus methods. A conducting plate is held over (or more usually mounted on) a conducting sample to be measured [7]. The capacitance between the plates is a function of the effective plate area, the separation of the plates and the dielectric constant of the medium between them (usually air) [110]. The mean capacitance will change with changes in surface texture as the top plate is scanned over the surface. Surface form can cause serious problems when using capacitance instruments to measure surface texture and, because the capacitance varies as the inverse of the local plate separation, large peaks will be weighted differently to valleys. Note that the configuration described above is usually used to measure proximity (see section 5.3). Capacitance instruments for measuring surface texture can have significant problems and are difficult to calibrate. They are
rarely used nowadays and do not find many applications in the MNT area. However, the scanning capacitance microscope is used extensively in many MNT applications.
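The weighting effect noted above can be illustrated with a simple parallel-plate model; the electrode area, mean gap and texture amplitude below are hypothetical.

```python
# Illustrative sketch (hypothetical numbers): because each local plate element
# contributes capacitance proportional to 1/gap, peaks (small gap) dominate the
# mean capacitance, so a rough surface does not average linearly.
import numpy as np

eps0 = 8.854e-12            # permittivity of free space (F/m)
area = 1e-6                 # effective electrode area (m^2), hypothetical
mean_gap = 10e-6            # mean plate separation (m), hypothetical

rng = np.random.default_rng(1)
texture = 1e-6 * rng.standard_normal(10_000)   # surface heights, 1 um rms
gap = mean_gap - texture                        # local gap under the electrode

c_rough = np.mean(eps0 * area / gap)            # capacitance over the rough gap
c_flat = eps0 * area / mean_gap                 # capacitance if the surface were flat
print(f"rough: {c_rough*1e15:.1f} fF, flat: {c_flat*1e15:.1f} fF")
# The rough-surface value is biased high because peaks are weighted more heavily.
```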
6.9 Pneumatic instruments Pneumatic gauging has been around for many years. Basically an air flow is input to the surface by means of a hollow nozzle and the back pressure generated in the nozzle chamber is measured. This gives rise to a non-linear relationship between surface texture and back pressure, but a linear region exists over a restricted range [111]. The axial resolution can be less than 1 µm and the lateral resolution is limited to the nozzle diameter (usually much greater than 1 mm). Pneumatic gauging can be very fast and is self-purging, which is useful for on-line processes. It is not used extensively for MNT applications.
6.10 Calibration of surface topography measuring instruments Calibration and traceability for surface texture measuring instruments is a subject area that has received a great deal of attention in the past century and is still an active area of research. There are many unsolved problems and it is still impossible to calibrate a given surface texture measuring instrument for all surface types (this may well always be the case). The complex interaction of the probe with the surface being measured and the vast range of possible surface types confound the problem. This is especially true for optical instruments – it is non-trivial, though possible, to calculate the trajectory of a spherical stylus as it traverses a surface, but it is much more difficult to calculate the interaction of an electromagnetic wave with a surface. Also, there is a vast array of surface texture parameters and characterization methods (see chapter 8) with varying degrees of complexity. For example, there has been little attempt to calculate the uncertainty associated with areal feature parameters (see section 8.3.7). The following sections summarize the current state of the art in the area of calibration and traceability.
6.10.1 Traceability of surface topography measurements Traceability of surface topography measuring instruments can be split into two parts. Firstly, there is the traceability of the instruments and, secondly,
the traceability of the analysis algorithms and parameter calculations. Instrument traceability is achieved by calibrating the axes of operation of the instrument, usually using calibration artefacts (referred to as material measures in ISO standards). In some instances, it may also be possible to calibrate an instrument using a range of instrumentation to measure the various characteristics of the instrument, although this is a time-consuming process that is only usually required by NMIs [112]. Calibration artefacts are available in a range of forms for both profile and areal calibration, but a primary instrument must calibrate them. Primary instruments are usually kept at the NMIs and can be stylus (for example [113]) or optical (for example [114]) based. Most primary instrumentation achieves traceability by using interferometers that are traceable to the definition of the metre via a laser source (see section 2.9). Traceability of profile measuring instruments has been available now for many years, although it is still common to consider an instrument calibrated when only a single step height artefact has been measured – a dangerous assumption when using the instrument to measure both height and lateral dimensions, or when measuring surface texture parameters (see section 6.10.2). Traceability for areal instruments is still in its infancy and there are only a small number of NMIs that can offer an areal traceability service (see [113,115]). An important aspect of traceability is the measurement uncertainty of the primary instrument and the instrument being calibrated. Rigorous uncertainty analyses are usually carried out by the NMIs (see for example [116– 118]), but are surprisingly rare in industry for profile measurement using a stylus instrument and almost non-existent for areal measurements, especially when using an optical instrument [119]. Traceability for parameter calculations can be carried out by using calibrated artefacts that have associated parameters, for example the type D artefacts (see section 6.10.2) used for calibrating profile measuring instruments. However, the parameter calculations themselves should be verified using software measurement standards (see section 6.13), and for the calibrated artefact an uncertainty calculation has to be made by those institutions that can calibrate these standards.
6.10.2 Calibration of profile measuring instruments ISO 5436 part 1 [120] describes five types of artefacts that are used to calibrate the characteristics of profile measuring stylus instruments. Optical instruments are not covered in ISO 5436 part 1 but many of the artefacts described can be adapted to calibrate optical instruments in profile mode.
Many groups have developed profile calibration artefacts that are available commercially (see [114] for a review). The use of the five types of profile calibration artefacts is presented in detail in [1] and they are summarized here ([1] also presents the analysis methods for the various artefacts). Some groups have developed dynamic techniques for calibrating the vertical characteristics of stylus instruments by using a vibrating platform to simulate the spatial frequencies on a surface, but such methods are not used extensively in industry (see [112] and [121]). ISO 12179 [122] describes the methodologies to be applied when calibrating a surface texture measuring instrument such as the need for repeat measurements, general instrument set-up and what to include on a calibration certificate. The five types of calibration artefacts described in [120] are: Type A – used to verify the vertical characteristics of an instrument. They come in two sub-groups: type A1 – a wide groove with a flat valley the size of which is dictated by the stylus tip, and type A2 – same as type A1 but with a rounded valley. Figure 6.28 shows how a type A1 artefact is analysed. Type B – used to investigate the geometry of the stylus tip. They come in three sub-groups: type B1 – narrow grooves proportioned to be sensitive to the dimensions of the stylus, type B2 – two grids of equal Ra value (see section 8.2.7.1), one sensitive to the tip dimensions, the other insensitive, and type B3 – has a fine protruding edge where the radius and apex angle must be smaller than the radius and apex angle of the stylus being assessed. Type C – used to verify the vertical and horizontal characteristics of an instrument. They consist of a repetitive groove of similar shape with low harmonic amplitudes. They come in four sub-groups: type C1 – sine wave profile, type C2 – triangular wave profile, type C3 – sine or triangular wave with truncated peaks and valleys and type C4 – arcuate wave profile.
FIGURE 6.28 Analysis of a type A1 calibration artefact.
Type D – used to verify the overall performance of an instrument when measuring surface texture parameters. They have an irregular profile in the direction of the traverse (similar to a ground profile) that repeats in the longitudinal direction after some number (usually five) of the sampling lengths (see section 8.2.3) for which it is designed. The profile shape is constant normal to the measuring direction of the artefact. Type E – used to verify the form measuring capability of the instrument or the straightness of the reference datum slideway (or its equivalent for an optical instrument). They come in two sub-groups: type E1 – a spherical dome-shaped artefact that is characterized by its radius and Pt (see section 8.2.6.5), and type E2 – a precision prism characterized by the angles between the surfaces and Pt on each surface.
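As a rough illustration of the type A1 analysis indicated in Figure 6.28, the following sketch fits a reference line to the upper surface either side of a simulated groove and evaluates the depth over the central portion of the groove only, so that rounded edges do not bias the result. It is a simplified stand-in for, not a reproduction of, the ISO 5436 part 1 procedure, and all dimensions are hypothetical.

```python
# Minimal sketch (not the full ISO 5436 part 1 procedure) of a type A1 groove
# depth evaluation with hypothetical dimensions and noise.
import numpy as np

dx = 1e-6                               # sampling interval (m)
x = np.arange(0, 300e-6, dx)
depth_true = 100e-9                     # nominal groove depth (m)
groove = (x > 100e-6) & (x < 200e-6)    # 100 um wide flat-bottomed groove

rng = np.random.default_rng(2)
z = 1e-9 * rng.standard_normal(x.size)  # 1 nm rms noise on the top surface
z[groove] -= depth_true

# Fit the reference line to the outer (upper) surface on both sides of the groove
outer = ~groove
coeffs = np.polyfit(x[outer], z[outer], 1)
reference = np.polyval(coeffs, x)

# Evaluate the depth over the central third of the groove width only
xg = x[groove]
width = xg[-1] - xg[0]
central = groove & (x > xg[0] + width / 3) & (x < xg[-1] - width / 3)
depth = np.mean(reference[central] - z[central])
print(f"evaluated depth: {depth*1e9:.1f} nm (nominal {depth_true*1e9:.0f} nm)")
```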
6.10.3 Calibration of areal surface texture measuring instruments ISO/FDIS 25178 part 701 [123] describes six types of artefacts that are used to calibrate all the characteristics of areal surface measuring stylus instruments. Optical instruments will be covered in future ISO specification standards, but for now the artefacts described in [123] should be adapted where possible. Researchers [124,125] have developed a range of prototype artefacts for calibrating both contact and non-contact areal surface measuring instruments, and more artefacts are discussed in [114]. The six types of artefacts described in ISO/FDIS 25178 part 701 are: Type ER – measurement standards with two or more triangular grooves, which are used to calibrate the horizontal and vertical amplification coefficients of the instrument. Type ER standards are characterized by the depth, d, the angle between the flanks, α, and the intersection line between the flanks. Type ER artefacts come in three variations:
- Type ER1 – two parallel grooves (see Figure 6.29), where the measurands are the groove spacing, l, and d.
- Type ER2 – rectangular grooves (see Figure 6.30), where the measurands are the spacings between the grooves, l1 and l2, the depth, d, and the angle between the grooves, θ.
- Type ER3 – circular grooves (see Figure 6.31), where the measurands are the diameter of the groove, Df, and d.
FIGURE 6.29 Type ER1 – two parallel groove standard.
FIGURE 6.30 Type ER2 – rectangular groove standard.
FIGURE 6.31 Type ER3 – circular groove standard.
Type ES – sphere/plane measurement standards (see Figure 6.32) are used for calibrating the horizontal and vertical amplification factors, the xy perpendicularity, the response curve of the probing system and the geometry of the stylus. The measurands are the largest distance of a point of the sphere to the plane P, d, the radius of the sphere, Sr, and the diameter of the circle obtained by the intersection between the sphere and the plane P, Di, given by

$$ Di = 2\sqrt{Sr^2 - (Sr - d)^2} \qquad (6.7) $$
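A quick numerical check of equation (6.7), using hypothetical values for the sphere radius and the sphere-to-plane distance, might look as follows.

```python
# Illustrative numerical check of equation (6.7) with hypothetical values for a
# type ES sphere/plane standard.
import math

Sr = 1e-3    # radius of the sphere (m), hypothetical
d = 50e-6    # largest distance from the sphere to the plane P (m), hypothetical

Di = 2 * math.sqrt(Sr**2 - (Sr - d)**2)
print(f"intersection-circle diameter Di = {Di*1e3:.3f} mm")
```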
FIGURE 6.32 Type ES – sphere/plane measurement standard.
Type CS – contour measurement standards (see Figure 6.33) are used for the overall calibration along one horizontal axis of the instrument. The measurands are the radius, R, of the arcs of circle, the distances, l1 ... ln, between the centres of the circles and/or the summits of the triangles with respect to the reference plane, and the heights, h1 ... hn, between the centres of the circles and/or the intersections of the flanks of the triangles. Type CG – cross grating standards, which are characterized by the average pitches in the x and y axes, and the angle between the x and y axes. Type CG standards come in two variations: Type CG1 – X/Y crossed gratings (see Figure 6.34), which are used for calibrating the horizontal amplification coefficients and the xy perpendicularity of the instrument. The measurands are the average pitches in the x and y axes, lx and ly, and the average angle between the x and y axes. Type CG2 – X/Y/Z crossed gratings (see Figure 6.35), which are used for calibrating the horizontal and vertical amplification coefficients and the xy perpendicularity of the instrument. The measurands are the same as for the type CG1 standards but include the average depth of the flat-bottomed pits, d. Type DT – random topography standards that are composed of a series of unit sampling areas with pseudo-random surface topography. Type DT measurement standards are used for the overall calibration of the measuring instrument, as with the type D profile standards. Isotropic and periodic surfaces are preferable and at least two by two unit measuring areas are needed. The unit
measuring area should be functionally closed so that the multiple sampling areas can be cyclic or periodic. The measurands are areal field parameters.
FIGURE 6.33 Type CS – contour standard.
FIGURE 6.34 Type CG1 – X/Y crossed grating.
FIGURE 6.35 Type CG2 – X/Y/Z grating standard.
6.11 Uncertainties in surface topography measurement The calculation of uncertainties for surface texture measuring instruments is a very complex task that is often only carried out at the NMIs (see section 6.10.1). The biggest complication when calculating uncertainties in surface texture measurement is the contribution of the surface itself. Unlike less complicated measurements, such as displacement, the surface being measured can have a significant effect on the measurement, either by directly affecting the measuring probe, or because the surface texture is so variable that repeat measurements in different locations on the surface give rise to a high degree of variability. It is often possible to calculate the instrument
uncertainty, i.e. the uncertainty in measuring either (x, z) for profile or (x, y, z) for areal, but when the effect of the surface is taken into account this uncertainty value may significantly increase, often in an unpredictable manner. Where possible the guidelines in the GUM should be applied (see section 2.8.3) to calculate instrument uncertainties and the effect of the surface should be considered in as pragmatic a manner as possible. Examples of methods to calculate the uncertainty in a profile measurement using a stylus instrument are given in [116] and [117], but the methods are far from mathematically rigorous or applicable in all circumstances. A rigorous uncertainty is calculated in [126], using the GUM approach, for the use of a Gaussian profile filter, but little work has been carried out on the uncertainty associated with areal parameters [127]. When the instrument uncertainty has been calculated it is then often necessary to find the uncertainty in a parameter calculation. Once again this is far from trivial and often the guidelines in the GUM cannot be easily applied. The problem is that some characteristics of a roughness measuring instrument have an obvious influence on a given roughness parameter, while for others the influence is highly unclear. For example, for an Ra value it is obvious that an uncertainty of 1 % in the vertical axis calibration results in a 1 % uncertainty in the Ra value, but it is far less clear what the effect will be if the probe diameter is 5 µm or 10 µm, instead of the standard 2 µm, or what happens if the cut-off filter is not exactly Gaussian. For a spatial parameter such as RSm, the uncertainty in the vertical direction is far less relevant, but the x ordinate calibration is essential. Moreover, such effects are surface-dependent; a very fine surface will be more
sensitive to probe diameter deviations and deviations in the short-wavelength cut-off filter than a surface where most of the undulations are well within the wavelength band. Experiments [112] and simulations [127–129] were carried out taking into account the following effects: z axis calibration, x axis calibration, λc cut-off length, λs cut-off length, probe diameter, probe tip angle, probing force, straightness of the reference and sampling density. All these influencing factors have different effects depending on the parameter and the surface measured. From a number of samples it became obvious that the precise definition of λc and the probe diameter can have larger effects than the z axis calibration, and of course for very smooth surfaces the reference guidance is a major factor. Some parameters such as RSm are very sensitive to many measurement conditions and can easily have a 20 % uncertainty for rough surfaces, which is hidden when an instrument is only calibrated using sinusoidal artefacts (type C1, see section 6.10.2). So the conclusion of this section is that it is not straightforward to calculate a rigorous uncertainty value for an instrument for all surfaces and for all parameters. Only a pragmatic approach can be applied for a given measurement scenario. At the very least, repeated measurements should always be carried out and the standard deviation or the standard deviation of the mean quoted.
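A pragmatic way to explore such sensitivities is a simple Monte Carlo evaluation in the spirit of section 2.8.3.3. The sketch below propagates an assumed 1 % standard uncertainty in the z-axis amplification through an Ra calculation for a hypothetical sinusoidal profile; as expected from the discussion above, the relative uncertainty in Ra mirrors that of the amplification.

```python
# Minimal Monte Carlo sketch: propagation of an assumed 1 % z-axis
# amplification uncertainty to Ra for a hypothetical profile.
import numpy as np

rng = np.random.default_rng(3)
x = np.arange(0, 4e-3, 0.5e-6)                 # 4 mm profile, 0.5 um sampling
z = 200e-9 * np.sin(2 * np.pi * x / 100e-6)    # hypothetical sinusoidal profile

def ra(profile):
    """Arithmetic mean deviation of the mean-line corrected profile."""
    return np.mean(np.abs(profile - profile.mean()))

trials = 5_000
amplification = rng.normal(1.0, 0.01, trials)  # 1 % standard uncertainty
ra_values = np.array([ra(a * z) for a in amplification])

print(f"Ra = {ra_values.mean()*1e9:.1f} nm, "
      f"u(Ra) = {ra_values.std()*1e9:.2f} nm "
      f"({100*ra_values.std()/ra_values.mean():.1f} %)")
```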
6.12 Comparisons of surface topography measuring instruments Many comparisons of surface topography measuring instruments have been conducted over the years. The spreads in the results can be quite alarming, especially when comparing contact and non-contact instruments. The authors of such comparisons are often surprised by the results but, upon closer inspection, most of the results can be explained. Often it is stated that the instruments do not compare because they have not been adequately calibrated. Whilst this may be a source of discrepancy, there are usually better reasons for instruments with different operating principles not comparing well. For example, a stylus acts as if a ball is rolled across the surface whilst an optical instrument relies on the reflection of an electromagnetic wave. Is it really so difficult to appreciate that such instruments can produce different results? Also, different instruments will sample different spatial wavelength bandwidths of the surface being measured and will have different physical limitations.
In an early example [130] the measurement of groove depths was compared, where the grooves could be measured by optical, mechanical and even AFM instruments (see chapter 7). From this comparison it became evident that grooves of some 40 nm could be measured with uncertainties at the nanometre level but, for a 3 µm depth, the results scattered by far more than 1 %, even between NMIs. It is expected that this situation has since improved (see later). In another example, the results of measurements of a nickel sinusoidal sample, with a period of 8 µm and an Ra of 152 nm, showed very different results for a number of different instruments (see Figure 6.36) [131]. The participants in this comparison were all experienced in surface texture measurement. In this example, NS IV refers to the traceable instrument at NPL (see section 6.10.1), Stylus 1 and Stylus 2 are different stylus instruments on the same site, Inter 1 and Inter 2 are the same model of CSI instrument on different sites and Conf refers to a confocal instrument. It was later found that Stylus 2 had incorrectly applied a filter. A further triangulation instrument was also used in the comparison and the result was an Ra value of 2955 nm – far too large to plot on this figure!
FIGURE 6.36 Results of a comparison of different instruments used to measure a sinusoidal sample.
Many of the discrepancies above were explained after the comparison but the question remains: would a user in an industrial situation have the luxury of the hindsight that is afforded in such a comparison? This section is not intended to scare the reader into complete distrust of surface topography instruments – its purpose is to make the reader vigilant when measuring and characterising surface topography. Instruments should be properly calibrated and performance verified, results should be scrutinised and, where possible, different instruments should be used to measure the same surface. Once a stable measurement procedure is set up in a given situation, appropriate procedures should be in place to ensure that the instrument is operated within its limits and results are properly interpreted. Due care should especially be given to the types of filtering that are applied, both physical and digital.
On a happier note, a recent comparison carried out by European NMIs [132] of profile measurements using types A, C, D and F1 calibration artefacts (see sections 6.10.2 and 6.13) gave results that were in relatively close agreement. This shows that it is possible for different instruments to produce comparable results. Note that many of the comparisons that are reported in the literature are for profile measurements. To date there have been relatively few comparisons of areal measurements (but see [133]).
6.13 Software measurement standards As can be seen from chapter 8, surface texture characterization involves a large array of filtering methods and parameter calculations. The software packages that are supplied with surface texture measuring instruments, and some stand-alone software packages, usually offer a bewildering range of options for characterization. Where possible, these software packages should be verified by comparing them to reference software. ISO 5436 part 2 [134] presents two types of software measurement standard for profile measurement and ISO/FDIS 25178 part 7 [135] presents the two areal counterparts. Only the profile software measurement standards will be discussed here but the general principles also apply in the areal case. The two types of software measurement standards [134] are: Type F1 – reference data files. These are digital representations of a profile that are used as input to the software under test. The results from the software under test are compared with the certified results provided with the type F1 software measurement standard. Type F1 software measurement standards are often referred to as softgauges. Type F2 – reference software. Reference software consists of traceable computer software against which software in a measuring instrument (or stand-alone package) can be compared. Type F2 software measurement standards are used to test software by inputting a common data set into both the software under test and the reference software and comparing the results. Of course the type F1 and F2 software measurement standards are related. Type F1 standards can be generated as mathematically known functions such as sinusoids, etc., for which parameters can be calculated analytically and independently. These can be input to candidate software, and if this software passes the acceptance test for many different type F1 software measurement standards it can be considered as type F2 software.
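The idea behind a type F1 softgauge can be sketched as follows: a profile with an analytically known parameter value is generated, fed to the parameter calculation under test, and the deviation is reported. The sinusoidal profile, the sampling and the stand-in Ra routine below are assumptions for illustration and do not follow the ISO 5436 part 2 file format.

```python
# Minimal sketch of the type F1 idea (assumed details): compare a parameter
# calculation against an analytically known reference value.
import numpy as np

amplitude = 1e-6                      # 1 um sinusoid amplitude
x = np.arange(0, 4e-3, 0.1e-6)
z = amplitude * np.sin(2 * np.pi * x / 100e-6)

# Analytical reference value: Ra of a sinusoid of amplitude A is 2A/pi
ra_reference = 2 * amplitude / np.pi

def ra_under_test(profile):
    """Stand-in for the parameter calculation of the software being verified."""
    return np.mean(np.abs(profile - profile.mean()))

deviation = ra_under_test(z) - ra_reference
print(f"reference Ra = {ra_reference*1e9:.2f} nm, deviation = {deviation*1e9:.3f} nm")
```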
Software measurement standards are available from some NMI web sites; see for example [136–138]. The user can either download type F1 standards or upload data files for type F2 analyses.
6.14 References [1] Leach R K 2001 The measurement of surface texture using stylus instruments NPL Good practice guide No. 37 (National Physical Laboratory) [2] Leach R K, Blunt L A, Brown L, Blunt R, Conroy M, Mauger D 2008 Guide to the measurement of smooth surface topography using coherence scanning interferometry NPL Good practice guide No. 108 (National Physical Laboratory) [3] Griffiths B 2001 Manufacturing surface technology (Penton Press: London) [4] Gilmozzi R, Spyromilio J 2007 The European Extremely Large Telescope (E-ELT) ESO Messenger 127 11–19 [5] Shore P 2008 Ultra precision surfaces Proc. ASPE, Portland, Oregon, USA, Oct. 75–78 [6] Malacara D 2007 Optical shop testing (Wiley Series in Pure and Applied Optics) 3rd edition [7] Whitehouse D J 2002 Handbook of surface and nanometrology (Taylor & Francis) [8] Mainsah E, Greenwood J A, Chetwynd D G Metrology and properties of engineering surfaces (Kluwer Academic Publishers: Boston) [9] Smith G T Industrial metrology: surfaces and roundness (Springer-Verlag: London) [10] Blunt L A, Jiang X 2003 Advanced techniques for assessment surface topography (Butterworth-Heinemann: London) [11] Church E L 1979 The measurement of surface texture and topography using dynamic light scattering Wear 57 93–105 [12] Stedman M 1987 Mapping the performance of surface-measuring instruments Proc. SPIE 83 138–142 [13] Stedman M 1987 Basis for comparing the performance of surfacemeasuring machines Precision Engineering 9 149–152 [14] Jones C W, Leach R K 2008 Adding a dynamic aspect to amplitudewavelength space Meas. Sci. Technol. 19 055105 [15] Shaw H 1936 Recent developments in the measurement and control of surface roughness J. Inst. Prod. Engnrs. 15 369–391 [16] Harrison R E W 1931 A survey of surface quality standards and tolerance costs based on 1929–1930 precision-grinding practice Trans. ASME paper no. MSP-53-12 [17] Hume K J 1980 A history of engineering metrology (Mechanical Engineering Publications Ltd)
[18] Reason R E, Hopkins M R, Garrod R I 1944 Report on the measurement of surface finish by stylus methods (Taylor, Taylor & Hobson: Leicester) ¨ ber Gla ¨tte und Ebenheit als physikalisches und [19] Schmaltz G 1929 U physiologisches Problem Zeitschrift des Vereines deutcher Ingenieure 73 1461 [20] Abbott E J, Firestone F A 1933 Specifying surface quality Mechanical Engineering 55 569–773 [21] Reason R E 1973 Stylus methods of surface measurement Bull. Inst. Phys. Oct. 587–589 [22] ISO 4287: 2000 Geometrical product specification (GPS) - Surface texture: Profile method - Terms, definitions and surface texture parameters (International Organization of Standardization) [23] Evans C, Bryan J 1999 ‘‘Structured,’’ ‘‘textured,’’ or ‘‘engineered’’ surfaces Ann. CIRP 48 451–456 [24] Bruzzone A A G, Costa H L, Lonardo P M, Lucca D A 2008 Advances in engineering surfaces for functional performance Ann. CIRP 57 750–769 [25] ISO/FDIS 25178 part 6: Geometrical product specification (GPS) - Surface texture: Areal - Classification of methods for measuring surface texture (International Organization of Standardization) [26] ISO 3274: 1996 Geometrical product specification (GPS) - Surface texture: Profile method - Nominal characteristics of contact (stylus) instruments (International Organization of Standardization) [27] ISO/FDIS 25178 part 601: Geometrical product specification (GPS) Surface texture: Areal - Nominal characteristics of contact (stylus) instruments (International Organization of Standardization) [28] McCool J I 1984 Assessing the effect of stylus tip radius and flight on surface topography measurements Trans. ASME 106 202–209 [29] DeVries W R, Li C -J 1985 Algorithms to deconvolve stylus geometry from surface profile measurements J. Eng. Ind. 107 167–174 [30] O’Donnell K A 1993 Effects of finite stylus width in surface contact profilometry Appl. Opt. 32 4922–4928 [31] Howard L P, Smith S T 1994 A metrological constant force stylus profiler Rev. Sci. Instrum 65 892–902 [32] Chetwynd D G, Liu X, Smith S T 1996 A controlled-force stylus displacement probe Precision Engineering 19 105–111 [33] Leach R K, Flack D R, Hughes E B, Jones C W 2008 Development of a new traceable areal surface texture measuring instrument Wear 266 552–554 [34] Garratt J, Mills M 1996 Measurement of the roughness of supersmooth surfaces using a stylus instrument Nanotechnology 7 13–20 [35] Leach R K 2000 Traceable measurement of surface texture at the National Physical Laboratory using NanoSurf IV Meas. Sci. Technol. 11 1162–1173
[36] Whitehouse D J 1999 Surface measurement fidelity Proc. LAMBDAMAP 267–276 [37] Hidaka K, Saito A, Koga S, Schellekens P H J 2008 Study of a microroughness probe with ultrasonic sensor Ann. CIRP 57 489–492 [38] Coupland J M, Lobera J 2008 Holography, tomography and 3D microscopy as linear filtering operations Meas. Sci. Technol. 19 074012 [39] Creath K 1989 Calibration of numerical aperture effects in interferometric microscope objectives Appl. Opt. 15 3333–3338 [40] Greve M, Kru ¨ ger-Sehm R 2004 Direct determination of the numerical aperture correction factor of interference microscopes Proc. XI Int. Colloq. Surfaces, Chemnitz, Germany, Feb. 156–163 [41] Hecht E 2003 Optics (Pearson Education) 4th edition [42] de Groot P, Colonna de Lega X 2006 Interpreting interferometric height measurements using the instrument transfer function Proc. FRINGE 30–37. 2005 [43] ISO/FDIS 25178 part 603: Geometrical product specification (GPS) Surface texture: Areal - Nominal characteristics of non-contact (phase shifting interferometric) instruments (International Organization of Standardization) [44] Kru ¨ger-Sehm R, Fru ¨hauf J, Dziomba T 2006 Determination of the short wavelength cutoff for interferential and confocal microscopes Wear 264 439–443 [45] Harasaki A, Schmit J, Wyant J C 2001 Offset of coherent envelope position due to phase change on reflection Appl. Opt. 40 2102–2106 [46] Park M-C, Kim S-W 2001 Compensation of phase change on reflection in white-light interferometry for step height measurement Opt. Lett. 26 420–422 [47] Goa F, Leach R K, Petzing J, Coupland J M 2008 Surface measurement errors when using commercial scanning white light interferometers Meas. Sci. Technol. 18 015303 [48] Harasaki A, Wyant J C 2000 Fringe modulation skewing effect in the whitelight vertical scanning interferometry Appl. Opt. 39 2101–2106 [40] Marinello F, Bariani P, Pasquini A, De Chiffre L, Bossard M, Picotto G B 2007 Increase of maximum detectable slope with optical profilers, through controlled tilting and image processing Meas. Sci. Technol. 18 384–389 [50] Proertner A, Schwider J 2001 Dispersion error in white-light Linnik interferometers and its implications for evaluation procedures Appl. Opt. 40 6223–6228 [51] Lehmann P 2003 Optical versus tactile geometry measurement - alternatives or counterparts Proc. SPIE 5144 183–196 [52] Hillmann W 1990 Surface profiles obtained by means of optical methods are they true representations of the real surface? Ann. CIRP 39 581–583 [53] Rhee H, Vorburger T, Lee J, Fu J 2005 Discrepancies between roughness measurements obtained with phase-shifting and white-light interferometry Appl. Opt. 44 5919–5927
[54] Brand U, Flu ¨ gge J 1998 Measurement capabilities of optical 3D-sensors for MST applications Microelectronic Engineering 41/42 623–626 [55] McBride J W, Zhao Z, Boltryk P J 2008 A comparison of optical sensing methods for the high precision 3D surface profile measurement of grooved surfaces Proc. ASPE, Portland, Oregon, USA, Oct. 124–127 [56] Gao F, Coupland J, Petzing J 2006 V-groove measurements using white light interferometry Photon06, Manchester, Sept. [57] Coupland J M, Lobera J 2008 Measurement of steep surfaces using white light interferometry Strain doi: 10.1111/j.1475-1305.2008.00595.x [58] Bray M 2004 Stitching interferometry: recent results and absolute calibration Proc. SPIE 5252 305–313 [59] Zhang R 2006 Theoretical and experimental study on the precision of the stitching system Proc. SPIE 6150 61502Y [60] Zeng L, Matsumoto H, Kawachi K 1997 Two-directional scanning method for reducing the shadow effects in laser triangulation Meas. Sci. Technol. 8 262–266 [61] Wilson T 1984 Theory and practice of scanning optical microscopy (Academic Press) [62] Diaspro A 2002 Confocal and two-photon microscopy: foundations, applications and advances (Wiley Blackwell) [63] Wilson T 1990 Confocal microscopy (Academic Press) [64] Jordan H, Wegner M, Tiziani H 1998 Highly accurate non-contact characterization of engineering surfaces using confocal microscopy Meas. Sci. Technol. 9 1142–1151 ´n H, Hadravsk [65] Petra y M, Egger M D, Galambos R 1968 Tandem-scanning reflected-light microscope J. Opt. Soc. Am. 58 661–664 [66] Minsky M 1961 Microscopy apparatus (US patent 3.013.467) [67] ISO/FDIS 25178 part 602: 2008 Geometrical product specification (GPS) Surface texture: Areal - Nominal characteristics of non-contact (confocal chromatic probe) instruments (International Organization of Standardization) [68] Tiziani H J, Uhde H 1994 Three-dimensional image sensing by chromatic confocal microscopy Appl. Opt. 33 1838–1841 [69] Danzl R, Helmli F, Rubert P, Prantl M 2008 Optical roughness measurements on specially designed roughness standards Proc. SPIE 7102 71020M [70] Miura K, Okada M, Tamaki J 2000 Three-dimensional measurement of wheel surface topography with a laser beam probe Advances in Abrasive Technology III 303–308 [71] Fukatsu H, Yanagi K 2005 Development of an optical stylus displacement sensor for surface profiling instruments Microsyst. Technol. 11 582–589 [72] Creath K 1988 Phase-measuring interferometry techniques in Progress in optics (Elsevier Science Publishers: Amsterdam) [73] Kumar U P, Bhaduri B, Kothiyal M P, Mohan N K 2009 Two-wavelength micro-interferometry for 3-D surface profiling Opt. Lasers Eng. 47 223–229
[74] Stenner M D, Neifeld M A 2006 Motion compensation and noise tolerance in phase-shifting digital in-line holography Opt. Express 14 4286–4299 [75] Yamaguchi I, Ida T, Yokota M 2008 Measurement of surface shape and position by phase-shifting digital holography Strain 44 349–356 [76] Creath K, Wyant J C 1990 Absolute measurement of surface roughness Appl. Opt. 29 3823–3827 [77] Lim J, Rah S 2006 Absolute measurement of the reference surface profile of a phase shifting interferometer Rev. Sci. Instrum. 77 086107 [78] Cuche E, Marquet P, Depeursinge C 1999 Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms Appl. Opt. 38 6994–7001 [79] Cuche E, Marquet P, Depeursinge C 2000 Spatial filtering for zero-order and twin-image elimination in digital off-axis holography Appl. Opt. 39 4070–4075 [80] Ferraro P, Grilli S, Alfieri D, Nicola S D, Finizio A, Pierattini G, Javidi B, Coppola G, Striano V 2005 Extended focused image in microscopy by digital holography Opt. Express 13 6738–6749 [81] Colomb T, Montfort F, Ku ¨ hn J, Aspert N, Cuche E, Marian A, Charrie`re F, Bourquin S, Marquet P, Depeursinge C 2006 Numerical parametric lens for shifting, magnification and complete aberration compensation in digital holographic microscopy J. Opt. Soc. Am. A 23 3177–3190 [82] Ku ¨ hn J, Charrie`re F, Colomb T, Cuche E, Montfort F, Emery Y, Marquet P, Depeursinge C Axial sub-nanometre accuracy in digital holographic microscopy Meas. Sci. Technol. 19 074007 [83] Ku ¨ hn J, Colomb T, Montfort F, Charrie`re F, Emery Y, Cuche E, Marquet P, Depeursinge C 2007 Real-time dual-wavelength digital holographic microscopy with a single hologram acquisition Opt. Express 15 7231– 7242 [84] Wada A, Kato M, Ishii Y 2008 Multiple-wavelength digital holographic interferometry using tuneable laser diodes Appl. Opt. 47 2053–2060 [85] ISO/FDIS 25178 part 604: Geometrical product specification (GPS) - Surface texture: Areal - Nominal characteristics of non-contact (coherence scanning interferometry) instruments (International Organization of Standardization) [86] Petzing J, Coupland J M, Leach R K 2009 Guide to the measurement of rough surface topography using coherence scanning interferometry NPL Good practice guide to be published (National Physical Laboratory) [87] Harasaki A, Schmit J, Wyant J C 2000 Improved vertical-scanning interferometry Appl. Opt. 39 2107–2115 [88] Ghim Y-S, You J, Kim S-W 2007 Simultaneous measurement of thin film thickness and refractive index by dispersive white-light interferometer Proc. SPIE 6674 667402 [89] You J, Kim S-W 2008 Optical inspection of complex patterns for microelectronic products Ann. CIRP 57 505–508
[90] de Groot P 2006 Stroboscopic white-light interference microscopy Appl. Opt. 45 5840–5844 ¨benstedt A 2006 Laser-scanning confocal vibrometer [91] Rembe C, Dra microscope: theory and experiments Rev. Sci. Instrum. 77 083702 [92] Mansfield D 2006 The distorted helix: thin film extraction from scanning white light interferometry Proc. SPIE 6186 210–220 [93] Kim S-W, Kim G-W 1999 Thickness-profile measurement of transparent thin-film layers using white-light scanning interferometry Appl. Opt. 38 5968–5974 [94] Mansfield D 2008 Extraction of film interface surfaces from scanning white light interferometry Proc. SPIE 7101 71010U [95] Olgilvy J 1991 Theory of wave scattering from random rough surfaces (Institute of Physics Publishing) [96] Church E L, Jenkinson H J, Zavada J M 1979 Relationship between surface scattering and microtopographic features Opt. Eng. 18 125–136 [97] Vorburger T V, Marx E, Lettieri T R 1993 Regimes of surface roughness measurable with light scattering Appl. Opt. 32 3401–3408 [98] Bennett J M, Mattsson L 1999 Introduction to surface roughness and scattering (Optical Society of America) 2nd edition [99] Stover J C 1995 Optical scattering: measurement and analysis (Society of Photo-Optical Instrumentation Engineering) [100] Davies H 1954 Reflection of electromagnetic waves from a rough surface Proc. Inst. Elec. Engrs. 101 209–214 [101] ASTM F1084–87: 1987 Standard test method for measuring the effect of surface roughness of optical components by total integrated scattering (American Society for Testing and Materials) [102] Leach R K 1998 Measurement of a correction for the phase change on reflection due to surface roughness Proc. SPIE 3477 138–151 [103] Clarke F J J, Garforth F A, Parry D J 1983 Goniophotometric and polarisation properties of white reflection standard materials Lighting Res. Technol. 15 133–149 [104] Elson J M, Rahn J P, Bennett J M 1983 Relationship of the total integrated scattering from multilayer-coated optics to angle of incidence, polarisation, correlation length, and roughness cross-correlation properties Appl. Opt. 22 3207–3219 [105] Vorburger T V, Teague E C 1981 Optical techniques for on-line measurement of surface texture Precision Engineering 3 61–83 [106] Valliant J G, Folley M 2000 Instrument for on-line monitoring of surface roughness of machined surfaces Opt. Eng. 39 3247–3254 [107] Dhanansekar B, Mohan N K, Bhaduri B, Ramamoothy B 2008 Evaluation of surface roughness based on monolithic speckle correlation using image processing Precision Engineering 32 196–206
[108] Brecker J N, Fronson R E, Shum L Y 1977 A capacitance-based surface texture measuring system Ann. CIRP 25 375–377 [109] Lieberman A G, Vorburger T V, Giauque C H W, Risko D G, Resnick R, Rose J 1988 Capacitance versus stylus measurements of surface roughness Surface Topography 1 315–330 [110] Bruce N C, Garcı´a-Valenzuela A 2005 Capacitance measurement of Gaussian random rough surface surfaces with plane and corrugated electrodes Meas. Sci. Technol. 16 669–676 [111] Wooley R W 1992 Pneumatic method for making fast, high-resolution noncontact measurement of surface topography Proc. SPIE 1573 [112] Haitjema H 1998 Uncertainty analysis of roughness standard calibration using stylus instruments Precision Engineering 22 110–119 [113] Leach R K 2000 Traceable measurement of surface texture at the National Physical Laboratory using NanoSurf IV Meas. Sci. Technol. 11 1162–1172 [114] Wilkening G, Koenders L 2005 Nanoscale calibration standards and methods (Wiley-VCH) [115] Thompsen-Schmidt P, Kru ¨ ger-Sehm R, Wolff H 2004 Development of a new stylus contacting system for roughness measurement XI Int. Colloq. Surfaces, Chemnitz, Germany, Feb. 79–86 [116] Leach R K 1999 Calibration, traceability and uncertainty issues in surface texture metrology NPL Report CLM7 [117] Kru ¨ ger-Sehm R, Krystek M 2000 Uncertainty analysis of roughness measurement Proc. X Int. Colloq. Surfaces, Chemnitz, Germany, Jan./Feb. (in additional papers) [118] Giusca C, Forbes A B, Leach R K 2009 A virtual machine-based uncertainty evaluation for a traceable areal surface texture measuring instrument Rev. Sci. Instrum. submitted [119] Leach R K 2004 Some issues of traceability in the field of surface texture measurement Wear 257 1246–1249 [120] ISO 5436 part 1: 2000 Geometrical product specification (GPS) - Surface texture: Profile method - Measurement standards - Part 1 Material measures (International Organization of Standardization) [121] Leach R K, Cross N 2002 Low-cost traceable dynamic calibration of surface texture measuring instruments Meas. Sci. Technol. 14 N1–N4 [122] ISO 12179: 2000 Geometrical product specification (GPS) - Surface texture: profile method - Calibration of contact (stylus) instruments (International Organization for Standardization) [123] ISO/FDIS 25178 part 701: 2007 Geometrical product specification (GPS) Surface texture: Areal - Calibration and measurement standards for contact (stylus) instruments (International Organization of Standardization) [124] Haycocks J, Jackson K, Leach R K, Garratt J, MacDonnell I, Rubert P, Lamb J, Wheeler S 2004 Tackling the challenge of traceable surface texture measurement in three dimensions Proc. 5th Int. euspen Conf., Turin, Italy, May 253–256
[125] Leach R K, Chetwynd D G, Blunt L A, Haycocks J, Harris P M, Jackson K, Oldfield S, Reilly S 2006 Recent advances in traceable nanoscale dimension and force metrology in the UK Meas. Sci. Technol. 17 467–476 [126] Krystek M 2000 Measurement uncertainty propagation in the case of filtering in roughness measurement Meas. Sci. Technol. 12 63–67 [127] Morel M A A, Haitjema H 2001 Calculation of 3D roughness measurement uncertainty with virtual surfaces Proc. IMEKO, Cairo, Egypt 1–5 [128] Haitjema H, Morel M 2000 Traceable roughness measurements of products Proc. 1st euspen Conf. on Fabrication and Metrology in Nanotechnology, Denmark 354–357 [129] Haitjema H, Morel M 2000 The concept of a virtual roughness tester Proc. X Int. Colloq. Surfaces, Chemnitz, Germany, Jan./Feb. 239–244 [130] Haitjema H 1997 International comparison of depth-setting standards Metrologia 34 161–167 [131] Leach R K, Hart A 2002 A comparison of stylus and optical methods for measuring 2D surface texture NPL Report CBTLM 15 [132] Koenders L, Andreasen J L, De Chiffre L, Jung L, Kru ¨ ger-Sehm R 2004 EUROMET L.S11 Comparison on surface texture Metrologia 41 04001 [133] Vorburger T V, Rhee H-G, Renegar T B, Song J-F, Zheng A 2008 Comparison of optical and stylus methods for measurement of surface texture Int. J. Adv. Manuf. Technol. 33 110–118 [134] ISO 5436 part 2: 2000 Geometrical product specification (GPS) - Surface texture: Profile method - Measurement standards - Part 2 Software measurement standards (International Organization of Standardization) [135] ISO/FDIS 25178 part 7: Geometrical product specification (GPS) - Surface texture: Areal - Software measurement standards (International Organization of Standardization) [136] Blunt L, Jiang X, Leach R K, Harris P M, Scott P 2008 The development of user-friendly software measurement standards for surface topography software assessment Wear 264 389–393 [137] Bui S, Vorburger T V 2006 Surface metrology algorithm testing system Precision Engineering 31 218–225 [138] Jung L, Spranger B, Kru ¨ ger-Sehm R, Krystek M 2004 Reference software for roughness analysis - features and results Proc. XI Int. Colloq. Surfaces, Chemnitz, Germany, Feb. 164–170
CHAPTER 7
Scanning probe and particle beam microscopy Dr. Alexandre Cuenat National Physical Laboratory
As technology moves deeper into the realm of the microscopic by manufacturing smaller components, it becomes essential to measure at a suitable scale and resolution. This scale is in the nanometre range and the resolution expected is of the order of atomic distances or even smaller. In the late seventeenth century, the development of optical microscopes enabled scientists to observe structure on the scale of micrometres. Until the twentieth century, the optical microscope was the fundamental instrument that enabled progress in materials and biological sciences. However, the observation of single atoms requires far more resolution than visible light can provide. At the beginning of the twentieth century, the electron microscope was developed based on the newly discovered wave-like properties of the electron. Indeed, electrons with sufficient energy will have a wavelength comparable to the diameter of an atom or smaller. Unfortunately, electron optics limit the resolution that an electron microscope can reach and true atom-by-atom resolution is far from routine. The study of surface atoms is even more challenging and requires a different type of probe. Indeed, high-energy electrons will penetrate into the bulk material without providing surface information, and low-energy electrons will be scattered by the surface. For many years, scientists have used diffraction phenomena to study the atomic ordering at surfaces, but the lateral resolution is still of the order of a micrometre. The development of the scanning tunnelling microscope (STM) by Gerd Binnig and Heinrich Rohrer in 1982 [1] provided a major tool for the development of a new field of human endeavour – nanotechnology. The STM enabled the next step in imaging and probing technology. The STM may
CONTENTS
Scanning probe microscopy
Scanning tunnelling microscopy
Atomic force microscopy
Scanning probe microscopy of nanoparticles
Electron microscopy
Other particle beam microscopy techniques
References
not have been the first scanning probe system, but the atomic resolution it demonstrated captured the imagination of the scientific community. Since then, a series of near-field methods have been developed, capable of probing or imaging many physical or chemical properties with nanometre-scale resolution. All these new microscopes are based on the same principle: a very sharp tip, with a radius typically of a few nanometres, is scanned in close proximity to a surface using a piezoelectric scanner. The very localised detection of forces in the near-field is in marked contrast with previous instruments, which detected forces over much larger areas or used far-field wave phenomena. This chapter reviews the principal methods that have been developed to measure properties at the atomic to nanometre scale and the related metrology challenges, with a particular focus on the atomic force microscope (AFM). The reason for this choice is that the AFM is by far the most popular instrument to date and is the most likely candidate to be fully traceable – including force – in the near future. Electron microscopes, scanning and transmission, are also included in this chapter as they are capable of giving information in the same range and are also very popular. The chapter concludes with a few words on the focused ion beam microscope and the newly developed helium beam microscope.
7.1 Scanning probe microscopy Scanning probe microscopes (SPMs) are increasingly used as quantitative measuring instruments not only for dimensions, but also for physical and chemical properties at the nanoscale (see [2,3] for thorough introductions to SPM technology). Furthermore, SPM has recently entered the production and quality-control environment of semiconductor manufacturers. However, for these relatively new instruments, standardized calibration procedures still need to be developed. From an instrumentation perspective, the SPM is a serial measurement device, which uses a nanoscale probe to trace the surface of the sample based on local physical interactions (in a similar manner to a stylus instrument – see section 6.6.1). While the probe scans the sample with a predefined pattern, the signal of the interaction is recorded and is usually used to control the distance between the probe and the sample surface. This feedback mechanism and the scanning of a nanoscale probe form the basis of all scanning probe instruments. Figure 7.1 shows an example schema of an AFM. A sample is positioned on a piezoelectric scanner, which moves the sample in three dimensions relative to a transduction mechanism (in this
FIGURE 7.1 Schematic image of a typical scanning probe system, in this case an AFM.
case a flexible mechanical cantilever) with a very sharp tip in very close proximity to the sample. Depending on the physical interactions used to probe the surface, the system can have different names, for example:

- scanning tunnelling microscopes (STMs) are based on the quantum-mechanical tunnelling effect (see section 7.2);
- atomic force microscopes (AFMs) use interatomic or intermolecular forces (see section 7.3);
- scanning near-field optical microscopes (SNOMs) probe the surface using near-field optics (sometimes referred to as electromagnetic tunnelling) (see [2,4]).
Many more examples of SPMs have been developed that use almost every known physical force, including: electrostatic, magnetic, capacitive, chemical and thermal. For each instrument, various modes of operation are possible. The most common modes used in engineering nanometrology are:

Contact mode: the probe is in permanent contact with the surface, i.e. usually a repulsive force between the tip and the sample is used as feedback to control the distance between the tip and the sample.
Non-contact mode: the probe oscillates slightly above the surface and forces from the sample surface modify the oscillation parameters. One of the oscillation parameters (amplitude, frequency or phase shift) is kept constant with the feedback loop.

Intermittent mode: non-contact mode in which the probe oscillates with a high amplitude and touches the sample for a short time (often referred to as tapping mode).
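The feedback principle common to all of these modes can be illustrated with a short sketch. The fragment below implements a simple proportional–integral (PI) loop of the kind used to keep the interaction signal at its setpoint while the z correction is recorded as topography; the names read_signal and move_z, and the gain values, are illustrative placeholders and do not correspond to any particular instrument's control software.

def scan_line(n_points, setpoint, read_signal, move_z, kp=0.01, ki=0.5, dt=1e-4):
    """One scan line of a generic SPM: a PI controller adjusts the z piezo so
    that the measured interaction signal (deflection, tunnelling current,
    oscillation amplitude, ...) stays at the setpoint; the recorded z values
    form the topography profile (constant-interaction imaging)."""
    z, integral, profile = 0.0, 0.0, []
    for _ in range(n_points):
        error = setpoint - read_signal()     # deviation from the setpoint
        integral += error * dt
        z += kp * error + ki * integral      # PI correction applied to the z piezo
        move_z(z)                            # hypothetical piezo drive call
        profile.append(z)                    # z correction recorded as height
    return profile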
7.2 Scanning tunnelling microscopy

As its name suggests, the scanning tunnelling microscope takes advantage of the quantum mechanical phenomenon of tunnelling. When an electron approaches a potential energy barrier higher than the electron's energy, the electron is not completely reflected as one would expect classically, but rather the electron's wavefunction exponentially decays as it travels through the barrier. With a sufficiently thin barrier, there is a small but non-negligible probability that the electron can be found on the other side of the barrier. In practice, the STM is realised by scanning an ultra-sharp conductive tip close to a conductive sample. The electron probability densities of the tip and the substrate can overlap if the distance between the two is small enough; in which case the application of a potential difference between the tip and the sample will result in a current due to the electrons tunnelling through the insulating gap formed by the vacuum layer between the tip and the substrate. This tunnelling current is exponentially sensitive to the distance between the tip and the sample. With a barrier height (work function) of a few electron volts, a change in distance by an amount equal to the diameter of a single atom (approximately 0.2 nm) causes the tunnelling current to change by up to three orders of magnitude [1]. The key technology that has enabled the STM and subsequent scanning probe systems to be developed is the ability to move the tip relative to the sample in a controlled manner over such small distances. This is possible using piezoelectric actuators, which both scan the tip over the sample and control the tip-sample separation. Depending on the mode of operation, the feedback will control the piezoelectric actuator in the z direction in order to maintain a constant tunnelling current, thereby keeping the tip at a constant height relative to the surface. With this constant current method, a topographical map of a surface is obtained. However, this procedure will yield purely topographical information only when used on an electronically homogeneous surface; when applied to an electronically inhomogeneous surface, the tunnelling current will depend on both the surface topography and the local electronic structure.
For example, if the effective local tunnelling barrier height increases or decreases at a scan site, then the feedback system must decrease or increase the tip-sample separation in order to maintain a constant tunnelling current. The final image obtained will thus contain electronic structure information convoluted with the topographical information. A solution to this problem is the so-called barrier-height imaging mode [5] used to measure varying work function (tunnelling barrier height) over inhomogeneous samples. In this mode, the tip is scanned over each measurement site and the distance between the tip and the sample is varied while recording dI/dz, the rate of change of the tunnelling current, I, with respect to the tip-sample distance, z. From this information, the work function at each location can be determined and used to correct the constant-current measurement. One of the main limitations of STM is that it can be used only with conductive samples.
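The exponential sensitivity quoted above can be made concrete with the simple one-dimensional barrier model I ∝ exp(−2κd), with κ = √(2mφ)/ħ. The short calculation below, with an assumed work function of 4.5 eV, is purely illustrative; it shows a change in current of roughly an order of magnitude per ångström of gap change.

import numpy as np

hbar = 1.054571817e-34      # J s
m_e = 9.1093837015e-31      # electron mass, kg
eV = 1.602176634e-19        # J

phi = 4.5 * eV                               # assumed barrier height (work function)
kappa = np.sqrt(2 * m_e * phi) / hbar        # inverse decay length of the wavefunction, 1/m

delta_d = 0.2e-9                             # gap change of roughly one atomic diameter
ratio = np.exp(2 * kappa * delta_d)          # factor by which the tunnelling current changes
print(f"kappa = {kappa * 1e-9:.1f} per nm; current changes by a factor of about {ratio:.0f}")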
7.3 Atomic force microscopy

The AFM [6,7] was developed to image insulating surfaces with atomic resolution. AFM is the most widely used member of the family of SPM techniques. Its versatility and the presence of a number of commercial instruments make it a method of choice for research laboratories, from academia to industry. Figure 7.2 is a block diagram of a standard AFM (it is in fact representative of most SPM types). Its essential components are as follows:

- z scanner;
- xy scanner;
- deflection detector, for example optical beam deflection method (see below), piezoresistive sensor [8] or Fabry-Pérot fibre interferometer [9];
- cantilever and probe.
The sample is scanned continuously in two axes (xy) underneath a force-sensing probe consisting of a tip that is attached to, or part of, a cantilever. A scanner is also attached to the z axis (height) and compensates for changes in sample height, or forces between the tip and the sample. The presence of attractive or repulsive forces between the tip and the sample will cause the cantilever to bend and this deflection can be monitored in a number of ways. The most common system to detect the bend of the cantilever is the optical beam deflection system, wherein a laser beam reflects off the back of the cantilever onto a photodiode detector. Such an optical beam deflection system is sensitive to sub-nanometre deflections of the cantilever [10].
FIGURE 7.2 Block diagram of a typical SPM.
7.3.1 Noise sources in atomic force microscopy

The limitations of the metrological capabilities of an AFM due to thermal noise are well documented [11]. However, not only thermal but all noise sources need to be systematically investigated and their particular contributions to the total amount of the noise quantified for metrological purposes [12]. Note that most of the discussions on noise in AFM are also of relevance to other forms of SPM. Noise sources can be either external, including:

- variations of temperature and air humidity;
- air motion (for example, air-conditioning, air circulation, draughts, exhaust heat);
- mechanical vibrations (for example, due to structural vibrations, pumps – see section 3.9);
- acoustic noise (for example, impact sound, ambient noise – see section 3.9.6);
or internal noise (intrinsic noise), including:

- high-voltage amplifiers;
- control loops;
- detection systems;
- digitization.
It is also well known that adjustments made by the user (for example, the control loop parameters, scan field size and speed) have a substantial influence on the measurement [13]. To reduce the total noise, the subcomponents of noise must be investigated. The total amount of the z axis noise can be determined by static or dynamic measurements [14] as described in the following sections.
7.3.1.1 Static noise determination

To determine the static noise of an SPM, the probe is placed in contact with the sample and the tip-sample distance is actively controlled, but the xy scan is disabled, i.e. the scan size is zero. The z axis signal is recorded and analysed (for example, by RMS determination or by calculation of the fast Fourier transform to identify dominant frequencies, which then help to trace the causes of the noise). An example of a noise signal for an AFM is shown in Figure 7.3; the RMS noise is 13 pm in this case (represented as an Rq parameter – see section 8.2.7.2).
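The analysis described above amounts to computing an RMS value and inspecting the amplitude spectrum of the recorded z signal. A minimal sketch follows; the synthetic data and sample rate are illustrative.

import numpy as np

def analyse_static_noise(z, sample_rate):
    """Return the RMS (Rq-equivalent) noise and the dominant non-DC frequency
    of a z signal recorded at zero scan size."""
    z = z - np.mean(z)
    rq = np.sqrt(np.mean(z**2))                       # RMS noise
    spectrum = np.abs(np.fft.rfft(z))                 # amplitude spectrum
    freqs = np.fft.rfftfreq(len(z), d=1.0 / sample_rate)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]     # strongest non-DC component
    return rq, dominant

# Synthetic example: 13 pm of white noise plus a 50 Hz interference line.
t = np.arange(0, 1.0, 1e-4)
z = 13e-12 * np.random.randn(t.size) + 5e-12 * np.sin(2 * np.pi * 50 * t)
print(analyse_static_noise(z, sample_rate=1e4))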
7.3.1.2 Dynamic noise determination

To determine the dynamic noise of an SPM the probe and sample are displaced in relation to one another (line or area scan). In this case, scan speed, scan range and measurement rate should be set to values typical of the subsequent measurements to be carried out. Usually the dynamic noise measurement is carried out at least twice with as small a time delay as possible. The calculation of the difference between the subsequent images is used to correct for surface topography and guidance errors inherent in the scanner.
7.3.1.3 Scanner xy noise determination

The accurate determination of xy noise is extremely difficult for AFMs as they have small xy position noise and thus require samples with surface roughness substantially smaller than the xy noise [12]. In individual cases, the noise of subcomponents can be determined. For an xy stage, for example, the xy position noise can be measured with a laser interferometer.
FIGURE 7.3 Noise results from an AFM. The upper image shows an example of a static noise investigation on a bare silicon wafer. The noise-equivalent roughness is Rq = 0.013 nm. For comparison, the lower image shows the wafer surface: scan size 1 µm by 1 µm, Rq = 0.081 nm.
For AFM, the following guidance deviations are usually observed:

- out-of-plane motions or scanner bow, i.e. any form of cross-talk of xy movements to the z axis;
- line skips in the z direction;
- distortions within the xy plane (shortening/elongation/rotation) due to orthogonality and/or angular deviations;
- orthogonality deviations between the z and the x or y axis.
Guidance deviations can be due to the design and/or be caused by deviations in the detection or control loop. Guidance deviations show a strong dependence on the selected scan field size and speed as well as on the working point in the xy plane and within the z range of the scanner. When the reproducibility is good, such systematic deviations can be quantified and corrected for by calibration.
7.3.2 Some common artefacts in AFM imaging

One of the reasons that AFMs have not yet been fully integrated into the production environment is the presence of numerous 'artefacts' in their images that are not due to the topography of the surface being measured. Usually a high level of expertise is required to identify these artefacts. The availability of reference substrates and materials will allow industry to use AFMs (and other SPMs) more widely.
7.3.2.1 Tip size and shape Many of the most common artefacts in AFM imaging are related to the finite size and shape of the tip. Commonly used AFM probes, such as those manufactured from silicon nitride and silicon, have pyramidal shaped tips [15]. These tips can have a radius of curvature as small as 1 nm, but often the radius is much larger. When imaging vertical features that are several tens of nanometres or more in height, the tip half angle limits the lateral resolution. When the tip moves over a sharp feature, the sides of the tip, rather than just the tip apex, contact the edges of the feature (see Figure 7.4). For features with vertical relief less than approximately 30 nm, it is the radius of curvature of the tip that limits resolution, resulting in tip broadening of the feature of interest. The resulting image is a non-linear combination of the sample shape and the tip
FIGURE 7.4 Schematic of the imaging mechanism of spherical particle imaging by AFM. The geometry of the AFM tip prevents ‘true’ imaging of the particle as the apex of the tip is not in contact with the particle all the time and the final image is a combination of the tip and particle shape. Accurate sizing of the nanoparticle can only be obtained from the height measurement.
shape. Various deconvolution (or its non-linear equivalent, erosion) methods, including commercial software packages, are available although such software must be used with caution [16–18]. There are also many physical artefacts that can be used to measure the shape of an AFM tip [19–21].
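In the ideal case of a rigid tip and sample, the imaging process sketched in Figure 7.4 behaves like a grey-scale morphological dilation of the surface by the (reflected) tip, and eroding the image with the same tip shape gives the usual first-order correction. The fragment below is a minimal sketch using SciPy; the spherical tip model and all dimensions are assumptions for illustration only, not a substitute for the measured tip shapes discussed above.

import numpy as np
from scipy import ndimage

def spherical_tip(radius_px, size_px):
    # Height map of a spherical tip apex: zero at the apex, negative elsewhere.
    x = np.arange(size_px) - size_px // 2
    xx, yy = np.meshgrid(x, x)
    r2 = radius_px**2 - xx**2 - yy**2
    return np.where(r2 > 0, np.sqrt(np.clip(r2, 0, None)) - radius_px, -float(radius_px))

tip = spherical_tip(radius_px=10, size_px=21)                  # assumed tip shape (pixels)
surface = np.zeros((128, 128)); surface[60:68, 60:68] = 20.0   # a sharp, tall feature

image = ndimage.grey_dilation(surface, structure=tip)          # broadened image the AFM records
reconstructed = ndimage.grey_erosion(image, structure=tip)     # first-order 'deconvolution'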
7.3.2.2 Contaminated tips An ideal AFM tip ends in a single point at its apex. However, manufacturing anomalies and/or contamination may lead to double or even multiple tip ends. When this occurs, the tips can map features on the sample surface more than once. For example, a double tip will result in a regular doubling of features. Such artefacts lead to what are commonly termed double- or multiple-tip images. Contaminants on a tip can also interact with a sample surface, leading to repeated patterns of the contaminants scattered across the surface. Cleaning of AFM tips and cantilevers is highly recommended [22].
7.3.2.3 Other common artefacts When the gain parameter of the control loop is too high, rippling artefacts can occur along the edges of features. These ripples tend to occur along the leading edge of a feature and will generally switch position when the scan direction is changed. Shadow artefacts generally occur along the trailing edge of a feature, when the feedback loop is unable to compensate for a rapid change in topography. Reducing the scan speed often minimises shadow artefacts. Sample damage or deformation during scanning is also a significant artefact, particularly for soft surfaces. Piezoelectric and/or thermal drift can distort images, particularly at the start of scanning. Measuring near to the centre of the z axis piezoelectric actuator’s range, and allowing the AFM and the sample to sit for a period to reach thermal equilibration can substantially improve drift-related problems.
7.3.3 Determining the coordinate system of an atomic force microscope There will always be some imperfections in the coordinate system for a given AFM. The calibration of the lateral scan axes is usually carried out using 1D or 2D lateral calibration artefacts. These artefacts are usually formed by equidistant structures with defined features whose mean spacing (the pitch) serves to calibrate the lateral axes. In Figure 7.5a a set of parallel regression lines along similar features of the structure is calculated. The mean distance between these lines is the pitch, px. In Figure 7.5b a set of parallel regression lines is calculated, each through a column of centres of similar features; the mean distance between these lines is the pitch, px in the x direction of the
FIGURE 7.5 Definition of the pitch of lateral artefacts: (a) 1D and (b) 2D.
grating. Similarly, another set of parallel regression lines is calculated, each through a row of centres of similar features; the mean distance between these lines is the pitch, py, in the y direction of the grating. The orthogonality of the grating is the angle formed by the px and py vectors. Local deviations are a measure of the non-linearity of the axes. In addition, the orthogonality deviation and the cross-talk of the lateral scan axes can be determined. For the 2D lateral artefacts it is important not to confuse the pitches, px and py, and the mean spacings, ax and ay, of the individual grating: px and ax, or py and ay are identical only for perfectly orthogonal gratings. Where high-quality gratings are used, which are almost orthogonal, the difference can often be ignored in the calibration of the axes. These differences, however, become significant when a 2D artefact is used to check the orthogonality of the scanner axes. In measurements on lateral artefacts, the selection of the scan range and the scan speed or rate is important, because the calibration factors are strongly influenced by dynamic non-linearity and image distortions [23]. This is also true for systems with active position control. In calibration, the scan speed must, therefore, be adjusted to reflect the later measurements that are to be made.
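As an illustration of the pitch evaluation, the lateral scale factor can be estimated by fitting the measured positions of successive grating features against their period index; the certified pitch divided by the fitted mean pitch gives the correction factor. All values and names below are illustrative.

import numpy as np

def lateral_scale_factor(centres_x_nm, certified_pitch_nm):
    """centres_x_nm: measured x positions of the same feature in successive
    periods of a 1D grating (uncalibrated scanner units, nominally nm).
    The slope of position against period index is the mean measured pitch."""
    centres = np.sort(np.asarray(centres_x_nm))
    index = np.arange(centres.size)
    measured_pitch = np.polyfit(index, centres, 1)[0]
    return certified_pitch_nm / measured_pitch          # e.g. the factor Cx

# Example: a certified 300 nm pitch read as ~297 nm by the scanner gives Cx of about 1.010.
print(lateral_scale_factor([0, 297, 594, 891, 1188], certified_pitch_nm=300.0))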
7.3.4 Traceability of atomic force microscopy

From the metrological point of view, AFMs are generally subdivided into the three following categories [12]:

- reference AFMs with integrated laser interferometers allowing direct traceability of the axis scales via the wavelength of the laser used to the SI unit of length (often referred to as metrological AFMs, see [24–27] for examples developed at NMIs);

- AFMs with position measurement using displacement transducers, for example, capacitive or inductive sensors, strain gauges or optical encoders. These sensors are calibrated by temporarily mounting laser interferometers to the device or by measuring high-quality calibration artefacts. Two types are to be distinguished here:
  - active position control AFMs that track to scheduled positions by means of a closed loop control system;
  - AFMs with position measurement but without closed loop for position control (open loop systems);

- AFMs in which the position is determined from the electrical voltage applied to the piezoelectric scanners and, if need be, corrected using a look-up table. Such AFMs need to be calibrated using a transfer artefact that has itself been calibrated using a metrological AFM (highest accuracy) or an AFM with position measurement. These instruments will, however, suffer from hysteresis in the scanner.
Another important aspect of traceability is the uncertainty of measurement (see section 2.8.3). It is very rare to see AFM measurements quoted with an associated uncertainty as many of the points discussed in section 6.11 apply to AFMs (and SPMs in general). Uncertainties are usually only quoted for the metrological AFMs or for simple artefacts such as step heights [28] or 1D gratings [29].
7.3.4.1 Calibration of AFMs

Calibration of AFMs is carried out using certified reference artefacts. Suitable sets of artefacts are available from various manufacturers (see www.nanoscale.de/standards.htm for a comprehensive list of artefacts). An alternative is to use laser interferometers to calibrate the axes, which offer a more direct route to traceability if stabilized lasers are used. The aim of the calibration is the determination of the axis scaling factors, Cx, Cy and Cz. Apart from these scaling factors, a total of twenty-one degrees of freedom can be identified for the motion process of the SPM, similar to a CMM operating in 3D (see section 9.2). A typical calibration for an AFM proceeds in the following manner [12]:

- the cross-talk of lateral scan movements to the z axis is investigated by measurements on a flatness artefact;
- the cross-talk of the lateral scan axes and the orthogonality deviation is determined using a 2D lateral artefact. This artefact is usually used to calibrate Cx and Cy;
- deviations from orthogonality can be determined using artefacts with orthogonal structures;
- orthogonality deviations involving the z axis are measured using 3D artefacts; calibration of the z axis factor, Cz, and its deviations can also be achieved using 3D artefacts.
In most cases, different artefacts are used for these calibration steps (see Table 7.1). Alternatively, 3D artefacts can be used – with suitable evaluation software – to calibrate all three factors, Cx, Cy and Cz, and the cross-talk between all three axes.
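Once the scale factors and the small cross-coupling terms have been determined, raw scanner coordinates can be corrected with a simple linear model. The sketch below is only a first-order illustration with made-up coefficients; real instruments often also require higher-order corrections (for example, for bow and hysteresis).

import numpy as np

Cx, Cy, Cz = 1.0102, 0.9987, 1.0045        # illustrative axis scaling factors
oxy = 2.1e-4                                # illustrative xy orthogonality deviation (rad)
zx, zy = 1.5e-5, -0.8e-5                    # illustrative cross-talk of x, y motion into z

M = np.array([[Cx,      Cx * oxy, 0.0],
              [0.0,     Cy,       0.0],
              [Cz * zx, Cz * zy,  Cz ]])    # first-order correction matrix

raw = np.array([5000.0, 3200.0, 12.0])      # raw (x, y, z) scanner readings in nm
corrected = M @ raw                          # corrected coordinates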
7.3.5 Force measurement with AFMs Force measurements with an AFM are carried out by monitoring the cantilever deflection as the sample approaches, makes contact with, and then retracts from the cantilever. However, the raw cantilever deflection measurement is a measure of the deflection of the cantilever at some point and not directly of the force. For a beam deflection system, for example, the cantilever deflection is recorded in volts. An additional problem is that the distance (or separation) between the tip and the sample is not measured directly [30]; the AFM measures the displacement of the piezoelectric scanner that supports the sample. A force curve graph of cantilever deflection (in volts) and corresponding piezoelectric scanner displacement (in metres) (see Figure 7.6a) must be interpreted to give a force–distance curve (i.e. force of interaction in units of force against separation between the sample and the cantilever in units of length (see Figure 7.6b)). With reference to Figure 7.6a, when the tip and sample are far apart (i) they exhibit no interaction (zero
Table 7.1 Overview of guidance deviations, standards to be used and calibration measurements [12]

Calibration | Artefact required | What is measured
Cross-talk of the lateral movements to the z axis | flatness artefact | out-of-plane movement of the xy scan system
Orthogonality deviation | 2D artefact | angle formed by the two axes, on orthogonal structures
Orthogonality deviation | 3D artefact | angle formed between the z axis and the lateral axes
Cx and Cy deviations (non-linearities) | 1D or 2D lateral artefact | pitch measurement, rotation, linearity
Cross-talk of the lateral axes | 2D lateral artefact | pitch measurement, rotation, linearity
Cz deviations (non-linearities) | step height artefact | step height measurement, linearity
FIGURE 7.6 Schematic of a force curve (a) and force–distance curve (b).
force). As the sample approaches the tip, inter-molecular forces between the tip and the sample cause the cantilever to deflect upwards (ii) due to repulsive forces (in this case between a charged substrate and tip, but attractive forces are commonly observed as well). Eventually the tip makes contact with the sample (iii) and their movement becomes coupled (region of constant compliance). The sample is then retracted from the tip (iv) until the tip/cantilever and sample return to their original positions, completing one cycle. Hysteresis, shown here, may occur upon retraction due to adhesion forces. Interfacial forces are measured on approach and adhesion forces are measured upon retraction; repulsive forces are positive and attractive forces are negative.
To obtain the force part of the force–distance curve, the photodiode values are converted to force using F = kcd, where F is the force, d is the cantilever deflection and kc is the cantilever spring constant. To convert the cantilever deflection measured by the photodiode in volts to metres, a displacement conversion factor (also called the optical lever sensitivity) is obtained from the region of the force curve where the sample is in contact with the cantilever. For an infinitely hard contact, every displacement of the piezoelectric scanner displaces the sample or the tip; the cantilever is pushed upwards, which is recorded as a voltage output on the photodiode. The slope of the force curve in the region where the cantilever is in contact with the sample defines the optical lever sensitivity. This part of the force curve is called the region of constant compliance or region of contact. It is important to note that using the constant compliance region of the force curve to convert photodiode response to deflection will overestimate the force of interaction if the cantilever is not the most compliant component of the system. This is often the case when soft, deformable substances such as polymers are used in force measurements (either as a sample or linked to the tip/cantilever). If a compliant substrate is used, other methods are needed to accurately convert the measured deflection of the cantilever into a force of interaction [31]. In this case the optical lever sensitivity is determined by pressing the tip/cantilever against a hard sample (for example, mica), before and after it is used on a soft sample. However, often this method does not work as the optical lever sensitivity is strongly dependent upon a number of factors. These factors include the position and shape of the laser spot and the difficulty in precisely aligning the laser spot on the same position on the cantilever from experiment to experiment. Also, the use of a hard sample cannot be applied if it is the tip/cantilever that supports the most compliant component of the system (for example, a molecule attached to the cantilever). Another method that relies on the 'photodiode shift voltage', a parameter that is very sensitive to the position and shape of the laser spot on the photodetector, can be used to convert volts of cantilever deflection into metres of deflection [32]. This method ensures that forces can be determined regardless of the compliance of the cantilever relative to any other component in the AFM, and also ensures the preservation of fragile macromolecules, which may be present on the sample or attached to the cantilever.
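The conversion just described, taking the optical lever sensitivity from the slope of the constant-compliance region and then applying F = kcd, can be sketched as follows; the array names, the spring constant value and the assumption of a hard, non-deforming contact are illustrative.

import numpy as np

def volts_to_force(z_piezo_nm, deflection_V, contact_mask, kc_N_per_m=0.05):
    """Convert raw photodiode readings (volts) to deflection (nm) and force (nN)
    using the slope of the contact region as the optical lever sensitivity."""
    slope = np.polyfit(z_piezo_nm[contact_mask], deflection_V[contact_mask], 1)[0]  # V/nm
    deflection_nm = deflection_V / slope        # optical lever sensitivity applied
    force_nN = kc_N_per_m * deflection_nm       # (N/m) multiplied by nm gives nN
    return deflection_nm, force_nN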
7.3.6 AFM cantilever calibration

AFMs are sensitive to very small forces in the piconewton range. In order to measure these forces accurately, the stiffness of the probe must be
determined. Stiffness calibration procedures rely on either imposing known forces on the probe, measuring the geometrical and material properties of the probe, or measuring its thermal fluctuations. The cantilever's spring constant is essentially dependent upon its composition and dimensions [33]. Nominal values listed by manufacturers may be incorrect by an order of magnitude and it is, therefore, necessary to determine the spring constant for each cantilever or for each batch of cantilevers from a wafer [34]. Parameters such as Young's modulus (related to composition), and cantilever length and thickness, can be used in theoretical equations to calculate a spring constant [35]. However, calculated values can be inaccurate due to the unknown material properties of the cantilever (the stoichiometry of silicon nitride, for example, can vary from Si3N4 to Si5N4 [36]). Furthermore, the measurement of cantilever thickness, which is a dominant parameter in theoretical equations, is extremely difficult. The spring constant depends on the cantilever thickness to the third power, so even a small uncertainty in the thickness measurement will result in large variations in the calculated spring constant [37]. An accurate, but often destructive, way to measure the spring constant is the added-mass method [38]. In this method beads of known mass are attached to the end of the cantilever. The additional mass lowers the cantilever's resonant frequency. A graph of added mass against the inverse square of the resonant frequency yields a straight line with a slope corresponding to the spring constant. A further method to determine the spring constant is the measurement of the force that an AFM imparts onto a surface by measuring the thermal fluctuations of the cantilever – in this method the cantilever is modelled as a simple harmonic oscillator (usually only in one degree of freedom) [39]. With knowledge of the potential energy of the system and applying the equipartition theorem, the spring constant of the cantilever can be calculated from the motion of the cantilever and its surrounding heat-bath temperature. The thermal method has three major problems [40]: (a) higher vibration modes cannot be ignored, (b) the method used to measure deflection usually measures the inclination rather than the displacement, and (c) only the first few modes are accessible due to the bandwidth limitations of the experiments. For directly traceable measurements of the force an AFM cantilever imparts on a surface, electrostatic balances can be used, but they are very costly and inconvenient (see section 10.3.3). Many of the devices discussed in section 10.3.4 can also be used to measure spring constant when used as passive springs.
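For the thermal method mentioned above, the equipartition theorem gives, in its simplest one-mode form, kc = kB·T/⟨d²⟩, where ⟨d²⟩ is the mean-square thermal deflection of the free cantilever. The sketch below ignores the higher modes and the inclination correction noted in the text and is intended only to show the arithmetic; the synthetic data are illustrative.

import numpy as np

kB = 1.380649e-23                               # Boltzmann constant, J/K

def spring_constant_thermal(deflection_m, temperature_K=295.0):
    d = deflection_m - np.mean(deflection_m)    # remove the static offset
    return kB * temperature_K / np.mean(d**2)   # kc = kB T / <d^2>

# Synthetic check: ~0.29 nm RMS thermal motion at room temperature gives kc of about 0.05 N/m.
d = np.sqrt(kB * 295.0 / 0.05) * np.random.randn(100_000)
print(spring_constant_thermal(d))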
7.3.7 Inter- and intra-molecular force measurement using AFM

As discussed previously, the AFM images a sample by sensing and responding to forces between a tip and the sample. Because the force resolution of the AFM is so fine (0.1 pN to 1 pN), it is a powerful tool for probing the inter- and intra-molecular forces between two substances. Researchers have taken advantage of this sensitivity to quantify fundamental forces between a sample and some substance linked to the AFM cantilever or tip [41]. The AFM has enabled some truly remarkable advances in the physical sciences due to the sensitivity and ranges of force it can measure. A few examples will be discussed here. A basic understanding of the forces between the AFM tip and the sample is essential for a proper use of the instrument and the analysis of the data. A variety of forces that come into play between the tip and the sample are summarized in Table 7.2. The discussion that follows will focus on contact-mode AFM, which is the most commonly used imaging mode. A recent review highlights the effect of surface forces on dimensional measurements [30]. The total force between the tip and the sample results from the sum of various attractive and repulsive forces, as described below. As a model, consider the Lennard-Jones potential, which describes the change in intermolecular potential energy (φ) that occurs as two particles, such as atoms or molecules (on the tip and sample), are brought closer together. The model gives

φ(r) = 4ε [(σ/r)^12 − (σ/r)^6]    (7.1)

where σ is approximately the atomic or molecular diameter (distance of closest approach), ε is the minimum value of the potential energy (the depth of the potential energy well), and r is the separation distance [42].
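Equation 7.1 is straightforward to evaluate numerically; the short sketch below uses illustrative values of σ and ε and confirms that the energy minimum lies at r = 2^(1/6)σ, the separation at which attraction and repulsion balance.

import numpy as np

def lennard_jones(r, sigma=0.34e-9, epsilon=1.7e-21):
    # Equation 7.1 with illustrative parameters (sigma in m, epsilon in J).
    return 4 * epsilon * ((sigma / r)**12 - (sigma / r)**6)

r = np.linspace(0.3e-9, 1.2e-9, 500)
phi = lennard_jones(r)
r_min = r[np.argmin(phi)]
print(f"energy minimum at r = {r_min * 1e9:.3f} nm (2^(1/6) * sigma = {0.34 * 2**(1/6):.3f} nm)")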
Table 7.2 Examples of surface forces commonly encountered in AFM measurement

Type of force | Dependence of energy on distance (d) | Energy (kJ·mol−1) | Range (nm)
Intra-molecular (ionic or covalent) | 1/d | 100s | <1
London dispersion | 1/d^6 | 1 to 3 | 0.5 to 5
H-bonding | 1/d^3 | 15 to 20 | 0.5 to 3
Dipoles | 1/d^3 | 5 to 10 | 0.5 to 3
Electrostatic | e^−d | 10 to 100 | 10s to 100s
Van der Waals | 1/d | 1 to 5 | 5 to 10
Solvation | ~e^−d | 1 to 10 | <5
Hydrophobic | ~e^−d | 1 to 5 | 10s to 100s
As the particles are brought closer together from relatively distant separations, the (σ/r)^6 term (i.e. the Van der Waals term) describes the slow change in attractive forces. As the particles are brought even closer together, the (σ/r)^12 term describes the strong repulsion that occurs when the electron clouds strongly repel one another. The Van der Waals interaction forces are long-range, relatively weak attractive forces. The origin of the Van der Waals forces is quantum mechanical in nature; they result from a variety of interactions, primarily induced dipole and quadrupole interactions. The Van der Waals forces are non-localized, meaning that they are spread out over many atoms. Van der Waals forces for a typical AFM have been estimated to be of the order of 10 nN to 20 nN [43]. The so-called atomic force (a result of the Pauli exclusion principle) is the primary repulsive force at close approach. The magnitude of this force is difficult to predict without a detailed understanding of surface structure. Several additional forces or interactions must be considered for an AFM tip and sample surface. Capillary adhesion is an important attractive force during imaging in air. The capillary force results from the formation of a meniscus made up of water and organic contaminants adsorbed on to the surfaces of the tip and the sample [36] (see Figure 7.7). The capillary force has been estimated to be of the order of 100 nN or greater. When the tip and the sample are completely immersed in liquid, a meniscus does not form and the capillary forces are absent. Some tips and samples may have hydrophobic properties, in which case hydrophobic interactions must also be taken into consideration. Water near hydrophilic surfaces is structured [34]. When the tip and the sample are brought into close contact during force microscopy in solution or humid air, repulsion arises as the structured water molecules on the surfaces of the tip and the sample are pushed away. In aqueous solutions, electrical double-layer forces, which may be either attractive or repulsive, are present
FIGURE 7.7 Schematic illustration of the strong capillary force that tends to drive the tip and sample together during imaging in air.
near the surfaces of the tip and the sample. These double-layer forces arise because surfaces in aqueous solution are generally charged. Lateral frictional forces must also be taken into account as the sample is scanned beneath the tip. At low forces, a linear relationship should hold between the lateral force and the force normal (vertical) to the surface with a proportionality constant equal to the coefficient of friction. This relationship is valid up to an approximately 30 nN repulsive force [44]. Frictional forces vary on an atomic scale, and with temperature, scan velocity, relative humidity, and tip and sample materials.
7.3.7.1 Tip functionalisation Inter- and intra-molecular forces affect a variety of phenomena, including membrane structure, molecular recognition and protein folding/unfolding. AFM is a powerful tool for probing these interactions because it can resolve forces that are several orders of magnitude smaller than the weakest chemical bond, and it has appropriate spatial resolution. In recent years, researchers have taken advantage of these attributes to create chemical force microscopy [45]. AFM probes (i.e. cantilevers or tips) are functionalised with chemical functional groups, biomolecules or living, fully functional cells to make them sensitive to specific interactions at the molecular to cellular level (see Table 7.3). There are many ways to functionalize an AFM tip or cantilever. All functionalization methods are constrained by one overriding principle – the bonds between the tip/cantilever and the functionalizing substance (i.e. the forces holding the substance of interest to the tip/cantilever) must be much stronger than those between the functionalizing substance and the sample (i.e. the forces that are actually measured by the AFM). Otherwise, the functionalizing substance would be ripped from the tip/cantilever during force measurements. Table 7.3
Various substances that have been linked to AFM tips or cantilevers
Substance linked to tip/cantilever | Linkage chemistry
Protein | adsorption, imide, glycol tether, antibody-antigen
Nucleic acid | thiol
Polysaccharide | adsorption
Glass or latex bead | epoxy
Living microbial cell | silane, poly-lysine
Dead microbial cell | glutaraldehyde
Eukaryotic cell | epoxy, adsorption
Organic monolayer | self-assembling monolayer, silane
Nanotube | epoxy
Single, colloidal-size beads, a few micrometres in diameter, can be routinely attached to a cantilever using an epoxy resin [46]. Such beads may be simple latex or silica spheres, or more complex designer beads imprinted with biomolecular recognition sites. Care must be taken to select an epoxy that is inert in the aqueous solution and that will not melt under the laser of the optical lever detection system [47]. Simple carboxylic, methyl, hydroxyl or amine functional groups can be formed by self-assembling monolayers on gold-coated tips [45] or by creating a silane monolayer directly on the tip. Organosilane modification of a tip is slightly more robust because it avoids the use of gold, which forms a relatively weak bond with the underlying silicon or silicon nitride surface of the tip in the case of self-assembling monolayers. Carbon nanotubes (CNTs) that terminate in select functional groups can also be attached to cantilever tips [48]. The high aspect ratio and mechanical strength of CNTs create functionalized cantilevers with unprecedented strength and resolution capabilities. Direct growth of CNTs onto cantilevers by methods such as chemical vapour deposition [49] will probably make this method more accessible to a large number of researchers. Biomolecules such as polymers, proteins and nucleic acids have been linked to AFM tips or deposited directly on the cantilever [50]. One of the simplest attachment techniques is by non-specific adsorption between a protein, for example, and silicon nitride. The adsorbed protein can then serve as a receptor for another protein or ligand. Virtually any biomolecule can be linked to a cantilever either directly or by means of a bridging molecule. Thiol groups on proteins or nucleic acids are also useful because a covalent bond can be formed between sulfhydryl groups on the biomolecule and gold coatings on a tip. Such attachment protocols have been very useful; however, there are some disadvantages. The linkage procedure may disrupt the native conformation or function of the biomolecule, for example, if the attachment procedure disrupts a catalytic site. It is well known that a protein attached to a solid substrate (a cantilever or tip) may exhibit a significantly different conformation, function and/or activity relative to its native state within a membrane or dissolved in solution. Therefore, care must be taken to design control experiments that test the specificity of a particular biomolecule as it occurs in its natural state.
7.3.8 Tip–sample distance measurement

To obtain the distance or separation part of the force–distance curve, a point of contact (i.e. zero separation) must be defined and the recorded piezoelectric scanner position (i.e. displacement) must be corrected by the measured
deflection of the cantilever. Simply adding or subtracting the deflection of the cantilever to the movement of the piezoelectric scanner determines the displacement. For example, if the sample attached to the piezoelectric scanner moves 10 nm towards the cantilever, and the cantilever is repelled 2 nm due to repulsive forces, then the actual cantilever–sample separation changes by only 8 nm. The origin of the distance axis, the point of contact, is chosen as the beginning of the region of constant compliance, i.e. the point on the force curve where cantilever deflection becomes a linear function of piezoelectric scanner displacement (see Figure 7.6). Just as it was difficult to convert photodiode voltage to displacement units for soft, deformable materials, it is not always easy to select the point of contact because there is no independent means of determining cantilever–sample separation. For deformable samples, the cantilever indents into the sample such that the region of constant compliance may be non-linear and the beginning point cannot be easily defined. Recent research has developed an AFM with independent measurement of the piezoelectric scanner and the cantilever displacements [51].
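The correction described above is simple arithmetic once a point of contact has been chosen; the sketch below reproduces the numerical example from the text. Sign conventions differ between instruments, so the signs used here are illustrative.

def gap_change(approach_nm, deflection_nm):
    """approach_nm: distance the piezo has moved the sample towards the tip;
    deflection_nm: cantilever deflection away from the sample (repulsion positive).
    Returns the change in tip-sample separation (negative means the gap closes)."""
    return deflection_nm - approach_nm

print(gap_change(10.0, 2.0))   # -8.0: a 10 nm approach with 2 nm of deflection closes the gap by only 8 nm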
7.3.9 Challenges and artefacts in AFM force measurements There are a number of artefacts that have been identified in force curves. Many of these artefacts are a result of interference by the laser, viscosity effects of the solution or elastic properties of soft samples. When the sample and the cantilever are relatively remote from each other, such that there is no interaction, the force curve data should be a horizontal line (i.e. the region of non-contact; see Figure 7.6). However, the laser has a finite spot size that may be larger than the size of the cantilever such that the laser beam reflects off the sample as well as the cantilever. This is particularly troublesome for reflective substrates, often resulting in optical interference, which manifests itself as a sinusoidal oscillation or as a slight slope in the non-contact region of the force curve [52]. This affects the way in which one defines attractive or repulsive forces. A simple solution is to realign the laser on the cantilever such that the beam does not impinge upon the underlying sample. Alternatively, the oscillation artefact may be removed from the force curve with knowledge of the wavelength of the laser. This optical problem has been largely solved in commercial AFMs by using superluminescent diodes, which possess high optical power and low coherence length. A further artefact is the hysteretic behaviour between the approach and retraction curves in the non-contact area. The approach and retraction curves often do not overlap in high-viscosity media due to fluid dynamic effects [53]. Decreasing the rate at which the piezoelectric scanner translates
the samples towards and away from the cantilever can help to minimize hysteresis by decreasing the drag caused by the fluid. Another frequently observed artefact in the force curve is caused by the approach and retraction curves not overlapping in the region of contact but rather being offset laterally. Such artefacts make it difficult to define the point of contact, which is necessary to obtain separation values between the sample and the tip. Such hysteresis artefacts are due to frictional effects as the tip (which is mounted in the AFM at an angle of typically 10° to 15° relative to the sample) slides on the sample surface. This hysteresis is dependent upon the scan rate and reaches a minimum below which friction is dominated by stick-slip effects and above which friction is dominated by shear forces. This artefact may be corrected by mounting the sample perpendicular to the cantilever, thereby eliminating lateral movement of the cantilever on the sample. Viscoelastic properties of soft samples also make it difficult to determine the point of contact and to measure accurately the forces of adhesion. When the cantilever makes contact with a soft sample, the cantilever may indent the sample such that the region of contact is non-linear. It is then difficult to determine the point at which contact begins. The rate at which the sample approaches or retracts from the tip also affects the adhesive force measured on soft samples. This is because the tip and sample are weakly joined over a large contact area that does not decouple fast enough as the tip is withdrawn at very high scan rates. Thus, the membrane deforms upward as the tip withdraws, causing an increased force of adhesion. Contact between a soft sample and the tip also affects the measured adhesion force in other ways. As a tip is driven into a soft sample the contact area increases as the sample deforms around the tip. Hence, increasing the contact force between the tip and sample increases the contact area, which in turn increases the number of interactions between the tip and sample. Therefore, increasing contact force results in an increased adhesive force between the tip and sample. To compare measured adhesion values, the contact force should be selected such that it does not vary from experiment to experiment. Additionally, slow scan rates should be used to allow the tip and sample to separate during retraction.
7.4 Scanning probe microscopy of nanoparticles

Accurate measurement of nanoparticles using AFM requires intermittent or non-contact mode imaging. This reduces the lateral forces, allowing imaging of the particle. For contact mode imaging, the high lateral force will displace the weakly attached particles except under certain conditions. A closed-loop xy scanning system is also recommended, to minimise the drift of the
piezoelectric scanner in the x and y directions. For very small particles it is also important to have enough resolution for the z scanner, i.e. the dynamic range of the z scanner should be reduced as much as possible, usually by using a low-voltage mode of operation. SPM measurement of nanoparticles differs from that of electron microscopy in that it produces images in three dimensions, via the deflection of a sharp probe and, unlike electron microscopy, simple lateral measurement is not practicable due to large tip–nanoparticle distortion. The only practical solution to this is to measure the heights of the nanoparticle rather than the lateral dimensions.
7.5 Electron microscopy

7.5.1 Scanning electron microscopy

The scanning electron microscope (SEM) uses a very fine beam of electrons, which is made to scan the specimen under test as a raster of parallel contiguous lines (see [54,55] for thorough descriptions of electron microscopy). Upon hitting the specimen, electrons will be reflected (backscattered electrons) or generated by interaction of the primary electrons with the sample (secondary electrons). The specimen is usually a solid object and the number of secondary electrons emitted by the surface will depend upon its topography or nature. These are collected, amplified and analysed before modulating the beam of a cathode ray tube scanned in sympathy with the scanning beam. The image resembles that seen through an optical lens but at a much higher resolution. The dimensions of the probe beam determine the ultimate resolving power of the instrument. This is controlled in turn by the diffraction at the final aperture. The ultimate probe size in an SEM is limited by diffraction, chromatic aberration and the size of the source. Typical SEMs can achieve image magnifications of 400 000× and have a resolution of around 1 nm with a field emission system and an in-lens detector. The magnification of the system is determined by the relative sizes of the scan on the recording camera and of the probe on the specimen surface. The magnification is, therefore, dependent upon the excitation of the scan coils, as modified by any residual magnetic or stray fields. It also depends sensitively on the working distance between the lens and the specimen. It is not easy to measure the working distance physically but it can be reproduced with sufficient accuracy by measuring the current required to focus the probe on the specimen surface.
The camera itself may not have a completely linear scan, so distortions of the magnification can occur. In considering the fidelity of the image, it is assumed that the specimen itself does not influence the linear response of the beam; in other words that charging effects on the specimen surface are negligible. If calibration measurements of any accuracy are to be made, any metal coating employed to make the surface conducting should be very thin compared to the structure to be measured, and is best avoided altogether if possible. Since charging is much more serious for low-energy secondary electrons than for the higher-energy backscattered electrons, it is preferable to use the backscattered signal for any calibration work, if the instrument is equipped to operate in this mode. For similar reasons, if the specimen is prone to charging, the use of a low-voltage primary beam rather than an applied conductive coating is much to be preferred, but the resolution is lost again. The indicated magnification shown on the instrument is a useful guide but should not be relied upon for accuracy better than 10 %. In all forms of microscopy, image degradation can occur from a number of factors. These include poor sample preparation, flare, astigmatism, aberrations, type and intensity of illumination and the numerical apertures of the condenser and objective lens [56]. Electron backscattered diffraction (EBSD) provides crystallographic orientation information about the point where the electron beam strikes the surface [57]. It has a spatial resolution down to 10 nm to 20 nm depending on the electron beam conditions that are used. Because of the unique identification of crystal orientation with grain structure, EBSD can be used to measure the size of grains in polycrystalline materials, and can also be used to measure the size of crystalline nanoparticles when these are sectioned. As EBSD relies on the regularity of the crystal structure, it can also be used to estimate the degree of deformation in the surface layers of a material.
7.5.1.1 Choice of calibration specimen for scanning electron microscopy

Since there are various potential sources of image distortion in an SEM, it would be convenient to have a calibration artefact that yields measurements over the whole extent of the screen and in two orthogonal directions. Thus a cross-ruled diffraction grating or a square mesh of etched or electron-beam-written lines on a silicon substrate is an ideal specimen. The wide range of magnification covered by an SEM requires that meshes of different dimensions are available to cover the full magnification range. There are many gratings and meshes that are commercially available.
At progressively higher magnifications, copper foil grids, cross-ruled silicon substrates and metal replica diffraction gratings are available [58]. All the artefacts should be mounted flat on a specimen stub suitable for the SEM in use, and the stage tilt should be set at zero [59]. The zero tilt condition can be checked by traversing the artefact in x and y directions to check that there is no change in beam focus and, therefore, no residual tilt. The beam tilt control should be set at zero. It is important that the working distance is not changed during the examination of a specimen or when changing to a calibration specimen. The indications of working distance given on the instrument are not sensitive enough to detect changes which could affect measurement accuracy in quantitative work. It is better to reset the exchange specimen stub against a physical reference surface which has already been matched to the stub carrying the specimen [59]. The ideal case is to be able to have a magnification standard on the same specimen stub as the sample to be measured, since there is then no ambiguity in the operating conditions (working distance, accelerating voltage, etc.) [60]. For nanoparticles, this can be ensured by using a grid, as suggested above, or even more integrally by dispersing a preparation of polystyrene latex spheres on the specimen so that each field of view contains some of the calibration spheres. It has to be emphasised that, although the various ‘uniform’ latex suspensions do indeed have a well-defined mean size, the deviation from the mean allows a significant number of particles of different size to be present. It is essential, therefore, to include a statistically significant number of latex spheres in the measurement if the calibration is to be valid.
7.5.2 Transmission electron microscopy The transmission electron microscope (TEM) operates on the same basic principle as a light microscope but uses electrons instead of light. The active components that compose the TEM are arranged in a column, within a vacuum chamber. An electron gun at the top of the microscope emits electrons that travel down through the vacuum towards the specimen stage. Electromagnetic electron lenses focus the electrons into a narrow beam and direct it onto the test specimen. The majority of the electrons in the beam travel through the specimen. However, depending on the density of the material present, some of the electrons in the beam are scattered and are removed from the beam. At the base of the microscope the unscattered electrons hit a fluorescent viewing screen and produce a shadow image of the test specimen with its different parts displayed in varied darkness according
to their density. This image can be viewed directly by the operator or photographed with a camera. The limiting resolution of the modern TEM is of the order of 0.05 nm with aberration-corrected instruments. The resolution of a TEM is normally defined as the performance obtainable with an ‘ideal’ specimen, i.e. one thin enough to avoid imposing a further limit on the performance due to chromatic effects. The energy loss suffered by electrons in transit through a specimen will normally be large compared to the energy spread in the electron beam due to thermal emission velocities, and large also compared to the instability of the high-voltage supply to the gun and the current supplies to the electron lenses. In general the specimen itself causes loss of definition in the image due to chromatic aberration of the electrons, which have lost energy in transit through it. A ‘thick’ specimen could easily reduce the attainable resolution to 1.5 nm to 2 nm [59]. For nanoparticles, this condition could occur if a particle preparation is very dense; a good preparation of a well-dispersed particle array on a thin support film would not in general cause a serious loss in resolution.
7.5.3 Traceability and calibration of transmission electron microscopes As for SEM, the calibration factor for a TEM is the ratio of the measured dimension in the image plane and the sample dimension in the object plane. Calibration should include the whole system. This means that a calibration artefact of known size in the object plane is related to a calibration artefact of known size in the image plane. For example, the circles on an eyepiece graticule, the ruler used to measure photographs and the number of detected pixels in the image analyser should all be related to an artefact of known size in the object plane. The final image magnification of a TEM is made up of the magnifications of all the electron lenses, and it is not feasible to measure the individual stages of magnification. Since the lenses are electromagnetic, the lens strength is dependent not only on the excitation currents, but also on the previous magnetic history of each circuit. It is essential, therefore, to cycle each lens in a reproducible manner if consistent results are to be obtained. Suitable circuitry is now included in many instruments; otherwise, each lens current should be increased to its maximum value before being returned to the operating value in order to ensure that the magnetic circuits are standardized. This should be done before each image is recorded. The indicated
magnification shown on the instrument is a useful guide but should not be relied upon for an accuracy better than 10 %.
7.5.3.1 Choice of calibration specimen

It is possible to calibrate the lower part of the magnification range using a specimen which has been calibrated optically, although this loses accuracy as the resolution limit of optical instruments is approached. At the top end of the scale, it is possible to image crystal planes in suitable single crystals of known orientation. These spacings are known to a high degree of accuracy by x-ray measurements. Unfortunately, there is at present no easy way of checking the accuracy of calibration in the centre of the magnification range. The specimen most often used is a plastic/carbon replica of a cross-ruled diffraction grating. While it is believed that these may usually be accurate to about 2 %, it has not so far proved possible to certify them.
7.5.3.2 Linear calibration Linear calibration is the measurement of the physical distances in the object plane represented by a distance in the image plane. The image plane is the digital image inside the computer and so the calibration is expressed in length per pixel or pixels per unit length. The procedure for the linear calibration of image analysers varies from machine to machine but usually involves indicating on the screen both ends of an imaged artefact of known dimensions in the object plane [61]. This calibration artefact may be a grid, grating, micrometer, ruler or other scale appropriate to the viewing system, and should be arranged to fill the field of view, as far as possible. The calibration should be measured both parallel to and orthogonal to the scan direction. Some image analysers can be calibrated in both directions and use both these values. Linear calibration can be altered by such things as drift in a tube camera, the sagging of zoom lenses and the refocusing of the microscope.
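In its simplest form the linear calibration is the certified dimension of the imaged artefact divided by the number of pixels it spans, evaluated separately parallel and orthogonal to the scan direction. The numbers below are purely illustrative.

def length_per_pixel(certified_length_nm, pixels_spanned):
    return certified_length_nm / pixels_spanned          # calibration factor in nm per pixel

cal_x = length_per_pixel(5 * 462.9, 2048)   # e.g. five periods of a nominal 462.9 nm grating replica
cal_y = length_per_pixel(5 * 462.9, 2041)   # repeated orthogonal to the scan direction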
7.5.3.3 Localized calibration The linear calibration may vary over the field of view. There may be image distortions in the optics or inadequately compensated distortions from a tilted target in the microscope. These distortions can be seen by comparing an image of a square grid with an overlaid software generated pattern. Tube cameras are a source of localized distortion, especially at the edge of the screen near the start of the scan lines. The size of these distortions can be determined by measuring a graticule with an array of spots all of the same size that fill the screen, or by measuring one spot or reference particle at
different points in the field of view. Some image analysers allow localized calibrations to be made [62].
7.5.3.4 Reference graticule Many of the calibrations can be performed easily with a calibrated graticule containing arrays of calibrated spots and a square grid. Such a graticule is the reference stage graticule for image analyser calibration. Periodic patterns such as grating replicas, super-lattice structures of semiconductors, crystal lattice images of carbon, gold or silicon can be used as reference materials.
7.5.4 Electron microscopy of nanoparticles
Electron microscopy produces two-dimensional images. The contrast mechanism is based on the scattering of electrons. Figure 7.8a shows a typical TEM image of gold nanoparticles. Many microscopes still record the images on photographic film. In this case, the images have to be scanned into a computer file to be analysed. However, CCD cameras are becoming increasingly popular. In this case the image is transferred directly onto a computer file. Traditionally, size measurements from electron microscope images are achieved by applying a threshold intensity uniformly across the image. Image intensities above (or below) this level are taken to correspond to areas of the particle being measured. This is demonstrated in Figure 7.8b, where a threshold was applied to identify the particles. Simple analysis allows the area and radius of the particle to be determined. In the case of non-spherical particles the diameter is determined by the fitting of an ellipsoid. A histogram of the sizes can then be easily determined (Figure 7.8c). Although the threshold method described above is a simple, well-defined and recognised method, it does suffer from some significant drawbacks. The first is setting the threshold level, which is difficult for poorly contrasting particles, such as small polymer particles or inhomogeneous particles (see Figure 7.9). The second, more important, drawback occurs when analysing agglomerated particles. With no significant intensity difference between the particles, a simple threshold is insufficient to distinguish between the particles and hence accurately determine size. It is usually recommended to use a watershed method.
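A minimal sketch of the uniform-threshold sizing method is given below, assuming Python with NumPy and SciPy; the function name, threshold value and the synthetic image standing in for a scanned micrograph are all illustrative. Connected regions below the threshold are labelled and each region's area is converted to an equivalent-circle diameter. Agglomerated particles are not separated here; that would require a watershed or similar segmentation step, as noted above.

```python
import numpy as np
from scipy import ndimage

def particle_diameters(image, threshold, nm_per_pixel, min_area_px=20):
    """Equivalent-circle diameters (nm) of dark particles in a TEM image."""
    mask = image < threshold                      # particles imaged darker than background
    labels, n = ndimage.label(mask)               # connected-component labelling
    areas = np.bincount(labels.ravel())[1:]       # pixel count per labelled region
    areas = areas[areas >= min_area_px]           # discard noise specks
    area_nm2 = areas * nm_per_pixel**2
    return 2.0 * np.sqrt(area_nm2 / np.pi)        # equivalent-circle diameter

# Synthetic test image: bright background with three dark circular 'particles'
rng = np.random.default_rng(1)
img = np.full((512, 512), 200.0) + rng.normal(0, 5, (512, 512))
yy, xx = np.mgrid[:512, :512]
for cx, cy in [(100, 120), (300, 380), (400, 150)]:
    img[(xx - cx)**2 + (yy - cy)**2 < 15**2] = 60.0
print(particle_diameters(img, threshold=130, nm_per_pixel=1.0))
```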
FIGURE 7.8 (a) TEM image of nominal 30 nm diameter gold nanoparticles; (b) using a threshold to identify the individual particles; (c) histogram of the measured diameters.

7.6 Other particle beam microscopy techniques
In order to get high-resolution images from any scanning beam microscope one must be able to produce a sufficiently small probe, have
a small interaction volume in the substrate and have an abundance of information-rich particles to create the image. A typical SEM meets all of these requirements, but other particles can be used as well. Recently, a focused ion beam (FIB) [63] has become more and more popular. The concept of FIB is similar to that of SEM; however, the electrons are replaced by ions of much larger masses. As a consequence they can in
general induce damage to a specimen by sputtering. However, for each incoming ion two to eight secondary electrons are generated. This abundance of secondary electrons allows for very-high-contrast imaging. In addition to secondary electrons, backscattered ions are also available for imaging. These ions are not as abundant as secondary electrons, but do provide unique contrast mechanisms that allow quantitative discrimination between materials with sub-micrometre spatial resolution. An electron beam has a relatively large excitation volume in the substrate. This limits the resolution of an SEM regardless of the probe size. A helium ion beam does not suffer from this effect, as the excitation volume is much smaller than that of the SEM. SEMs are typically run at or near their secondary electron unity crossover point to minimize charging of the sample. This implies that for each incoming electron, one secondary electron is made available for imaging. The situation with the helium ion beam is much more favourable. The helium ion microscope [64] has several unique properties that, when combined, allow for higher-resolution imaging than that available today with conventional SEMs. In addition to better resolution, the helium ion microscope and the FIB also provide unique contrast mechanisms in both secondary electron mode and backscattered modes that enable material discrimination and identification.

FIGURE 7.9 TEM image of 150-nm-diameter latex particles. This image highlights the drawbacks of size measurement using TEM or SEM. The first is that a white ‘halo’ surrounds the particles: should the halo area be included in the size measurement? If so, there will be a difficulty in determining the threshold level. The second is that the particles are aggregated, again making sizing difficult.
7.7 References
[1] Binnig G, Rohrer H, Gerber Ch, Weibel E 1982 Surface studies by scanning tunneling microscopy Phys. Rev. Lett. 49 57–61
[2] Meyer E, Hug H J, Bennewitz R 2003 Scanning probe microscopy: the lab on a tip (Springer)
[3] Weisendanger R 1994 Scanning probe microscopy and spectroscopy: methods and applications (Cambridge University Press)
[4] Courion D 2003 Near-field microscopy and near-field optics (Imperial College Press)
[5] Binnig G, Rohrer H 1987 Scanning tunneling microscopy - from birth to adolescence Rev. Mod. Phys. 59 615–625
[6] Binnig G, Quate C F, Gerber Ch 1986 Atomic force microscopy Phys. Rev. Lett. 56 930–933
[7] Magonov S 2008 Atomic force microscopy (John Wiley & Sons)
[8] Thayson J, Boisen A, Hansen O, Bouwstra S 2000 Atomic force microscopy probe with piezoresistive read-out and a highly sensitive Wheatstone bridge arrangement Sens. Act. A: Phys. 83 47–53
[9] Howard L, Stone J, Fu J 2001 Real-time displacement measurement with a Fabry-Pérot cavity and a diode laser Precision Engineering 25 321–335
[10] Meyer G, Amer N M 1988 Novel optical approach to atomic force microscopy Appl. Phys. Lett. 53 1045–1047
[11] Gittes F, Schmidt C F 1998 Thermal noise limitations on micromechanical experiments Euro. Biophys. J. 27 75–81
[12] Wilkening G, Koenders L 2005 Nanoscale calibration standards and methods (Wiley-VCH)
[13] Mechler Á, Kopniczsky J, Kocavecz J, Hoel A, Granqvist C-G, Heszler P 2005 Anomalies in nanostructure size measurements by AFM Phys. Rev. B 72 125407
[14] Dai G, Koenders L, Pohlenz F, Dziomba T, Danzebrink H-U 2005 Accurate and traceable calibration of one-dimensional gratings Meas. Sci. Technol. 16 1241–1249
[15] Albrecht T R, Alkamine S, Carver T E, Quate C F 1990 Microfabrication of cantilever styli for the atomic force microscope J. Vac. Sci. Technol. A 8 3386–3396
[16] Keller D 1993 Reconstruction of STM and AFM images distorted by finite sized tips Surf. Sci. 253 353–364
[17] Villarubia J S 1994 Morphological estimation of tip geometry for scanned probe microscopy Surf. Sci. 321 287–300
[18] Bakucz P, Yacoot A, Dziomba T, Koenders L, Krüger-Sehm R 2008 Neural network approximation of tip-abrasion effects in AFM imaging Meas. Sci. Technol. 19 065101
[19] van Cleef M, Holt S A, Watson G S, Myhra S 2003 Polystyrene spheres on mica substrate: AFM calibration, tip parameters and scan artefacts J. Microscopy 181 2–9
[20] Hübner U, Morgenroth W, Meyer H G, Sultzbach T, Brendel B, Mirandé W 2003 Downwards to metrology in nanoscale: determination of the AFM tip shape with well-known sharp edge calibration structures Appl. Phys. A: Mater. Sci. Process. 76 913–917
[21] Seah M P, Spencer S J, Cumpson P J, Johnstone J E 2000 Sputter-induced cone and filament formation on InP and AFM tip shape determination Surf. Interf. Anal. 29 782–790
[22] Lo Y-S, Huefner N D, Chan W S, Dryden P, Hagenhoff P, Beebe T P 1999 Organic and inorganic contamination on commercial AFM cantilevers Langmuir 15 6522–6526
[23] Jorgensen J F, Jensen C P, Garnaes J 1998 Lateral metrology using scanning probe microscopes, 2D pitch standards and image processing Appl. Phys. A: Mater. Sci. Process. 66 S847–S852
[24] Haycocks J A, Jackson K 2005 Traceable calibration of transfer standards for scanning probe microscopy Precision Engineering 29 168–175
[25] Meli F, Thalmann R 1998 Long-range AFM profiler used for accurate pitch measurements Meas. Sci. Technol. 9 1087–1092
[26] Dixson R G, Koening R G, Tsai V W, Fu J, Vorburger T V 1999 Dimensional metrology with the NIST calibrated atomic force microscope Proc. SPIE 3677 20–34
[27] Gonda S, Doi T, Karusawa T, Tanimuar Y, Hisata N, Yamagishi T, Fujimoto H, Yukawa H 1999 Real-time, interferometrically measuring atomic force microscope for direct calibration of standards Rev. Sci. Instrum. 70 3362–3368
[28] Misumi I, Gonda S, Kurosawa T, Azuma Y, Fujimoto T, Kojima I, Sakurai T, Ohmi T, Takamasu K 2006 Reliability of parameters of associated base straight line in step height samples: uncertainty evaluation in step height measurements using nanometrological AFM Precision Engineering 30 13–22
[29] Misumi I, Gonda S, Karusawa T, Takamasu K 2003 Uncertainty in pitch measurements of one-dimensional grating standards using a nanometrological atomic force microscope Meas. Sci. Technol. 14 463–471
[30] Yacoot A, Koenders L 2008 Aspects of scanning force microscope probes and their effects on dimensional measurement J. Phys. D: Appl. Phys. 41 103001
[31] Beaulieu L Y, Godin M, Laroche O, Tabard-Cossa V, Grütter P 2006 Calibrating laser beam deflection systems for use in atomic force microscopes and cantilever sensors Appl. Phys. Lett. 88 083108
[32] D’Costa N P, Hoh J H 1995 Calibration of optical lever sensitivity for atomic force microscopy Rev. Sci. Instrum. 66 5096–5097
[33] Mendels D-A, Lowe M, Cuenat A, Cain M G, Vallejo E, Ellis D, Mendels F 2006 Dynamic properties of AFM cantilevers and the calibration of their spring constants J. Micromech. Microeng. 16 1720–1733
[34] Senden T, Ducker W 1994 Experimental determination of spring constants in atomic force microscopy Langmuir 10 1003–1004
[35] Sader J E, Chon J W M, Mulvaney P 1999 Calibration of rectangular atomic force microscope cantilevers Rev. Sci. Instrum. 70 3967–3969
[36] Weisenhorn A L, Maivald P, Butt H J, Hamsma P K 1992 Measuring adhesion, attraction, and repulsion between surfaces in liquids with an atomic force microscope Phys. Rev. B 45 11226–11232
[37] Clifford C A, Seah M P 2005 The determination of atomic force microscope cantilever spring constants via dimensional methods for nanomechanical analysis Nanotechnology 16 1666–1680
[38] Cleveland J P, Manne S, Bocek D, Hamsma P K 1993 A nondestructive method for determining the spring constant of cantilevers for scanning force microscopy Rev. Sci. Instrum. 64 403–405
[39] Hutter J L, Bechhoefer J 1993 Calibration of atomic-force microscope tips Rev. Sci. Instrum. 64 1868–1873
[40] Matei G A, Thoreson E J, Pratt J R, Newell D B 2006 Precision and accuracy of thermal calibration of atomic force microscopy cantilevers Rev. Sci. Instrum. 77 083703
[41] Cappella B, Dietler G 1999 Force-distance curves by atomic force microscopy Surf. Sci. Rep. 34 1–104
[42] Israelachvili J 1992 Intermolecular and surface forces (Academic Press: London)
[43] Goodman F O, Garcia N 1991 Roles of the attractive and repulsive forces in atomic-force microscopy Phys. Rev. B 43 4728–4731
[44] Warmack R J, Zheng X-Y, Thurdat T, Allison D P 1994 Friction effects in the deflection of atomic force microscope cantilevers Rev. Sci. Instrum. 65 394–399
[45] Frisbie C D, Rozsnyai L F, Noy A, Wrighton M S, Lieber C M 1994 Functional group imaging by chemical force microscopy Science 265 2071–2074
[46] Ducker W A, Senden T J, Pashley R M 1991 Direct measurement of colloidal forces using an atomic force microscope Nature 353 239–241
[47] Pincet F, Perez E, Wolfe J 1995 Does glue contaminate the surface forces apparatus? Langmuir 11 373–374
[48] Wong S S, Joselewich E, Wooley A T, Cheung C L, Lieber C M 1998 Covalently functionalized nanotubes as nanometre-sized probes in chemistry and biology Nature 394 52–55
[49] Hafner J H, Cheung C L, Lieber C M 1999 Direct growth of single-walled carbon nanotube scanning probe microscopy tips J. Am. Chem. Soc. 121 9750–9751
[50] Florin E L, Moy V T, Gaub H E 1994 Adhesion forces between individual ligand-receptor pairs Science 264 415–417
[51] Yacoot A, Koenders L, Wolff H 2007 An atomic force microscope for the study of the effects of tip-sample interactions on dimensional metrology Meas. Sci. Technol. 18 350–359
[52] Jaschke M, Butt H-J 1995 Height calibration of optical lever atomic force microscopes by simple laser interferometry Rev. Sci. Instrum. 66 1258–1259
[53] Hoh J H, Engel A 1993 Friction effects on force measurements with an atomic force microscope Langmuir 9 3310–3312
[54] Egerton R F 2008 Physical principles of electron microscopy: an introduction to TEM, SEM and AEM (Springer) 2nd edition
[55] Goodhew P J, Humpheys F J, Beanland R 2000 Electron microscopy and analysis (Taylor & Francis)
[56] Schmidt F, Schmidt K G, Fissan H 1990 Nanoparticles J. Aerosol Sci. 21 S535–S538
[57] Mingard K P, Roebuck B, Bennett E G, Thomas M, Wynne B P, Palmiere E J 2007 Grain size measurement by EBSD in complex hot deformed metal alloy microstructures J. Microscopy 227 298–308
[58] Geller J 2003 Magnification standards for SEM, light or scanning probe microscopes Micro. Anal. 9 712–713
[59] Allen T 1993 Particle size measurement (Chapman and Hall) 4th edition
[60] ISO 16700: 2004 Microbeam analysis - scanning electron microscopy - guidelines for calibrating image magnification (International Organization for Standardization)
[61] BS 3406 part 1: 1986 Methods for the determination of particle size distribution. Guide to powder sampling (British Standards Institute)
[62] Schurtenberger C U, Schlurtenburger P 1998 Characterization of turbid colloidal suspensions using light scattering techniques combined with cross-correlation methods J. Colloid Interf. Sci. 207 150–158
[63] Giannuzzi L A, Stevie F A 2005 Introduction to focused ion beams: introduction, theory, techniques and practice (Springer)
[64] Morgan J, Notte J, Hill R, Ward B 2006 An introduction to the helium ion microscope Microscopy Today 14 24–31
CHAPTER 8
Surface topography characterization

8.1 Introduction to surface topography characterization
The characterization of surface topography is a complicated branch of metrology with a huge range of parameters available. Surface form characterization has been covered elsewhere [1] and this book concentrates on surface texture characterization. The proliferation of surface texture characterization parameters has been referred to as ‘parameter rash’ [2] – at any one time there can be over one hundred parameters to choose from. However, due to recent activities, there will soon be a coherent international standards infrastructure to support surface texture characterization. Profile characterization has been standardized for some time now and draft areal standards are now available. The first important work on areal surface texture was carried out by a European project led by Ken Stout from the University of Birmingham [3]. This project ended with the publication of the Blue Book [4] and the definition of the so-called ‘Birmingham-14’ parameters. Following this project ISO started standardization work on areal surface texture. However, ISO experts rapidly realised that further research work was needed to determine the stability of areal parameters and their correlation with the functional criteria used by industry. A further project (SURFSTAND) was carried out between 1998 and 2001, by a consortium of universities and industrial partners, led by Liam Blunt of the University of Huddersfield. SURFSTAND ended with the publication of the Green Book [5] and generated the basic documents for forthcoming specification standards. This chapter will summarize the surface texture characterization methods that are now either fully standardized or are at the draft stage. There are many other parameters (and filtering methods) that can be found on less recent instrumentation and in use in many industries, but this book has only considered the ISO standard methods as these are the most likely to be the
methods used in the near future. Further methods for surface characterization, including those from the fields of roundness measurement, and frequency and waveform analysis can be found elsewhere [6,7]. Parameters for areal surface texture are relatively new and there has been limited research on their use. For this reason some of the areal parameters are just presented in this book as stated in the ISO specification standards with little or no description of their uses. It is also expected that most users of surface texture parameters will have access to software packages that can be used to calculate parameters and will not attempt to code the parameters from scratch. However, software packages should be checked for correctness where possible using software measurement standards (see section 6.13).
8.2 Surface profile characterization
Surface profile measurement was described in section 6.4. The surface profile characterization methods that have been standardized by ISO are presented here. Section 8.4 presents some of the fractal methods that are available. There are three types of profile that are defined in ISO specification standards [8,9]. Firstly, the traced profile is defined as the trace of the centre of a stylus tip that has an ideal geometrical form (conical, with spherical tip) and nominal tracing force, as it traverses the surface. Secondly, the reference profile is the trace that the probe would report as it is moved along a perfectly smooth and flat workpiece; it arises from the movement caused by an imperfect datum guideway. If the datum were perfectly flat and straight, the reference profile would not affect the total profile. Lastly, the total profile is the (digital) form of the profile reported by a real instrument, combining the traced profile and the reference profile. Note that in some instrument systems it is not practicable to ‘correct’ for the error introduced by datum imperfections and the total profile is the only available information concerning the traced profile. The above types of profile are primarily based on stylus instruments. Indeed, stylus instruments are the only instruments that are covered by ISO standards at the time of writing (see section 8.2.10). However, many optical instruments allow a profile either to be measured directly (scanned) or extracted in software from an areal map. In this case the profile definitions need to be interpreted in an appropriate manner (for example, in the case of a coherence scanning interferometer, see section 6.7.3.4, the reference profile will be part of the reference mirror surface). Two more definitions are required before we can move on to filtering and surface texture parameters:
8.2.1 Evaluation length The evaluation length is the total length along the surface (x axis) used for the assessment of the profile under evaluation. It is normal practice to evaluate roughness and waviness profiles (see sections 8.2.3.2 and 8.2.3.3) over several successive sampling lengths, the sum of which gives the evaluation length. For the primary profile the evaluation length is equal to the sampling length. ISO 4287 [9] advocates the use of five sampling lengths as the default for roughness evaluation and if another number is used the assessment parameter (see section 8.2.5) will have that number included in its symbol, for example Ra6. No default is specified for waviness. With a few exceptions, parameters should be evaluated in each successive sampling length and the resulting values averaged over all the sampling lengths in the evaluation length. Some parameters are assessed over the entire evaluation length. To allow for acceleration at the start of a measurement and deceleration at the end of a measurement (when using a stylus instrument), the instrument traverse length is normally rather longer than the evaluation length.
8.2.2 Total traverse length The total traverse length is the total length of surface traversed in making a measurement. It is usually greater than the evaluation length due to the need to allow a short over-travel at the start and end of the measurement to allow mechanical and electrical transients to be excluded from the measurement and to allow for the effects of edges on the filters.
8.2.3 Profile filtering
Filtering plays a fundamental role in surface texture analysis. In this context, it is any means (usually electronic or computational, but sometimes mechanical) for selecting for analysis a range of structure in the total profile that is judged to be of significance to a particular situation. Alternatively, it may be thought of as a means of rejecting information considered irrelevant, including, for example, attempts to reduce the effect of instrument noise and imperfections. Filters select (or reject) structure according to its scale in the x axis, that is in terms of wavelengths or spatial frequencies. A filter that rejects short wavelengths while retaining longer ones is called a low-pass filter since it preserves (or lets pass) the low frequencies. A high-pass filter preserves the shorter-wavelength features while rejecting longer ones. The combination of a low-pass and a high-pass filter to select a restricted range of wavelengths, with both longer and shorter wavelengths rejected, is called a band-pass filter. The attenuation (rejection) of a filter should not be too sudden, else we might get very different results from
surfaces that are almost identical apart from a slight shift in the wavelength of a strong feature. The wavelength at which the transmission (and so also the rejection) is 50 % is called the cut-off of that filter (note that this definition is specific to the field of surface texture). The transmission characteristics of a filter are determined by its weighting function. The weighting function, standardized in ISO 11562 [7,10], in the form of a Gaussian probability function is described mathematically by

s(x) = \frac{1}{\alpha\lambda} \exp\left[ -\pi \left( \frac{x}{\alpha\lambda} \right)^2 \right]   (8.1)

where \alpha is a constant designed to provide 50 % transmission at a cut-off wavelength of \lambda, and is equal to

\alpha = \sqrt{\frac{\ln 2}{\pi}} \approx 0.4697.   (8.2)

The filter effect of the weighting function, s(x), is exclusively determined by the constant \alpha. Filtering produces a filter mean line, which results from the convolution of the measured profile with the weighting function. A surface profile filter separates the profile into long-wave and short-wave components (see Figure 8.1). There are three filters used by instruments for measuring roughness, waviness and primary profiles:
- λs profile filter: the filter that defines where the intersection occurs between the roughness (see section 8.2.3.2) and shorter-wavelength components present in a surface;
- λc profile filter: the filter that defines where the intersection occurs between the roughness and waviness (see section 8.2.3.3) components;
- λf profile filter: the filter that defines where the intersection occurs between the waviness and longer-wavelength components present in a surface.
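A minimal numerical sketch of the Gaussian weighting function of equation 8.1 is given below, assuming Python with NumPy; the function and variable names are illustrative. The weighting function is sampled at the profile spacing, normalized so its coefficients sum to one, and convolved with the measured profile to give the mean line; subtracting the mean line leaves the high-pass (roughness) component. Edge effects and the λs filter are ignored in this sketch.

```python
import numpy as np

def gaussian_profile_filter(z, dx, cutoff):
    """Split a sampled profile into mean line (waviness) and roughness.

    z      : sampled profile heights
    dx     : sampling spacing (same length unit as cutoff)
    cutoff : cut-off wavelength lambda_c
    Returns (mean_line, roughness); edge effects are not treated.
    """
    alpha = np.sqrt(np.log(2) / np.pi)                 # ~0.4697, 50 % transmission at the cut-off
    x = np.arange(-cutoff, cutoff + dx, dx)            # support of the weighting function
    s = (1.0 / (alpha * cutoff)) * np.exp(-np.pi * (x / (alpha * cutoff)) ** 2)
    s /= s.sum()                                       # discrete normalization (unit DC gain)
    mean_line = np.convolve(z, s, mode='same')         # filter mean line
    return mean_line, z - mean_line

# Example: 2 mm waviness plus 0.04 mm roughness, 0.8 mm cut-off, 0.5 um spacing
dx = 0.5e-3                                            # mm
x = np.arange(0, 4.0, dx)                              # 4 mm of profile
z = 1.0 * np.sin(2 * np.pi * x / 2.0) + 0.1 * np.sin(2 * np.pi * x / 0.04)
waviness, roughness = gaussian_profile_filter(z, dx, cutoff=0.8)
```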
FIGURE 8.1 Separation of surface texture into roughness, waviness and profile.
Almost all modern instruments and software packages now employ a Gaussian filter according to [10]. However, older instruments may employ other forms of filter, for example the 2RC filter [7,11]. It is important to be aware of the type of filter used by an instrument and care should be taken when comparing data from such instruments to those from modern instruments.
8.2.3.1 Primary profile
The primary profile is defined as the total profile after application of the short wavelength (low-pass) filter, with cut-off, λs, but including the effect of the standardized probe (see section 6.6.1). Ultimately the finite size of the stylus limits the rejection of very short wavelengths and in practice this mechanical filtering effect is often used by default for the λs filter (similar arguments can be used throughout this chapter for optical instruments; for example, the equivalent to a finite stylus radius for an optical instrument will be either the spot size, diffraction limit or pixel spacing). Since styli vary and since the instrument will introduce vibration and other noise into the profile signal that has equivalent wavelengths shorter than the stylus dimensions, the best practice is always to apply λs filtration upon the total profile. Figure 8.2 relates the primary to the roughness and waviness profiles.
8.2.3.2 Roughness profile
The roughness profile is defined as the profile derived from the primary profile by suppressing the long-wave component using a long-wavelength (high-pass) filter, with cut-off, λc. The roughness profile is the basis for the evaluation of the roughness profile parameters. Note that such evaluation automatically includes the use of the λs profile filter, since it derives from the primary profile.
FIGURE 8.2 Primary (top), waviness (middle) and roughness (bottom) profiles.
8.2.3.3 Waviness profile
The waviness profile is derived by the application of a band-pass filter to select the surface structure at rather longer wavelengths than the roughness. Filter λf suppresses the long-wave component (profile component) and filter λc suppresses the short-wave component (roughness component). The waviness profile is the basis for the evaluation of the waviness profile parameters.
8.2.4 Default values for profile characterization ISO 4287 [9] and ISO 4288 [12] define a number of default values for various parameters that are used for surface profile characterization. Unless otherwise stated these default values apply. For example, unless otherwise stated, five sampling lengths are used to calculate the roughness parameters. Table 8.1 shows the relationship between cut-off wavelength, tip radius and maximum sampling spacing. When a component is manufactured from a drawing the surface texture specification will normally include the sampling length for measuring the surface profile. The most commonly used sampling length is 0.8 mm. However, when no indication is given on the drawing the user will require a means of selecting the most appropriate value for his or her particular application. The sampling length should only be selected after considering the nature of the surface texture and which characteristics are required for the measurement.
8.2.5 Profile characterization and parameters A surface texture parameter, be it profile or areal, is used to give the surface texture of a part a quantitative value. Such a value may be used to simplify the description of the surface texture, to allow comparisons with other parts (or areas of a part) and to form a suitable measure for a quality system. Surface texture parameters are also used on engineering drawings to
Table 8.1 Relationship between cut-off wavelength, tip radius (rtip) and maximum sampling spacing [12]

λc (mm)   λs (µm)   Roughness cut-off wavelength ratio λc/λs   rtip max (µm)   Maximum sampling spacing (µm)
0.08      2.5       30                                         2               0.5
0.25      2.5       100                                        2               0.5
0.8       2.5       300                                        2               0.5
2.5       8         300                                        5               1.5
8         25        300                                        10              5
formally specify a required surface texture for a manufactured part. Some parameters give purely statistical information about the surface texture and some can describe how the surface may perform in use, that is to say, its functionality. All the profile parameters described below (and the areal parameters – see section 8.3.5) are calculated once the form has been removed from the measurement data. The ideas of ‘peaks’ and ‘valleys’ are important in understanding and evaluating surfaces. Unfortunately it is not always easy to decide what should be counted as a peak. To overcome the confusion caused by early non-coordinated attempts to produce parameters reflecting this difference, the modern standards introduce an important specific concept: the profile element consisting of a peak and a valley event. Associated with the element is a discrimination that prevents small, unreliable measurement features from affecting the detection of elements. A profile element is a section of a profile from the point at which it crosses the mean line to the point at which it next crosses the mean line in the same direction (for example, from below to above the mean line). The part of a profile element that is above the mean line, i.e. the profile from where it crosses the mean line in the positive direction until it next crosses the mean line in the negative direction, is the profile peak; the part below the mean line is the profile valley. It is possible that a profile could have a very slight fluctuation that takes it across the mean line and almost immediately back again. This is not reasonably considered as a real profile peak or profile valley. To prevent automatic systems from counting such features, only features larger than a specified height and width are counted. In the absence of other specifications, the default levels are that the height of a profile peak (valley) must exceed 10 % of the Rz, Wz or Pz parameter value and that the width of the profile peak (valley) must exceed 1 % of the sampling length. Both criteria must be met simultaneously.
8.2.5.1 Profile parameter symbols The first capital letter in the parameter symbol designates the type of profile under evaluation. For example, Ra is calculated from the roughness profile, Wa from the waviness profile and Pa from the primary profile. In the description given below only the roughness profile parameters are described, but the salient points apply also to the waviness and primary profile parameters.
8.2.5.2 Profile parameter ambiguities There are many inconsistencies in the parameter definitions in ISO 4287 [9]. Some parameter definitions are mathematically ambiguous and the description of the W parameters is open to misinterpretation. Perhaps the most ambiguous parameter is RSm, where a different value for the parameter
can be obtained purely by reversing the direction of the profile. These ambiguities are described elsewhere [13] and, in the case of RSm, an unambiguous definition has been proposed [14].
8.2.6 Amplitude profile parameters (peak to valley) 8.2.6.1 Maximum profile peak height, Rp This parameter is defined as the largest profile peak height within the sampling length, i.e. it is the height of the highest point of the profile from the mean line; see Figure 8.3. This parameter is often referred to as an extreme-value parameter and as such can be unrepresentative of the surface as its numerical value may vary so much from sample to sample. It is possible to average over several consecutive sampling lengths and this will reduce the variation, but the value is often still numerically too large to be useful in most cases. However, this parameter will succeed in finding unusual conditions such as a sharp spike or burr on the surface that may be indicative of poor material or poor processing.
8.2.6.2 Maximum profile valley depth, Rv This is the largest profile valley depth within the sampling length, i.e. it is the depth of the lowest point on the profile from the mean line and is an extremevalue parameter with the same disadvantages as the maximum profile peak height (see Figure 8.4).
8.2.6.3 Maximum height of the profile, Rz
This is the sum of the largest profile peak height, Rp, and the largest profile valley depth, Rv, within a sampling length.
FIGURE 8.3 Maximum profile peak height, example of roughness profile.
FIGURE 8.4 Maximum profile valley depth, example of roughness profile.
8.2.6.4 Mean height of the profile elements, Rc This is the mean value of the profile element heights within a sampling length. This parameter requires height and spacing discrimination as described earlier. If these values are not specified then the default height discrimination used shall be 10 % of Rz. The default spacing discrimination is 1 % of the sampling length. Both of these conditions must be met. It is extremely rare to see this parameter used in practice and it can be difficult to interpret. It is described here for completeness and, until it is seen on an engineering drawing, should probably be ignored (it is, however, used in the German automotive industry).
8.2.6.5 Total height of the surface, Rt
This is the sum of the largest profile peak height and the largest profile valley depth within the evaluation length (see Figure 8.5). This parameter is defined over the evaluation length rather than the sampling length and as such it has no averaging effect. Therefore, scratches or contamination on the surface can strongly affect Rt.
8.2.7 Amplitude parameters (average of ordinates)

8.2.7.1 Arithmetical mean deviation of the assessed profile, Ra
The Ra parameter is the arithmetic mean of the absolute ordinate values, z(x), within the sampling length, l,

Ra = \frac{1}{l} \int_0^l |z(x)| \, dx.   (8.3)
FIGURE 8.5 Height of profile elements, example of roughness profile.
Note that equation (8.3) is for a continuous z(x) function. However, when making surface texture measurements, z(x) will generally be determined over a discrete number of measurement points. In this case equation (8.3) should be written as Ra ¼
N 1X jZi j N i¼1
(8.4)
where N is the number of measured points in a sampling length. The equations for the other profile parameters in this section, plus the areal parameters described in section 8.3, that involve an integral notation can be converted to a summation notation in a similar manner. The derivation of Ra can be illustrated graphically as shown in Figure 8.6. The areas of the graph below the centre line within the sampling length are placed above the centre line. The Ra value is the mean height of the resulting profile. The Ra value over one sampling length is the average roughness; therefore, the effect of a single non-typical peak or valley will have only a slight influence on the value. It is good practice to make assessments of Ra over a number of consecutive sampling lengths and to accept the average of the values obtained. This will ensure that Ra is typical of the surface under inspection. It is important that measurements take place perpendicular to the lay. The Ra value does not provide any information as to the shape of the irregularities on the surface. It is possible to obtain similar Ra values for surfaces having very different structures (see section 8.3). For historical reasons Ra is probably the most common of all the surface texture
parameters and is dominant on most engineering drawings when specifying surface texture. This should not deter one from considering other parameters that may give more information regarding the functionality of a surface.

FIGURE 8.6 The derivation of Ra.
8.2.7.2 The root mean square deviation of the assessed profile, Rq
The Rq parameter is defined as the root mean square value of the ordinate values, z(x), within the sampling length,

Rq = \sqrt{\frac{1}{l} \int_0^l z^2(x) \, dx}.   (8.5)

The Rq parameter is another popular parameter along with Ra. It is common to see it stated that Rq is always 11 % larger than Ra for a given surface. However, this is only true of a sinusoidal surface, although Rq will always be larger than Ra. The reason for the commonality of Ra and Rq is chiefly historical. Ra is easier to determine graphically from a recording of the profile and was, therefore, adopted initially before automatic surface texture measuring instruments became generally available. The Rq parameter is used in optical applications where it is more directly related to the optical quality of a surface. Also, Rq is directly related to the total spectral content of a surface.
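The discrete forms of Ra (equation 8.4) and Rq can be sketched in a few lines, assuming Python with NumPy; the function names are illustrative. The sketch evaluates each parameter per sampling length and averages over the five sampling lengths of the default evaluation length. For the pure sine-wave test profile, the printed ratio Rq/Ra should be close to π/(2√2) ≈ 1.11, the '11 % larger' figure quoted above.

```python
import numpy as np

def ra_rq(z, n_sampling_lengths=5):
    """Ra and Rq averaged over successive sampling lengths (discrete forms)."""
    ra_vals, rq_vals = [], []
    for seg in np.array_split(z, n_sampling_lengths):
        seg = seg - seg.mean()                 # reference each sampling length to its mean line
        ra_vals.append(np.mean(np.abs(seg)))
        rq_vals.append(np.sqrt(np.mean(seg ** 2)))
    return np.mean(ra_vals), np.mean(rq_vals)

# Sinusoidal test profile: expect Rq/Ra close to pi / (2*sqrt(2)) ~ 1.11
x = np.linspace(0.0, 4.0, 40001)               # 4 mm evaluation length
z = 0.5 * np.sin(2 * np.pi * x / 0.1)          # 0.1 mm wavelength, 0.5 um amplitude
ra, rq = ra_rq(z)
print(ra, rq, rq / ra)
```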
8.2.7.3 Skewness of the assessed profile, Rsk
Skewness is a measurement of the symmetry of the surface deviations about the mean reference line and is the ratio of the mean cube value of the height values and the cube of Rq within a sampling length,

Rsk = \frac{1}{Rq^3} \left[ \frac{1}{l} \int_0^l z^3(x) \, dx \right].   (8.6)
The Rsk parameter describes the shape of the topography height distribution. For a surface with a random (or Gaussian) height distribution that has symmetrical topography, the skewness is zero. The skewness is derived from the amplitude distribution curve; it is the measure of the profile symmetry about the mean line. This parameter cannot distinguish whether the profile spikes are evenly distributed above or below the mean line and is strongly influenced by isolated peaks or isolated valleys. This parameter represents the degree of bias, either in the upward or downward direction, of an amplitude distribution curve. A symmetrical profile gives an amplitude distribution curve that is symmetrical about the centre line and an unsymmetrical profile results in a skewed curve. The direction of the skew is dependent on whether the bulk of the material is above the mean line (negative skew) or below the mean line (positive skew). Figure 8.7 shows three profiles with positive, zero and negative skewness. Use of this parameter can distinguish between two surfaces having the same Ra value.
FIGURE 8.7 Profiles with positive (top), zero (middle) and negative (bottom) values of Rsk (reprinted from ASME B46.1-1995, by permission of the American Society of Mechanical Engineers. All rights reserved).
As an example, a porous, sintered or cast iron surface will have a large value of skewness. A characteristic of a good bearing surface is that it should have a negative skew, indicating the presence of comparatively few peaks that could wear away quickly and relatively deep valleys to retain lubricant traces. A surface with a positive skew is likely to have poor lubricant retention because of the lack of deep valleys in which to retain lubricant traces. Surfaces with a positive skewness, such as turned surfaces, have high spikes that protrude above the mean line. The Rsk parameter correlates well with load-carrying ability and porosity.
8.2.7.4 Kurtosis of the assessed profile, Rku
The Rku parameter is a measure of the sharpness of the surface height distribution and is the ratio of the mean of the fourth power of the height values and the fourth power of Rq within the sampling length,

Rku = \frac{1}{Rq^4} \left[ \frac{1}{l} \int_0^l z^4(x) \, dx \right].   (8.7)
The Rku parameter characterizes the spread of the height distribution. A surface with a Gaussian height distribution has a kurtosis value of three. Unlike Rsk this parameter can not only detect whether the profile spikes are evenly distributed but also provides a measure of the spikiness of the area. A spiky surface will have a high kurtosis value and a bumpy surface will have a low kurtosis value. Figure 8.8 shows two profiles with low and high values
of Rku. This is a useful parameter in predicting component performance with respect to wear and lubrication retention. Note that kurtosis cannot differentiate between a peak and a valley.

FIGURE 8.8 Profiles with low (top) and high (bottom) values of Rku (reprinted from ASME B46.1-1995, by permission of the American Society of Mechanical Engineers. All rights reserved).
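The discrete forms of equations 8.6 and 8.7 can be sketched as follows, assuming Python with NumPy; names are illustrative. As stated above, a Gaussian random profile should give Rsk near zero and Rku near three; a plateau-like profile with deep valleys (the good-bearing-surface case) should give a clearly negative Rsk.

```python
import numpy as np

def rsk_rku(z):
    """Discrete skewness (eq. 8.6) and kurtosis (eq. 8.7) of a roughness profile."""
    z = z - z.mean()                       # reference to the mean line
    rq = np.sqrt(np.mean(z ** 2))
    rsk = np.mean(z ** 3) / rq ** 3
    rku = np.mean(z ** 4) / rq ** 4
    return rsk, rku

rng = np.random.default_rng(0)
gaussian = rng.normal(0.0, 0.1, 50000)     # random surface: expect Rsk ~ 0, Rku ~ 3
plateaued = -np.abs(gaussian)              # plateau with valleys: expect negative Rsk
print(rsk_rku(gaussian))
print(rsk_rku(plateaued))
```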
8.2.8 Spacing parameters 8.2.8.1 Mean width of the profile elements, RSm The RSm parameter is the mean value of the profile element widths within a sampling length (see Figure 8.9). In other words, this parameter is the average value of the length of the mean line section containing a profile peak and adjacent valley. This parameter requires height and spacing discrimination. If these values are not specified then the default height discrimination used is 10 % of Rz. The default spacing discrimination is 1 % of the sampling length and both of these conditions must be met.
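A hedged sketch of an RSm calculation is given below, assuming Python with NumPy; names are illustrative. A profile element is taken to run between successive upward crossings of the mean line, and elements failing the 10 % of Rz height discrimination or the 1 % of sampling length width discrimination are simply discarded rather than merged with their neighbours, which is a simplification of the standard treatment described above.

```python
import numpy as np

def rsm(z, dx, sampling_length):
    """Simplified RSm: mean width of accepted profile elements in one sampling length."""
    z = z - z.mean()
    rz = z.max() - z.min()                                   # crude Rz over the sampling length
    up = np.where((z[:-1] < 0) & (z[1:] >= 0))[0] + 1        # upward mean-line crossings
    widths = []
    for a, b in zip(up[:-1], up[1:]):
        element = z[a:b]
        width = (b - a) * dx
        if (element.max() >= 0.1 * rz and -element.min() >= 0.1 * rz
                and width >= 0.01 * sampling_length):
            widths.append(width)
    return np.mean(widths) if widths else np.nan

dx = 0.0005                                                  # 0.5 um sampling spacing (mm)
x = np.arange(0.0, 0.8, dx)                                  # one 0.8 mm sampling length
z = np.sin(2 * np.pi * x / 0.05) + 0.01 * np.sin(2 * np.pi * x / 0.005)
print(rsm(z, dx, sampling_length=0.8))                       # expect ~0.05 mm
```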
8.2.9 Curves and related parameters The profile parameters described so far have resulted in a single number (often with a unit) that describes some aspect of the surface. Curves and related parameters give much more information about the surface from which, often, functional information can be gained [7]. All curves and related parameters are defined over the evaluation length rather than the sampling length.
8.2.9.1 Material ratio of the profile
The material ratio of the profile is the ratio of the bearing length to the evaluation length. It is represented as a percentage. The bearing length is the sum of the section lengths obtained by cutting the profile with a line (slice level) drawn parallel to the mean line at a given level. The ratio is assumed to be 0 % if the slice level is at the highest peak, and 100 % if it is at the deepest valley. Parameter Rmr(c) determines the percentage of each bearing length ratio of a single slice level or nineteen slice levels that are drawn at equal intervals within Rt respectively.

FIGURE 8.9 Width of profile elements.
8.2.9.2 Material ratio curve
The material ratio curve (formerly known as the Abbott-Firestone or bearing ratio curve) is the curve representing the material ratio of the profile as a function of level. By plotting the bearing ratio at a range of depths in the profile, the way in which the bearing ratio varies with depth can be easily seen and provides a means of distinguishing different shapes present on the profile. The definition of the bearing area fraction is the sum of the lengths of individual plateaux at a particular height, normalized by the total assessment length, and is the parameter designated Rmr (see Figure 8.10). Values of Rmr are sometimes specified on drawings; however, such specifications can lead to large ambiguities if the bearing area curve is referred to the highest and lowest points on the profile. Many mating surfaces requiring tribological functions are usually produced with a sequence of machining operations. Usually the first operation establishes the general shape of the surface with a relatively coarse finish, and further operations refine this finish to produce the properties required by the design. This sequence of operations will remove the peaks of the original process but the deep valleys will be left untouched. This process leads to a type of surface texture that is referred to as a stratified surface. The height distributions will be negatively skewed, therefore making it difficult for a single average parameter such as Ra to represent the surface effectively for specification and quality-control purposes. A honed surface is a good example of a stratified surface.
FIGURE 8.10 Material ratio curve.
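A hedged sketch of the material ratio Rmr(c) for a sampled profile follows, assuming Python with NumPy; names are illustrative. For each slice level c the ratio is approximated as the fraction of profile points lying at or above the slice, expressed as a percentage, so it runs from 0 % at the highest peak to 100 % at the deepest valley; this discrete fraction-of-points approximation stands in for the sum of section lengths divided by the evaluation length. The stratified test profile (a plateau with sparse deep valleys) shows the characteristic shape discussed above: the ratio rises quickly to near 100 % and then creeps upward only in the deep valleys.

```python
import numpy as np

def material_ratio_curve(z, n_levels=19):
    """Material ratio Rmr(c) at slice levels spaced equally within Rt.

    Returns (levels, ratios): ratios[i] is the percentage of profile points
    lying at or above levels[i].
    """
    levels = np.linspace(z.max(), z.min(), n_levels + 2)[1:-1]   # levels inside Rt
    ratios = [100.0 * np.mean(z >= c) for c in levels]
    return levels, np.array(ratios)

# Stratified (plateau-like) test profile: fine plateau with occasional deep valleys
rng = np.random.default_rng(2)
z = 0.05 * rng.normal(size=5000)
z[rng.random(5000) < 0.02] -= 1.0          # sparse deep valleys
levels, rmr = material_ratio_curve(z)
for c, r in zip(levels, rmr):
    print(f"c = {c:+.2f}  Rmr = {r:5.1f} %")
```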
8.2.9.3 Profile section height difference, Rdc The profile section height difference is the vertical distance between two section levels of given material ratio.
8.2.9.4 Relative material ratio, Rmr
The relative material ratio is the material ratio determined at a profile section level Rdc, and related to a reference, C0, where C1 = C0 − Rdc and C0 = C(Rmr0). Rmr refers to the bearing ratio at a specified height (see Figure 8.11). A way of specifying the height is to move over a certain percentage (the reference percentage) on the bearing ratio curve and then to move down a certain depth (the slice depth). The bearing ratio at the resulting point is Rmr. The purpose of the reference percentage is to eliminate spurious peaks from consideration – these peaks tend to wear off in early part use. The slice depth then corresponds to an allowable roughness or to a reasonable amount of wear.
8.2.9.5 Profile height amplitude curve The profile height amplitude curve is defined as the sample probability density function of the ordinate, z(x), within the evaluation length. The amplitude distribution curve is a probability function that gives the probability that a profile of the surface has a certain height, at a certain position. The curve has the characteristic bell shape like many probability distributions (see Figure 8.12). The curve tells the user how much of the profile lies at a particular height, in a histogram sense. The profile height amplitude curve illustrates the relative total lengths over which the profile graph attains any selected range of heights above or below the mean line. This is illustrated in Figure 8.13. The horizontal lengths of the
FIGURE 8.11 Profile section level separation.
FIGURE 8.12 Profile height amplitude distribution curve.
FIGURE 8.13 Amplitude distribution curve.
profile included within the narrow band dz at a height z are a, b, c, d and e. By expressing the sum of these lengths as a percentage of the evaluation length, a measure of the relative amount of the profile at a height z can be obtained. Figure 8.13 is termed the amplitude distribution at height z. By plotting density against height the amplitude density distributed over the whole profile can be seen. This produces the amplitude density distribution curve.
8.2.10 Profile specification standards There are nine ISO specification standards relating to the measurement and characterization of surface profile. These standards only cover the use of
stylus instruments. The details of the standards are presented in [8] and their content is briefly described in this section. It should be noted that the current ISO plan for surface texture is that the profile standards will become a sub-set of the areal standards (see section 8.3.4). Whilst the basic standards and details will probably not change significantly, the reader should keep abreast of the latest developments in standards. ISO 3274 [15] describes a typical stylus instrument and its metrological characteristics. ISO 4287 [9] presents the definitions of the surface profile parameters (i.e. the P, W and R parameters – see section 8.2.3) and how to calculate the parameters. ISO 4288 [12] describes the various default values, and basic rules and procedures for surface texture profile analysis. ISO 11562 [10] describes the phase correct Gaussian filter that is applied for the various cut-off filters used for surface profile analysis. ISO 12179 [16] presents the methods for calibrating contact stylus instruments for profile measurement and ISO 5436 part 1 [17] describes the artefacts that are used to calibrate stylus instruments (see section 6.10.2). ISO 5436 part 2 [18] describes the concepts and use of software measurement standards (see section 6.13). ISO 1302 [19] presents the rules for the indication of surface texture in technical product documentation such as drawings, specifications, contracts and reports. Note that there are no specification standards that relate to the measurement of surface profile using optical instruments. However, in many cases where a profile can be mathematically extracted from an areal optical scan, the profile characterization and analysis standards can be applied. It is important, however, to understand how the surface data are filtered, especially when trying to compare contact stylus and optical results. There are no methods specified in ISO standards on how to remove form prior to surface texture analysis. The most common form removal filter is the linear least squares method and this method is applied on some commercial instruments as a default. However, the linear least squares method may be the most appropriate in a large range of cases (especially where low slope angle tilt needs to be removed) but can sometimes lead to significant errors. For example, a linear least squares form removal process will introduce tilt into a sinusoidal surface with few periods within the sampling length. Least squares can also be calculated in two different manners, both leading to potentially different results (see [20] for details). ISO 13565 parts 1 [21], 2 [22] and 3 [23] relate to the measurement of surfaces having stratified functional properties. The roughness profile generated using the filter defined in ISO 11562 [10] (see section 8.2.3) suffers some undesirable distortions, when the measured surface consists of relatively deep valleys beneath a more finely finished plateau with minimal
waviness. This type of surface is very common, for example in cylinder liners for internal combustion engines. ISO 13565 part 1 provides a method of greatly reducing these distortions, thus enabling the parameters defined in ISO 13565 part 2 and part 3 to be used for evaluating these types of surfaces, with minimal influence from these distortions. In 1970s France, engineers from the school of ‘Arts et Métiers’ together with Peugeot and Renault conceived a graphical method for analysing motifs, adapted to the characterization of functional surface texture. This method takes the functional requirements of the surface into account and attempts to find relationships between peak and valley locations and these requirements. The motif method had success in French industry and was incorporated into an international standard in 1996 [24]. These motif methods are the basis for the segmentation used in areal feature parameter analysis (see section 8.3.7).
8.3 Areal surface texture characterization There are inherent limitations with 2D surface measurement and characterization. A fundamental problem is that a 2D profile does not necessarily indicate functional aspects of the surface. For example, consider the most commonly used parameter for 2D surface characterization, Ra. Figure 6.4 shows the profiles of two surfaces, both of which return the same Ra value when filtered under the same conditions. It can be seen that the two surfaces have very different features and consequently very different functional properties. With profile measurement and characterization it is also often difficult to determine the exact nature of a topographic feature.
8.3.1 Scale-limited surface Distinct from the 2D profile system, areal surface characterization does not require three different groups (profile, waviness and roughness) of surface texture parameters as defined in section 8.2.3. For example, in areal parameters only Sq is defined for the root mean square parameter rather than the primary surface Pq, waviness Wq and roughness Rq as in the profile case. The meaning of the Sq parameter depends on the type of scale-limited surface used. Two filters are defined, the S-filter and the L-filter [25]. The S-filter is defined as a filter that removes unwanted small-scale lateral components of the measured surface such as measurement noise or functionally irrelevant small features. The L-filter is used to remove unwanted large-scale lateral
components of the surface, and the F-operator removes the nominal form (by default using a least squares method [26]). The scale at which the filters operate is controlled by the nesting index. The nesting index is an extension of the notion of the original cut-off wavelength, and is suitable for all types of filters. For example, for a Gaussian filter the nesting index is equivalent to the cut-off wavelength. These filters are used in combination to create SF and SL surfaces. An SF surface (equivalent to a primary surface) results from using an S-filter and an F-operator in combination on a surface, and an SL surface (equivalent to a roughness surface) by using an L-filter on an SF surface. Both an SF surface and an SL surface are called scale-limited surfaces. The scalelimited surface depends on the filters or an operator used, with the scales being controlled by the nesting indices of those filters.
8.3.2 Areal filtering
A Gaussian filter is a good general-purpose filter and it is the current standardized approach for the separation of the roughness and waviness components from a primary surface (see section 8.2.3). Both roughness and waviness surfaces can be acquired from a single filtering procedure with minimal phase distortion. The weighting function of an areal filter is the Gaussian function given by

s(x, y) = \frac{1}{\alpha^2 \lambda_{cx} \lambda_{cy}} \exp\left[ -\frac{\pi}{\alpha^2} \left( \frac{x^2}{\lambda_{cx}^2} + \frac{y^2}{\lambda_{cy}^2} \right) \right], \quad -\lambda_{cx} \le x \le \lambda_{cx}, \; -\lambda_{cy} \le y \le \lambda_{cy}   (8.8)

where x and y are the two-dimensional distances from the centre (maximum) of the weighting function, \lambda_{cx} and \lambda_{cy} are the cut-off wavelengths, and \alpha is a constant that provides a 50 % transmission characteristic at the cut-off and is given by

\alpha = \sqrt{\frac{\ln 2}{\pi}} \approx 0.4697.   (8.9)

With the separability and symmetry of the Gaussian function, a two-dimensional Gaussian-filtered surface can be obtained by convolving two one-dimensional Gaussian filters through the rows and columns of a measured surface, thus

z(x, y) = z_0(x, y) - \sum_{\xi_1} \sum_{\xi_2} z_0(x - \xi_1, y - \xi_2) \, s(\xi_1) \, s(\xi_2).   (8.10)

Figure 8.14 shows a raw measured epitaxial wafer surface (a), and its short-scale SL surface (roughness) (b) and middle-scale SF surface (waviness)
(c) and long-scale form surface (form error surface) (d), obtained by using Gaussian filtering with an automatic edge-correction process.

FIGURE 8.14 Epitaxial wafer surface topographies in different transmission bands: (a) the raw measured surface; (b) roughness surface (short-scale SL-surface), S-filter = 0.36 mm (sampling space), L-filter = 8 mm; (c) wavy surface (middle-scale SF-surface), S-filter = 8 mm, F-operator; and (d) form error surface (long-scale form surface), F-operator.

The international standard for the areal Gaussian filter (ISO 16610-61 [27]) is currently being developed (the areal Gaussian filter has been widely used by almost all instrument manufacturers). It has been easily extrapolated from the linear profile Gaussian filter standard into the areal filter by instrument manufacturers for at least a decade and allows users to separate waviness and roughness in surface measurement. For surfaces produced using a range of manufacturing methods, the roughness data have differing degrees of precision and may contain some very different observations, or outliers. In this case, a robust Gaussian filter (based on maximum likelihood estimation) can be used to suppress the influence of the outliers. The robust Gaussian filter can also be found in most instrument software. It should be noted that the Gaussian filter is not applicable for all functional aspects of a surface, for example, in contact phenomena, where the upper envelope of the surface is more relevant. A standardized framework for filters has been established, which gives a mathematical foundation for
filtration, together with a toolbox of different filters. Information concerning these filters will soon be published as a series of technical specifications (ISO/TS 16610 series [27]), to allow metrologists to assess the utility of the recommended filters according to applications. So far only Gaussian filters have been published, but the toolbox will contain the following classes of filters:
- Linear filters: the mean line filters (M-system) belong to this class and include the Gaussian filter, spline filter and the spline-wavelet filter;
- Morphological filters: the envelope filters (E-system) belong to this class and include closing and opening filters using either a disk or a horizontal line;
- Robust filters: filters that are robust with respect to specific profile phenomena such as spikes, scratches and steps. These filters include the robust Gaussian filter and the robust spline filter;
- Segmentation filters: filters that partition a profile into portions according to specific rules. The motif approach belongs to this class and has now been put on a firm mathematical basis.
Filtering is a complex subject that will probably warrant a book of its own following the introduction of the ISO/TS 16610 series [27] of specification standards. The user should consider filtering options on a case-by-case basis, but the simple rule of thumb is that if you want to compare two surface measurements, it is important that both sets use the same filtering methods and nesting indices (or that appropriate corrections are applied). Table 8.2 presents the default nesting indices in ISO 25178 part 3 [26]. The user should consult the latest version of ISO 25178 part 3 because at the time of writing there is still debate in the standards committees as to the values in Table 8.2.

Table 8.2 Relationships between nesting index value, S-filter nesting index, sampling distance and ball radius

Nesting index value (F-operator/L-filter) (mm)   S-filter nesting index (µm)   Max. sampling distance (µm)   Max. ball radius (µm)
...
0.1      1.0      0.3      0.8
0.2      2.0      0.6      1.5
0.25     2.5      0.8      2.0
0.5      5.0      1.5      4.0
0.8      8.0      2.5      6.0
1.0      10       3.0      8.0
2.0      20       6.0      15
2.5      25       8.0      20
5.0      50       15       40
8.0      80       25       60
10       100      30       80
20       200      60       150
25       250      80       200
50       500      150      400
80       800      250      600
100      1000     300      800
...
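A minimal sketch of the separable areal Gaussian filter of equations 8.8–8.10 is given below, assuming Python with NumPy and SciPy; the function names and the synthetic surface are illustrative. A one-dimensional Gaussian weighting is applied along rows and then columns to obtain the smoothed (waviness) surface, and subtracting it from the measurement leaves the roughness; edge handling is left to SciPy's default reflection, and the S-filter and F-operator steps are omitted.

```python
import numpy as np
from scipy.ndimage import convolve1d

def areal_gaussian_filter(z, dx, dy, cutoff_x, cutoff_y):
    """Separable areal Gaussian filter: returns (waviness, roughness).

    z        : 2D array of surface heights (rows along y, columns along x)
    dx, dy   : sampling distances in x and y
    cutoff_* : nesting indices (cut-off wavelengths) in x and y
    """
    alpha = np.sqrt(np.log(2) / np.pi)

    def weights(d, cutoff):
        t = np.arange(-cutoff, cutoff + d, d)
        s = np.exp(-np.pi * (t / (alpha * cutoff)) ** 2)
        return s / s.sum()                       # unit-sum discrete weights

    smooth = convolve1d(z, weights(dx, cutoff_x), axis=1, mode='reflect')
    smooth = convolve1d(smooth, weights(dy, cutoff_y), axis=0, mode='reflect')
    return smooth, z - smooth

# Example: 1 mm x 1 mm field, 2 um spacing, 0.08 mm nesting index in both axes
dx = dy = 0.002
x = np.arange(0, 1.0, dx)
xx, yy = np.meshgrid(x, x)
z = 0.5 * np.sin(2 * np.pi * xx / 0.5) + 0.05 * np.sin(2 * np.pi * yy / 0.01)
waviness, roughness = areal_gaussian_filter(z, dx, dy, 0.08, 0.08)
```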
8.3.3 Areal specification standards

The areal specification standards are at various stages of development. The plan is to have the profile standards (see section 8.2.10) as a subset of the areal standards (with appropriate re-numbering). Hence, the profile standards will be re-published after the areal standards (with some omissions, ambiguities and errors corrected) under a new numbering scheme that is consistent with that of the areal standards. All the areal standards are part of ISO 25178, which will consist of at least the following parts, under the general title Geometrical product specification (GPS) – Surface texture: Areal:

- Part 1: Areal surface texture drawing indications
- Part 2: Terms, definitions and surface texture parameters [25]
Table 8.2  Relationships between nesting index value, S-filter nesting index, sampling distance and ball radius

Nesting index value (F-operator/L-filter) / mm | S-filter nesting index / µm | Max. sampling distance / µm | Max. ball radius / µm
... | ... | ... | ...
0.1 | 1.0 | 0.3 | 0.8
0.2 | 2.0 | 0.6 | 1.5
0.25 | 2.5 | 0.8 | 2.0
0.5 | 5.0 | 1.5 | 4.0
0.8 | 8.0 | 2.5 | 6.0
1.0 | 10 | 3.0 | 8.0
2.0 | 20 | 6.0 | 15
2.5 | 25 | 8.0 | 20
5.0 | 50 | 15 | 40
8.0 | 80 | 25 | 60
10 | 100 | 30 | 80
20 | 200 | 60 | 150
25 | 250 | 80 | 200
50 | 500 | 150 | 400
80 | 800 | 250 | 600
100 | 1000 | 300 | 800
... | ... | ... | ...
- Part 3: Specification operators [26]
- Part 4: Comparison rules
- Part 5: Verification operators
- Part 6: Classification of methods for measuring surface texture [28]
- Part 70: Measurement standards for areal surface texture measurement instruments
- Part 71: Software measurement standards [29]
- Part 72: Software measurement standards – XML file format
- Part 601: Nominal characteristics of contact (stylus) instruments [30]
- Part 602: Nominal characteristics of non-contact (confocal chromatic probe) instruments [31]
- Part 603: Nominal characteristics of non-contact (phase-shifting interferometric microscopy) instruments [32]
- Part 604: Nominal characteristics of non-contact (coherence scanning interferometry) instruments [33]
- Part 605: Nominal characteristics of non-contact (point autofocus) instruments
- Part 606: Nominal characteristics of non-contact (variable focus) instruments
- Part 701: Calibration and measurement standards for contact (stylus) instruments [34]
- Part 702: Calibration and measurement standards for non-contact (confocal chromatic probe) instruments
- Part 703: Calibration and measurement standards for non-contact (phase-shifting interferometric microscopy) instruments
- Part 704: Calibration and measurement standards for non-contact (coherence scanning interferometry) instruments
- Part 705: Calibration and measurement standards for non-contact (point autofocus) instruments
- Part 706: Calibration and measurement standards for non-contact (variable focus) instruments
The American National Standards Institute has also published specification standards [35] that include some areal analyses (mainly fractal based).
8.3.4 Unified coordinate system for surface texture and form Surface irregularities have traditionally been divided into three groups loosely based on scale [36]: (i) roughness, generated by the material removal mechanism such as tool marks; (ii) waviness, produced by imperfect operation of a machine tool; and (iii) errors of form, generated by errors of a machine tool, distortions such as gravity effects, thermal effects, set-up, etc. This grouping gives the impression that surface texture should be part of a coherent scheme with roughness at the smaller scale and errors of form at the larger scale. The primary definition of surface texture has, until recently, been based on the profile (ISO 4287 [9]). To ensure consistency of the irregularities in the
measured profile, the direction of that profile was specified to be orthogonal to the lay (the direction of the prominent pattern). This direction is not necessarily related to the datum of the surface, whereas errors of form, such as straightness, are always specified parallel to a datum of the surface [37]. Hence, profile surface texture and profile errors of form usually have different coordinate systems and do not form a coherent specification. This situation has now changed since the draft standardization of areal surface methods, in which the primary definition of surface texture is changed from one being based on profiles to one based on areal surfaces. This means that there is no consistency requirement for the coordinate system to be related to the lay. Therefore, a unified coordinate system has been established for both surface texture and form measurement [26]. Surface texture is now truly part of a coherent scheme, with surface texture at the smaller scale. The system is part of what is referred to as the geometrical product specification (GPS).
8.3.5 Areal parameters

There are two main classes of areal parameters:

- Field parameters – defined from all the points on a scale-limited surface;
- Feature parameters – defined from a subset of predefined topological features from the scale-limited surface.
A further class of areal parameters are those based on fractal analysis. Fractal parameters are essentially field parameters but are given their own section in this book as they have certain distinguishing characteristics. Some examples of the use of areal parameters can be found in [38] for the treatment of steel surfaces, [39] for the characterization of dental implants, [40] for the monitoring of milling tool wear and [41] for the analysis of biofilms.
8.3.6 Field parameters The field or S- and V-parameter set has been divided into height, spacing, hybrid, functions and related parameters, and one miscellaneous parameter. A great deal of the physical arguments discussed for the profile parameters also apply to their areal equivalents, for example, Rsk and Ssk. Therefore, when reading about the areal parameters for the first time, it would be prudent to become acquainted with the description of its profile equivalent (where one exists).
8.3.6.1 Areal height parameters

8.3.6.1.1 The root mean square value of the ordinates, Sq

The Sq parameter is defined as the root mean square value of the surface departures, z(x, y), within the sampling area,

Sq = \sqrt{\frac{1}{A} \iint_A z^2(x, y)\, dx\, dy}   (8.11)

where A is the sampling area. Note that equation (8.11) is for a continuous z(x, y) function and the same philosophy applies when converting to a sampled definition as in section 8.2.7.1.
8.3.6.1.2 The arithmetic mean of the absolute height, Sa

The Sa parameter is the arithmetic mean of the absolute value of the height within a sampling area,

Sa = \frac{1}{A} \iint_A |z(x, y)|\, dx\, dy.   (8.12)
The Sa parameter is the closest relative to the Ra parameter; however, they are fundamentally different and caution must be exercised when they are compared. Areal, or S-parameters, use areal filters, whereas profile, or R-parameters, use profile filters.
8.3.6.1.3 Skewness of topography height distribution, Ssk

Skewness is the ratio of the mean cube value of the height values and the cube of Sq within a sampling area,

Ssk = \frac{1}{Sq^3}\left[\frac{1}{A} \iint_A z^3(x, y)\, dx\, dy\right].   (8.13)
8.3.6.1.4 Kurtosis of topography height distribution, Sku

The Sku parameter is the ratio of the mean of the fourth power of the height values and the fourth power of Sq within the sampling area,

Sku = \frac{1}{Sq^4}\left[\frac{1}{A} \iint_A z^4(x, y)\, dx\, dy\right].   (8.14)
8.3.6.1.5 The maximum surface peak height, Sp The Sp parameter is defined as the largest peak height value from the mean plane within the sampling area.
8.3.6.1.6 The maximum pit height of the surface, Sv The Sv parameter is defined as the largest pit or valley depth from the mean plane within the sampling area.
8.3.6.1.7 Maximum height of the surface, Sz The Sz parameter is defined as the sum of the largest peak height value and largest pit or valley depth value within the sampling area.
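The discrete forms of the height parameters defined above follow directly from the definitions. The following Python sketch is one possible implementation; it assumes z is a levelled, scale-limited surface held as a two-dimensional array and simply subtracts the arithmetic mean in place of a full least-squares mean-plane removal.

```python
import numpy as np

def areal_height_parameters(z):
    """Sq, Sa, Ssk, Sku, Sp, Sv and Sz for a scale-limited surface z (2D array)."""
    z = z - z.mean()                  # crude mean-plane removal (heights from the mean plane)
    sq = np.sqrt(np.mean(z ** 2))     # equation (8.11)
    sa = np.mean(np.abs(z))           # equation (8.12)
    ssk = np.mean(z ** 3) / sq ** 3   # equation (8.13)
    sku = np.mean(z ** 4) / sq ** 4   # equation (8.14)
    sp = z.max()                      # maximum peak height
    sv = -z.min()                     # maximum pit depth (quoted as a positive value)
    return {'Sq': sq, 'Sa': sa, 'Ssk': ssk, 'Sku': sku, 'Sp': sp, 'Sv': sv, 'Sz': sp + sv}

# Hypothetical example surface (heights in micrometres)
z = np.random.default_rng(1).normal(0.0, 0.1, (256, 256))
print(areal_height_parameters(z))
```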
8.3.6.2 Areal spacing parameters The spacing parameters describe the spatial properties of surfaces. These parameters are designed to assess the peak density and texture strength. These parameters are particularly useful in distinguishing between highly textured and random surface structures.
8.3.6.2.1 The auto-correlation length, Sal

For the Sal parameter it is first necessary to define the auto-correlation function (ACF), the correlation between a surface and the same surface translated by (tx, ty),

ACF(tx, ty) = \frac{\iint_A z(x, y)\, z(x - tx, y - ty)\, dx\, dy}{\iint_A z(x, y)\, z(x, y)\, dx\, dy}.   (8.15)

The auto-correlation length, Sal, is then defined as the horizontal distance of the ACF(tx, ty) which has the fastest decay to a specified value s, with 0 ≤ s < 1,

Sal = \min \sqrt{tx^2 + ty^2}.   (8.16)

For all practical applications involving relatively smooth surfaces, the value for s can be taken as 0.2 [26], although other values can be used and will be subject to forthcoming areal specification standards. For an anisotropic surface Sal lies in the direction perpendicular to the surface lay. A large value of Sal denotes that the surface is dominated by low spatial frequency components, while a small value of Sal denotes the opposite case. The Sal parameter is a quantitative measure of the distance along the surface over which a texture that is statistically different from that at the original location is found.
8.3.6.2.2 Texture aspect ratio of the surface, Str

The texture aspect ratio, Str, is a parameter used to identify texture strength, i.e. the uniformity of the texture aspect. The Str parameter is defined as the ratio of the fastest to the slowest decay of the surface ACF to the correlation value 0.2, and is given by

Str = \frac{\min \sqrt{tx^2 + ty^2}}{\max \sqrt{tx^2 + ty^2}}.   (8.17)

In principle, Str has a value between 0 and 1. Larger values, say Str > 0.5, indicate uniform texture in all directions, i.e. no defined lay. Smaller values, say Str < 0.3, indicate an increasingly strong directional structure or lay. It is possible that, for some anisotropic surfaces, the slowest-decaying ACF never reaches 0.2 within the sampling area; in this case Str is invalid. The Str parameter is useful in determining the presence of lay in any direction. For applications where a surface is produced by multiple processes, Str may be used to detect the presence of underlying surface modifications.
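A common way to estimate the ACF of equation 8.15 is via the fast Fourier transform; the sketch below then searches outwards along many directions for the shortest and longest decay distances to the threshold s = 0.2, giving Sal and Str. It assumes a square, levelled height map with equal sampling intervals in both directions and ignores windowing and edge effects, so it is an approximation rather than a reference implementation.

```python
import numpy as np

def sal_str(z, spacing, s=0.2, n_angles=360):
    """Estimate Sal (equation 8.16) and Str (equation 8.17) from the areal ACF."""
    z = z - z.mean()
    ny, nx = z.shape
    acf = np.real(np.fft.ifft2(np.abs(np.fft.fft2(z)) ** 2))
    acf = np.fft.fftshift(acf)
    acf /= acf[ny // 2, nx // 2]                      # normalise so that ACF(0, 0) = 1
    r_max = min(nx, ny) // 2 - 1
    decay = []
    for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        for k in range(1, r_max):                     # march outwards along this direction
            i = ny // 2 + int(round(k * np.sin(theta)))
            j = nx // 2 + int(round(k * np.cos(theta)))
            if acf[i, j] <= s:
                decay.append(k * spacing)
                break
        else:
            decay.append(r_max * spacing)             # ACF never decayed to s within the map
    sal = min(decay)                                  # fastest decay to s
    str_ = sal / max(decay)                           # ratio of fastest to slowest decay
    return sal, str_

# Hypothetical example surface, 1 µm sampling interval
z = np.random.default_rng(2).normal(0, 0.1, (256, 256))
print(sal_str(z, spacing=1.0))
```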
8.3.6.3 Areal hybrid parameters The hybrid parameters are parameters based upon both amplitude and spatial information. They define numerically hybrid topography properties such as the slope of the surface, the curvature of outliers and the interfacial area. Any changes that occur in either amplitude or spacing may have an effect on the hybrid property. The hybrid parameters have particular relevance to contact mechanics, for example, the friction and wear between bearing surfaces.
8.3.6.3.1 Root mean square gradient of the scale-limited surface, Sdq

The Sdq parameter is defined as the root mean square of the surface gradient within the definition area,

Sdq = \sqrt{\frac{1}{A} \iint_A \left[\left(\frac{\partial z(x, y)}{\partial x}\right)^2 + \left(\frac{\partial z(x, y)}{\partial y}\right)^2\right] dx\, dy}.   (8.18)
The Sdq parameter characterizes the slopes on a surface and may be used to differentiate surfaces with similar value of Sa. The Sdq parameter is useful for assessing surfaces in sealing applications and for controlling surface cosmetic appearance.
8.3.6.3.2 Developed interfacial area ratio of the scale-limited surface, Sdr

The Sdr parameter is the ratio of the increment of the interfacial area of the scale-limited surface within the definition area over the definition area and is given by

Sdr = \frac{1}{A} \iint_A \left[\sqrt{1 + \left(\frac{\partial z(x, y)}{\partial x}\right)^2 + \left(\frac{\partial z(x, y)}{\partial y}\right)^2} - 1\right] dx\, dy.   (8.19)
The Sdr parameter may further differentiate surfaces of similar amplitudes and average roughness. Typically Sdr will increase with the spatial complexity of the surface texture independent of changes in Sa. The Sdr parameter is useful in applications involving surface coatings and adhesion, and may find relevance when considering surfaces used with lubricants and other fluids. The Sdr parameter may be related to the surface slopes and thus finds application related to how light is scattered from a surface.
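On a gridded height map, equations 8.18 and 8.19 can be evaluated directly from finite-difference gradients. The sketch below uses central differences; the surface and spacings are hypothetical example values.

```python
import numpy as np

def sdq_sdr(z, dx, dy):
    """Sdq (equation 8.18) and Sdr (equation 8.19) from a gridded height map."""
    dzdy, dzdx = np.gradient(z, dy, dx)               # central differences along y then x
    g2 = dzdx ** 2 + dzdy ** 2
    sdq = np.sqrt(np.mean(g2))
    sdr = np.mean(np.sqrt(1.0 + g2) - 1.0)            # often quoted as a percentage
    return sdq, sdr

# Hypothetical example: 0.5 µm spacing in both directions, heights in µm
z = np.random.default_rng(3).normal(0, 0.05, (200, 200))
sdq, sdr = sdq_sdr(z, dx=0.5, dy=0.5)
print(f"Sdq = {sdq:.3f}, Sdr = {100 * sdr:.2f} %")
```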
8.3.6.4 Functions and related parameters The functions and related parameters are an areal extension of the profile curves and parameters described in section 8.2.9.
8.3.6.4.1 Areal material ratio of the scale limited surface This is a function representing the areal material ratio of the scale-limited surface as a function of height. The related parameters are calculated by approximating the areal material ratio curve by a set of straight lines. The parameters are derived from three sections of the areal material ratio curve: the peaks above the mean plateau, the plateaux themselves and the valleys between plateaux.
8.3.6.4.2 Areal material ratio of the scale-limited surface, Smc(c) The areal material ratio is the ratio of the material at a specified height, c, to the evaluation area expressed as a percentage (see Figure 8.15). The heights are taken from the reference plane.
8.3.6.4.3 Inverse areal material ratio of the scale-limited surface, Sdc(mr) The inverse areal material ratio is the height, c, at which a given areal material ratio, mr, is satisfied, taken from the reference plane (see Figure 8.16).
FIGURE 8.15 Areal material ratio curve.
8.3.6.4.4 Areal parameters for stratified functional surfaces of scale-limited surfaces Parameters (Sk, Spk, Svk, Smr1, Smr2, Svq and Smq) for stratified functional surfaces are defined according to the specification standards for stratified surfaces [22,23].
FIGURE 8.16 Inverse areal material ratio curve.
8.3.6.4.5 Void volume, Vv(mr)

The volume of voids per unit area for a given material ratio is calculated from the material ratio curve,

Vv(mr) = \frac{K}{100\,\%} \int_{mr}^{100\,\%} [Sdc(mr) - Sdc(q)]\, dq   (8.20)

where K is a constant to convert to millilitres per metre squared. The dale void volume at p material ratio is given by

Vvv = Vv(p)   (8.21)

and the core void volume (the difference in void volume between p and q material ratio) is given by

Vvc = Vv(p) - Vv(q)   (8.22)

where the default values for p (also for Vvv) and q are 10 % and 80 % respectively [26].
8.3.6.4.6 Material volume, Vm(mr)

The material volume is the volume of material per unit area at a given material ratio, calculated from the areal material ratio curve,

Vm(mr) = \frac{K}{100\,\%} \int_{0}^{mr} [Sdc(q) - Sdc(mr)]\, dq   (8.23)

where K is defined as in equation (8.20). The peak material volume at p is given by

Vmp = Vm(p)   (8.24)

and the core material volume (the difference in material volume between p and q material ratio) is given by

Vmc = Vm(q) - Vm(p)   (8.25)

where the default values for p (also for Vmp) and q are 10 % and 80 % respectively [26]. Figure 8.17 shows the parts of the material ratio curve that are represented by Vvv, Vvc, Vmp and Vmc.
8.3.6.4.7 Peak extreme height, Sxp

The peak extreme height is the difference in height between the p and q material ratios,

Sxp = Smr(p) - Smr(q)   (8.26)

where the default values for p and q are 97.5 % and 50 % respectively [26].
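A sketch of how the material ratio curve and the volume parameters above can be evaluated from a gridded height map is given below. The inverse areal material ratio Sdc(mr) is obtained by sorting the heights; the integrals of equations 8.20 and 8.23 are approximated by simple averaging, and the unit-conversion constant K is omitted, so the volumes are returned per unit area in the height unit of z. The surface z is a hypothetical example.

```python
import numpy as np

def inverse_material_ratio(z):
    """Return Sdc(mr): the height at a given areal material ratio mr (in %)."""
    heights = np.sort(z.ravel())[::-1]                  # highest point corresponds to mr = 0 %
    mr_axis = np.linspace(0.0, 100.0, heights.size)
    return lambda mr: np.interp(mr, mr_axis, heights)

def volume_parameters(z, p=10.0, q=80.0, n=2001):
    """Vvv, Vvc, Vmp and Vmc by simple numerical integration of equations (8.20) and (8.23),
    with the conversion constant K omitted."""
    sdc = inverse_material_ratio(z)

    def vv(mr):                                         # equation (8.20) without K
        qs = np.linspace(mr, 100.0, n)
        return np.mean(sdc(mr) - sdc(qs)) * (100.0 - mr) / 100.0

    def vm(mr):                                         # equation (8.23) without K
        qs = np.linspace(0.0, mr, n)
        return np.mean(sdc(qs) - sdc(mr)) * mr / 100.0

    return {'Vvv': vv(p), 'Vvc': vv(p) - vv(q), 'Vmp': vm(p), 'Vmc': vm(q) - vm(p)}

# Hypothetical example surface (heights in micrometres)
z = np.random.default_rng(4).normal(0.0, 0.1, (128, 128))
print(volume_parameters(z))
```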
FIGURE 8.17 Void volume and material volume parameters.
8.3.6.4.8 Gradient density function

The gradient density function is calculated from the scale-limited surface and shows the relative spatial frequencies against the angle of the steepest gradient, a(x, y), and the direction of the steepest gradient, b(x, y), measured anticlockwise from the x axis, where

a(x, y) = \tan^{-1}\sqrt{\left(\frac{\partial z}{\partial x}\right)^2 + \left(\frac{\partial z}{\partial y}\right)^2}   (8.27)

and

b(x, y) = \tan^{-1}\left(\frac{\partial z / \partial y}{\partial z / \partial x}\right).   (8.28)
8.3.6.5 Miscellaneous parameters

8.3.6.5.1 Texture direction of the scale-limited surface, Std

The texture direction parameter, Std, is the angle, with respect to a specified direction θ, of the absolute maximum value of the angular power spectrum. The angular power spectrum for an areal surface would be displayed as a 3D plot in which the x and y axes represent the various spatial frequencies for a given direction. The amplitude of the angular power spectrum (displayed on the z axis) represents the amplitude of the sine wave at a particular spatial
frequency direction. The angular power spectrum is found by integrating the amplitudes of each component sine wave as a function of angle. Std is useful in determining the lay direction of a surface relative to a datum by positioning the part in the measuring instrument in a known orientation. In some applications such as sealing, a subtle change in the surface texture direction may lead to adverse conditions. Std may also be used to detect the presence of a preliminary surface modification process (for example turning), which is to be removed by a subsequent operation (for example grinding).
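A simple way to approximate the angular power spectrum, and hence Std, is to bin the two-dimensional FFT power of the surface by direction and take the angle of the maximum, as in the sketch below. This is an illustrative estimate only (no windowing, no datum alignment), and the example surface is hypothetical.

```python
import numpy as np

def texture_direction(z, n_bins=180):
    """Angle (degrees, anticlockwise from the x axis) of the absolute maximum of the
    angular power spectrum: a simple FFT-based estimate related to Std."""
    z = z - z.mean()
    spec = np.abs(np.fft.fftshift(np.fft.fft2(z))) ** 2
    ny, nx = z.shape
    fy, fx = np.meshgrid(np.arange(ny) - ny // 2, np.arange(nx) - nx // 2, indexing='ij')
    angle = np.mod(np.degrees(np.arctan2(fy, fx)), 180.0)   # direction of each frequency component
    valid = np.hypot(fx, fy) > 0                             # exclude the DC term
    power, edges = np.histogram(angle[valid], bins=n_bins, range=(0.0, 180.0),
                                weights=spec[valid])
    return 0.5 * (edges[:-1] + edges[1:])[np.argmax(power)]

# Hypothetical example: ridges parallel to the x axis (height varies only along y)
y = np.arange(256)
z = np.tile(np.sin(2 * np.pi * y / 16.0), (256, 1)).T
print(texture_direction(z))   # spectral energy for this surface is concentrated at 90 degrees
```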
8.3.7 Feature characterization Traditional surface texture parameters, i.e. the profile parameters and the areal field parameters, use a statistical basis to characterize the cloud of measured points. Such parameters, and in particular, profile parameters, were developed primarily to monitor the production process. But, how does a human assess a surface? We do not usually see field parameter values but patterns of features, such as hills and valleys, and the relationships between them [5]. Pattern analysis assesses a surface in the same way. By detecting features and the relationships between them it can characterize the patterns in surface texture. Parameters that characterize surface features and their relationships are termed feature parameters [42]. Much of the early research work on feature parameters stemmed from work in such areas as machine vision and cartography. Feature characterization does not have specific feature parameters defined but has instead a toolbox of pattern-recognition techniques that can be used to characterize specified features on a scale-limited surface. The feature characterization process defined in ISO 25178 part 2 [25] has five stages which are presented below.
8.3.7.1 Step 1 – Texture feature selection The three main types of surface texture features are areal features, line features and point features (see Table 8.3). It is important to select the appropriate type of surface texture feature to describe the function of the surface that is being characterized. The various types of feature will be explained by example in the following sections.
8.3.7.2 Step 2 – Segmentation Segmentation is used to determine regions of the scale-limited surface that define the scale-limited features. The segmentation process consists of first finding the hills and dales on the scale-limited surface. This usually results in
Table 8.3  Types of scale-limited features

Class of scale-limited feature | Type of scale-limited feature | Symbol
Areal | Hill | H
Areal | Dale | D
Line | Course line | C
Line | Ridge | R
Point | Peak | P
Point | Pit | V
Point | Saddle point | S
over-segmentation of the surface, and so the smaller, or less significant, segments are pruned out to leave a suitable segmentation of the surface. Some criteria of size that can be used to define a threshold for small segments to prune out are given in Table 8.4. A surface can be divided into regions consisting of hills and regions consisting of dales. Here a hill is defined as an area from which maximum uphill paths lead to one particular peak, and a dale is defined as an area from which maximum downhill paths lead to one particular pit. By definition the boundaries between hills are course lines and the boundaries between dales are ridge lines. Ridge and course lines are maximum uphill and downhill paths respectively, emanating from saddle points and terminating at peaks and pits. ISO 25178 part 2 [25] defines a dale as consisting of a single dominant pit surrounded by a ring of ridge lines connecting peaks and saddle points, and a hill as consisting of a single dominant peak surrounded by a ring of course lines connecting pits and saddle points. Within a dale or hill there may be other pits or peaks, but they will be insignificant compared to the dominant pit or peak. Figure 8.18 shows a simulated surface and Figure 8.19 shows the corresponding contour representation displaying all the features described above (a simulated surface has been used for reasons described in section 8.3.7.2.1).

Table 8.4  Criteria of size for segmentation

Criteria of size | Symbol | Threshold
Local peak/pit height (Wolf pruning – see section 8.3.7.2.1) | Wolfprune | % of Sz
Volume of hill/dale (at height of connected saddle on change tree) | VolS | Specified volume
Area of hill/dale | Area | % of definition area
Circumference of hill/dale | Circ | Specified length
FIGURE 8.18 Example simulated surface.
FIGURE 8.19 Contour map of Figure 8.18 showing critical lines and points.
8.3.7.2.1 Change tree A useful way to organise the relationships between critical points in hills and dales, and still retain relevant information, is that of the change tree [36]. The change tree represents the relationships between contour lines from a surface. The vertical direction on the change tree represents the height. At a given height all individual contour lines are represented by a point that is
part of a line representing that contour line continuously varying with height. Saddle points are represented by the merging of two or more of these lines into one. Peaks and pits are represented by the termination of a line. Consider filling a dale gradually with water. The point where the water first flows out of the dale is a saddle point. The pit in the dale is connected to this saddle point in the change tree. Continuing to fill the new lake, the next point where the water flows out of the lake is also a saddle point. Again the line on the change tree, representing the contour of the lake shoreline, will be connected to the saddle point in the change tree. This process can be continued and establishes the connection between the pits, the saddle points and the change tree. By inverting the surface so that peaks become pits, a similar process will establish the connection between the peaks, the saddle points and the change tree. There are three types of change tree:

- the full change tree (see Figure 8.20), which represents the relationships between critical points in the hills and dales;
- the dale change tree (see Figure 8.21), which represents the relationships between pits and saddle points;
- the hill change tree (see Figure 8.22), which represents the relationships between peaks and saddle points.
The dale and hill change trees can be calculated from the full change tree.
FIGURE 8.20 Full change tree for Figure 8.19.
FIGURE 8.21 Dale change tree for Figure 8.19.
FIGURE 8.22 Hill change tree for Figure 8.19.
In practice change trees can be dominated by very short contour lines due to noise and insignificant features on a surface (this is the reason that a simulated surface was used at the beginning of this section). A mechanism is required to prune the change tree, reducing the noise but retaining the significant features. There are many methods for achieving this pruning operation that are too complex to be presented here (see [43] for a thorough mathematical treatment); it is expected that software packages for feature characterization will include pruning techniques. One method stipulated in ISO 25178 part 2 [25] is Wolf pruning, and details of this method can be found in [44].
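The segmentation into hills and dales described in this step is closely related to the watershed transform used in image processing. The following sketch, which assumes the SciPy and scikit-image libraries are available and which omits the Wolf pruning of insignificant features for brevity, labels the dales of a height map and evaluates a simple per-dale area attribute; the surface is a hypothetical smoothed random example.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def dale_segmentation(z, min_distance=3):
    """Watershed segmentation of a height map into dales (one labelled region per pit).
    Pruning of insignificant pits (e.g. Wolf pruning) is omitted here for brevity."""
    pit_coords = peak_local_max(-z, min_distance=min_distance)   # pits = local maxima of -z
    markers = np.zeros(z.shape, dtype=int)
    markers[tuple(pit_coords.T)] = np.arange(1, len(pit_coords) + 1)
    labels = watershed(z, markers)                # flood the surface upwards from the pits
    return labels, pit_coords

# Hypothetical example: labelled dales and the per-dale area attribute
rng = np.random.default_rng(5)
z = ndimage.gaussian_filter(rng.normal(size=(200, 200)), sigma=8)
labels, pits = dale_segmentation(z)
areas = np.bincount(labels.ravel())[1:]           # pixel count of each dale (an 'Area' attribute)
print(len(pits), 'dales; mean area =', areas.mean(), 'pixels')
```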
8.3.7.3 Step 3 – Significant features

It is important to determine which features on a surface are functionally significant and which are not. For each particular surface function, a segmentation function needs to be defined that divides the features produced by the segmentation into significant and insignificant ones. The set of significant features is then used for characterization. Methods (segmentation functions) for determining significant features are given in Table 8.5. Once again, it is expected that all these functions will be carried out by the software packages used for feature characterization. Various research groups are currently developing further methods for determining significant features.
8.3.7.4 Step 4 – Selection of feature attributes Once the set of significant features have been determined it is necessary to determine suitable feature attributes for characterization. Most attributes are a measure of the size of features, for example the length or volume of a feature. Some feature attributes are given in Table 8.6. Various research groups are currently developing further methods for selecting feature attributes and different forms of attribute.
Table 8.5  Methods for determining significant features

Class of feature | Segmentation function | Symbol | Parameter units
Areal | Feature is significant if not connected to the edge at a given height | Closed | Height is given as material ratio
Areal | Feature is significant if connected to the edge at a given height | Open | Height is given as material ratio
Point | A peak is significant if it has one of the top N Wolf peak heights | Top | N is an integer
Point | A pit is significant if it has one of the top N Wolf pit heights | Bot | N is an integer
Areal, line, point | All features are significant | All | –
Table 8.6  Feature attributes

Feature class | Feature attribute | Symbol
Areal | Local peak/pit height | Lpvh
Areal | Volume of areal feature | VolS, VolE
Areal | Area of areal feature | Area
Areal | Circumference of areal feature | Leng
Line | Length of line | Leng
Point | Local peak/pit height | lpvh
Point | Local curvature at critical point | Curvature
Areal, line, point | Attribute takes value of one | Count
8.3.7.5 Step 5 – Quantification of feature attribute statistics The calculation of a suitable statistic of the attributes of the significant features, a feature parameter, or alternatively a histogram of attribute values, is the final part of feature characterization. Some attribute statistics are given in Table 8.7. Various research groups are currently developing further methods for quantifying feature attribute statistics.
8.3.7.6 Feature parameters To record the results of feature characterization it is necessary to indicate the particular tools that were used in each of the five steps. An example of how to do this that shows the convention is FC; D; Wolfprune : 5 %; Edge : 60 %; VolE; Hist
Table 8.7  Attribute statistics

Attribute statistic | Symbol | Threshold
Arithmetic mean of attribute value | Mean | –
Maximum attribute value | Max | –
Minimum attribute value | Min | –
RMS attribute value | RMS | –
Percentage above a specified value | Perc | Value of threshold in units of attribute
Histogram | Hist | –
Sum of attribute values | Sum | –
Sum of all the attribute values divided by the definition area | Density | –
where FC denotes feature characterization and the next five symbols, delimited by semicolons, are the symbols from the five tables corresponding to the five steps. In sections 8.3.7.6.1 to 8.3.7.6.9 the default value for X is 5 % [26].
8.3.7.6.1 Density of peaks, Spd

The density of peaks, Spd, is the number of peaks per unit area,

Spd = FC; H; Wolfprune: X %; All; Count; Density.   (8.29)

8.3.7.6.2 Arithmetic mean peak curvature, Spc

The Spc parameter is the arithmetic mean of the principal curvatures of peaks within a definition area,

Spc = FC; P; Wolfprune: X %; All; Curvature; Mean.   (8.30)

8.3.7.6.3 Ten point height of surface, S10z

The S10z parameter is the average of the heights of the five peaks with largest global peak height added to the average value of the heights of the five pits with largest global pit height, within a definition area,

S10z = S5p + S5v.   (8.31)

8.3.7.6.4 Five point peak height, S5p

The S5p parameter is the average of the heights of the five peaks with largest global peak height, within a definition area,

S5p = FC; H; Wolfprune: X %; Top: 5; lpvh; Mean.   (8.32)

8.3.7.6.5 Five point pit height, S5v

The S5v parameter is the average of the heights of the five pits with largest global pit height, within a definition area,

S5v = FC; D; Wolfprune: X %; Bot: 5; lpvh; Mean.   (8.33)

8.3.7.6.6 Closed dale area, Sda(c)

The Sda(c) parameter is the average area of dales connected to the edge at height c,

Sda(c) = FC; D; Wolfprune: X %; Open: c; Area; Mean.   (8.34)
8.3.7.6.7 Closed hill area, Sha(c)

The Sha(c) parameter is the average area of hills connected to the edge at height c,

Sha(c) = FC; H; Wolfprune: X %; Open: c; Area; Mean.   (8.35)

8.3.7.6.8 Closed dale volume, Sdv(c)

The Sdv(c) parameter is the average volume of dales connected to the edge at height c,

Sdv(c) = FC; D; Wolfprune: X %; Open: c; VolE; Mean.   (8.36)

8.3.7.6.9 Closed hill volume, Shv(c)

The Shv(c) parameter is the average volume of hills connected to the edge at height c,

Shv(c) = FC; H; Wolfprune: X %; Open: c; VolE; Mean.   (8.37)
8.4 Fractal methods

Fractal methods have been shown to have a strong ability to discriminate profiles measured from different surfaces and can be related to functional models of interactions with surfaces. There are many ways of analysing fractal profiles [45]. Fractal parameters utilize information about both the height and the spacing characteristics of the surface, making them hybrid parameters. Fractal profiles and surfaces usually have the following characteristics:

- they are continuous but nowhere differentiable;
- they are not made up of smooth curves, but rather may be described as jagged or irregular;
- they have features that repeat over multiple scales;
- they have features that repeat in such a way that they are self-similar with respect to scale over some range of scales.
Many, if not most, measured profiles appear to have the above characteristics over some scale ranges; that is to say, many profiles and surfaces of practical interest may, by their geometric nature, be more easily described by fractal geometry than by Euclidean geometry. Fractals have some interesting geometric properties. Most interesting is that fractal surfaces have geometric properties that change with scale. Peak
and valley radii, inclination of the surface, profile length and surface area, for example, all change with the scale of observation or calculation. This means that a profile does not have a unique length. The length depends on the scale of observation or calculation. This property in particular can be effectively exploited to provide characterization methods that can be used to model phenomena that depend on roughness and to discriminate surfaces that behave differently or that were created differently. The lack of a unique length is the basis for length-scale analysis. Fractals are often characterized by a fractional, or fractal, dimension, which is essentially a measure of the complexity of the surface or profile. The fractal dimension for a line will be equal to or greater than one and less than two. The fractal dimension for a surface will be equal to or greater than two and less than three. For mathematical fractal constructs, this characterization by fractal dimension can be scale-insensitive [46]. However, most surfaces of engineering interest are smooth if viewed at a sufficiently large scale, and the fractal dimension can change with respect to scale. Two approaches have been used to adapt fractal analysis to engineering profiles and surfaces. One approach is to treat the profiles as self-affine, meaning that they have a scaling exponent that varies with scale [47]. The other approach is to divide the scales into regions. For example, most surfaces are rough at fine scales, and smooth at larger scales, and a smooth–rough crossover (SRC) scale can be used to define the boundary between rough (described by fractal geometry) and smooth (described by Euclidean geometry). In the rough region the fractal dimension can be used to characterize roughness; however, the relative lengths and relative areas at particular scales, which are used to determine the fractal dimension, may be more useful. The SRC is determined as the scale at which the relative lengths or areas exceed a certain threshold. There may be other crossover scales, separating scale regions where different surface creation mechanisms have created geometries with different complexities.
8.4.1 Linear fractal methods The fractal dimension and the length-scale fractal complexity are determined from the slope of a log-log plot of relative lengths against scale [48]. The relative lengths are the calculated lengths, determined from a series of virtual tiling exercises, divided by the nominal length (see Figure 8.23). The nominal length is the straight line length, or the length of the profile used in the length calculation projected onto the datum. In a virtual tiling exercise the length of the profile at a certain scale is calculated by stepping along the measured profile with a line segment whose length is that scale. The exercise is
FIGURE 8.23 Line segment tiling on a profile.
repeated in the series by using progressively different lengths, and plotting the logarithm of the relative lengths against the logarithm of the corresponding scale used to determine each relative length. Linear interpolation is used between measured heights to maintain consistency in the length of the line segments. The slope of the graph in Figure 8.23 is determined over some appropriate range of scales where the plot is approximately linear. The scale region is indicated with the slope. The slope multiplied by minus 1000 is the linear fractal complexity parameter,

Rlfc = -1000 × (slope).   (8.38)

One minus the slope of the length-scale plot, whose value is a negative number, is the fractal dimension,

Dls = 1 - (slope).   (8.39)
While the slope of the length-scale plot is generally negative or zero, when there are periodic structures on the surface aliasing can result in small-scale regions with positive slopes. In these cases local minima in the relative lengths can be found at integer multiples of the wavelength of the periodic structures [49]. The finest linear scale in this analysis that has meaning is the sampling interval, and the largest is the length of the measured profile. Length-scale fractal analysis has found fewer applications than area-scale fractal analysis. Some examples of its use include determining anisotropy for discriminating different kinds of dental microwear [50] and discriminating tool usage, and there is some indication that length-scale fractal analysis may be useful in understanding the skin effect in high-frequency electrical
transmissions. The relative lengths as a function of scale have also been used to compare instruments [51]. The relative length at a particular scale is related to the inclination on the surface, φ, at that scale. Inclinations on a surface vary as a function of scale (see Figure 8.24). The relative length parameter is given by

Rrel = \frac{1}{L} \sum_i \frac{p_i}{\cos \varphi_i}   (8.40)
where L is the total nominal length of the profile and pi is the nominal, or projected, length of the ith segment. The relative length can give an indication of the amount of the surface that is available for interaction. The relative area, calculated from an areal measurement, however, gives a better indication, because it contains more topographic information. When the analysed profile is sufficiently long, an SRC can be observed. At the largest scales the relative lengths will tend towards a minimum, the weighted average of the reciprocal of the cosine of the average inclination of the analysed profile. If the profile is levelled, this will be one, the minimum relative length. In any case, the slope at the largest scales will be zero, so that the fractal dimension will be one, the minimum for a profile. At smaller scales the relative lengths deviate significantly from one, and the SRC has been reached. A threshold in relative length can be used to determine the crossover in scale. There may be other crossover scales dividing regions of scale that have different slopes on the relative length-scale plot. This possibility of multiple
FIGURE 8.24 Inclination on a profile.
slopes on the length-scale plot is a characteristic of a scale-sensitive fractal analysis.
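A minimal length-scale analysis along the lines described in this section can be sketched as follows. The profile z is assumed to be sampled at a uniform spacing dx; for simplicity the profile is stepped in fixed horizontal increments rather than fixed chord lengths, which is a reasonable approximation for gently sloped engineering profiles, and the slope is fitted over the whole scale range rather than a user-selected linear region.

```python
import numpy as np

def length_scale_analysis(z, dx, steps=(1, 2, 4, 8, 16, 32)):
    """Approximate length-scale analysis of a profile z sampled at a uniform spacing dx."""
    scales, rel_lengths = [], []
    for k in steps:
        zk = z[::k]                                   # coarsened profile at this scale
        chords = np.hypot(k * dx, np.diff(zk))        # segment lengths
        scales.append(chords.mean())                  # mean chord length taken as the scale
        rel_lengths.append(chords.sum() / (k * dx * (len(zk) - 1)))
    slope = np.polyfit(np.log10(scales), np.log10(rel_lengths), 1)[0]
    return np.array(scales), np.array(rel_lengths), {'Rlfc': -1000.0 * slope, 'Dls': 1.0 - slope}

# Hypothetical example: a Brownian-like rough profile, 0.5 µm sampling interval
z = np.cumsum(np.random.default_rng(6).normal(0.0, 0.02, 4000))
print(length_scale_analysis(z, dx=0.5)[2])
```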
8.4.2 Areal fractal analysis

The areal fractal methods are in many ways similar to the linear methods discussed in section 8.4.1. As with the profile analyses, there are many methods that can be used to estimate the fractal dimension of a rough areal surface. Two areal methods can be found in ISO 25178 part 2 [25]: volume-scale and area-scale methods.
8.4.2.1 Volume-scale analysis Volume-scale analysis, also known as the variation method, estimates the volume between morphological opening and closing envelopes about a surface. The volume is estimated using nominally square, structuring elements. The size of the structuring elements is varied and the change of volume (Svs) is noted. The logarithm of the volume is plotted against the scale of the elements, i.e. the length of the sides of the square structuring elements. As the scale increases so does the volume. The fractal dimension is the slope of the plot, d, plus two. As with the length-scale analysis of engineering surfaces, volume-scale analysis can produce a plot with several slopes in different scale regions with corresponding crossover scales, making this a scale-sensitive type of fractal analysis.
8.4.2.2 Area-scale analysis Area-scale analysis estimates the area of a rough surface as a function of scale. Area-scale analysis uses repeated virtual tiling exercises of the measured surface with triangles whose area represents the scale of the analysis. For each tiling exercise the triangles are all the same size. The tiling exercises are repeated with different-sized triangles until the desired range of scales is covered (see Figure 8.25). The maximum range of areal scales that is potentially meaningful in area-scale analysis of a measured surface is from the finest areal scales, which would be half the square of the sampling interval, to the largest, which would be half of the region measured at the large scales. This is for a measurement that is approximately square with equal sampling intervals in each direction. Linear interpolation is used between measured heights to maintain consistency in the area of the triangles. The relative area (Srel) is the calculated area divided by the nominal or projected area. Therefore, the minimum relative area is one. As with the relative length, the relative area is an indication of the inclinations on the surface.
FIGURE 8.25 Tiling exercises for area-scale analysis.
The logarithm of the relative area is plotted against the logarithm of the scale to create an area-scale plot. The slope of this graph is related to the area-scale fractal complexity, Safc,

Safc = -1000 × (slope).   (8.41)

The scale range over which the slope has been determined can also be useful in discriminating surfaces, and in understanding surface texture formation and its influence on surface behaviour. The fractal dimension is given by

Das = 2 - 2 × (slope).   (8.42)
The slopes of the area-scale plots used in these calculations are negative. The calculated fractal dimensions are greater than or equal to two and less than three. The above methods are scale-sensitive fractal analyses, recognising that actual surfaces cannot be well characterized by a single fractal dimension. When the analysed region is sufficiently large there is an SRC. At the larger scales the relative areas tend towards the weighted average of the reciprocal of the cosine of the slopes of the unlevelled surface, as shown in equation (8.40). Srel will be one at the large scales if the measured surface is sufficiently large and properly levelled. In any event the slope of the relative area-scale graph will be generally zero, at sufficiently large scales if a sufficiently large region is analysed. Therefore, the fractal dimension tends towards two, or the Euclidean dimension, at large scales.
Area-scale analysis has a clear physical interpretation for many applications. Many interactions with surfaces are related to the area available to interact and with the inclinations on the surface. The relative area can serve to characterize surfaces in a manner directly related to the functioning for these kinds of interactions. For example, equations for heat, mass and charge exchange contain area terms or density terms implying area. Because the area of a rough surface depends on the scale of observation, or calculation, to use a calculated or measured area for a rough surface in heat, mass or charge exchange calculations, the appropriate scale for the exchange interaction must be known. Adhesion is an area where area-scale analysis has found application, for example thermal spray coating adhesion [52], bacterial adhesion [53] and carburizing [54], which depends on mass exchange. Area-scale analysis also appears useful in electrochemical impedance [55], gloss [56] and scattering [57]. The relative area at a particular scale can be used as a parameter for discrimination testing over a range of scales. This kind of scale-based discrimination has been successful on pharmaceuticals [58], microwear on teeth [50] and ground polyethylene ski bases [59]. Area-scale analysis can also be used to show the effects of filtering by comparing the relative areas of measurements with different filtering at different scales.
8.5 Comparison of profile and areal characterization

With the long history and usage of profile parameters, knowledge has been built up and familiarity with profile methods has developed. It will, therefore, often be necessary to compare profile and areal parameters. This section presents some guidance on the fundamental differences between the different classes of parameters and on their comparison. The largest difference between profile and areal methods is in the filtration methods used. A profile extracted from an SL surface or an SF surface is not mathematically equivalent to a profile analysed using the methods detailed in the profile standards. The latter uses a profile filter (orthogonal to the lay) and the former an areal filter, and these can produce very different results even with similar filter types (for example Gaussian) and cut-off (or nesting index). To minimize the difference between profile and areal filtering the following guidelines should be followed:

- the orientation of the rectangular portion of the surface, over which the measurement is made, is aligned with the surface lay;
- a Gaussian filter is used with the recommended cut-off value given by the default values in Table 8.1;
- other default values in the profile standards should be used, for example stylus tip radius, sample spacing, etc.;
- the length in the traverse direction of the rectangular portion of the surface should be five times the cut-off length.
Only those areal parameters that have a direct profile equivalent can be compared, for example, the root mean square height parameters Rq and Sq. As a counter example, the texture aspect ratio, Str, has no profile equivalent. Areal surface texture parameters that characterize the extrema of the surface, for example, maximum peak height, Sp, tend to have larger measured values than their equivalent profile parameters since the peaks and valleys on a measured profile nearly always go over the flanks of the peak or valley and not the true extremes.
8.6 References [1] Malacara D 2007 Optical shop testing (Wiley Series in Pure and Applied Optics) 3rd edition [2] Whitehouse D J 1982 The parameter rash - is there a cure? Wear 83 75–78 [3] Thomas T R 2008 Kenneth J Stout 1941-2006: a memorial Wear 266 490–497 [4] Stout K J, Sullivan P J, Dong W P, Mainsah E, Luo N, Mathia T, Zahouani H 1993 The development of methods for the characterization of roughness in three dimensions (Commission of the European Communities: Brussels). [5] Blunt L A, Jiang X 2003 Advanced techniques for assessment surface topography (Butterworth-Heinemann) [6] Whitehouse D J 2002 Handbook of surface and nanometrology (Taylor & Francis) 1st edition [7] Muralikrishnan B, Raja J 2008 Computational surface and roundness metrology (Springer) [8] Leach R K 2001 The measurement of surface texture using stylus instruments NPL Good practice guide No. 37 (National Physical Laboratory) [9] ISO 4287: 2000 Geometrical product specification (GPS) - Surface texture: Profile method - Terms, definitions and surface texture parameters (International Organization of Standardization) [10] ISO 11562: 1996 Geometrical product specification (GPS) - Surface texture: Profile method - Metrological characteristics of phase correct filters (International Organization of Standardization)
[11] Thomas T R 1999 Rough surfaces (Imperial College Press) 2nd edition [12] ISO 4288: 1996 Geometrical product specification (GPS) - Surface texture: Profile method - Rules and procedures for the assessment of surface texture (International Organization of Standardization) [13] Leach R K, Harris P M 2002 Ambiguities in the definition of spacing parameters for surface-texture characterization Meas. Sci. Technol. 13 1924–1930 [14] Scott P J 2007 The case of the surface texture parameter RSm Meas. Sci. Technol. 17 559–564 [15] ISO 3274: 1996 Geometrical product specification (GPS) - Surface texture: Profile method - Nominal characteristics of contact (stylus) instruments (International Organization of Standardization) [16] ISO 12179: 2000 Geometrical product specification (GPS) - Surface texture: profile method - Calibration of contact (stylus) instruments (International Organization for Standardization) [17] ISO 5436 part 1: 2000 Geometrical product specification (GPS) - Surface texture: Profile method - Measurement standards - Material measures (International Organization of Standardization) [18] ISO 5436 part 2: 2000 Geometrical product specification (GPS) - Surface texture: Profile method - Software measurement standards (International Organization of Standardization) [19] ISO 1302: 2002 Geometrical product specification (GPS) - Indication of surface texture in technical product documentation (International Organization of Standardization) [20] Cox M G, Forbes A B, Harris P M, Smith I M 2004 The classification and solution of regression problems for calibration NPL Report CMSC 24/03 [21] ISO 13565 part 1: 1996 Geometrical product specification (GPS) - Surface texture: Profile method - Surfaces having stratified functional properties Filtering and general measurement conditions (International Organization for Standardization) [22] ISO 13565 part 2: 1998 Geometrical product specification (GPS) - Surface texture: Profile method - Surfaces having stratified functional properties Height characterization using material ratio curve (International Organization for Standardization) [23] ISO 13565 part 3: 2000 Geometrical product specification (GPS) - Surface texture: Profile method - Surfaces having stratified functional properties Height characterization using material probability curve (International Organization for Standardization) [24] ISO 12085 Geometrical product specifications (GPS) - Surface texture: Profile method - Motif parameters (International Organization for Standardization) [25] ISO/DIS 25178 part 2: 2007 Geometrical product specification (GPS) Surface texture: Areal - Part 2: Terms, definitions and surface texture parameters (International Organization for Standardization)
[26] ISO/DIS 25178 part 3: 2007 Geometrical product specification (GPS) Surface texture: Areal - Part 3: Specification operators (International Organization for Standardization) [27] ISO/TS 16610–1: 2006 Geometrical product specification (GPS) - Filtration Part 1: Overview and basic terminology (International Organization for Standardization) [28] ISO/DIS 25178 part 6: 2008 Geometrical product specification (GPS) Surface texture: Areal - Part 6: Classification of methods for measuring surface texture (International Organization for Standardization) [29] ISO/DIS 25178 part 71: 2007 Geometrical product specification (GPS) Surface texture: Areal - Part 71: Software measurement standards (International Organization for Standardization) [30] ISO/DIS 25178 part 601: 2007 Geometrical product specification (GPS) Surface texture: Areal - Part 601: Nominal characteristics of contact (stylus) instruments (International Organization for Standardization) [31] ISO/DIS 25178 part 602: 2008 Geometrical product specification (GPS) Surface texture: Areal - Part 602: Nominal characteristics of non-contact (confocal chromatic probe) instruments (International Organization for Standardization) [32] ISO/CD 25178 part 603: 2007 Geometrical product specification (GPS) Surface texture: Areal - Part 603: Nominal characteristics of non-contact (phase shifting interferometric microscopy) instruments (International Organization for Standardization) [33] ISO/CD 25178 part 604: 2008 Geometrical product specification (GPS) Surface texture: Areal - Part 604: Nominal characteristics of non-contact (coherence scanning interferometry) instruments (International Organization for Standardization) [34] ISO/DIS 25178 part 701: 2007 Geometrical product specification (GPS) Surface texture: Areal - Part 701: Calibration and measurement standards for contact (stylus) instruments (International Organization for Standardization) [35] ANSI/ASME B46.1 2002 Surface texture, surface roughness, waviness and lay (American National Standards Institute) [36] Jiang X 2007 Paradigm shifts in surface metrology Part II. The current shift Proc. R. Soc. A 463 2071–2099 [37] ISO 1101: 2004 Geometrical product specification (GPS) - Geometrical tolerancing, tolerances of form, orientation, location and run-out (International Organization of Standardization) [38] Messner C, Silberschmidt W, Werner E A 2003 Thermally-induced surface roughness in austenitic-ferritic duplex stainless steel Acta Meterialia 51 1525–1537 [39] Juodzbalys G, Sapragoniene M, Wennerberg A, Baltrugonis T 2007 Titanium dental implant surface micromorphology optimization J. Oral Implant. 33 177–185
[40] Zeng W, Jiang X, Blunt L A 2008 Surface characterization-based tool wear monitoring in peripheral milling Int. J. Adv. Manuf. Technol. 40 226–233 [41] Yang X, Beyenal H, Harkin G, Lewandowski Z 2000 Quantifying biofilm structure using image analysis J. Microbio. Meth. 39 109–119 [42] Scott P J 2009 Feature parameters Wear 266 458–551 [43] Scott P J 2004 Pattern analysis and metrology: the extraction of stable features from observable measurements Proc. R. Soc. Lond. A 460 2845– 2864 [44] Wolf G W 1991 A Fortran subroutine for cartographic generalization Computer & Geoscience 17 1359–1381 [45] DeChiffre L, Lonardo P, Trumphold H, Lucca D A, Goch G, Brown C A, Raja J, Hansen H N 2000 Quantitative characterization of surface texture Ann. CIRP 49 635–652 [46] Mandelbrot B B 1977 Fractals: form, chance and dimension (WH Freeman: (San Francisco)). [47] Shepard M K, Brackett R A, Arvidson R E 1995 Self-affine (fractal) topography: surface parameterization and radar scattering J. Geophys. Res. 100 11709–11718 [48] Brown C A, Savary G 1991 Describing ground surface texture using contact profilometry and fractal analysis Wear 141 211–226 [49] Brown C A, Johnsen W A, Butland R M 1996 Scale-sensitive fractal analysis of turned surfaces Ann. CIRP 45 515–518 [50] Scott R S, Ungar P S, Bergstrom T S, Brown C A, Childs B, Teaford M F, Walker A 2006 Dental microwear texture analysis J. Hum. Evol. 51 339–349 [51] Malburg M 1997 A fractal-based comparison of surface profiling instrumentation ASPE Proc., Maryland, USA, June 36–40 [52] Brown C A, Siegmann S 2001 Fundamental scales of adhesion and areascale fractal analysis Int. J. Mach. Tools Manufac. 41 1927–1933 [53] Emerson IV R, Bergstrom T S, Liu Y, Soto E R, Brown C A, McGimpsey G W, Camesano T A 2006 Microscale correlation between surface chemistry, texture, and the adhesive strength of Staphylococcus epidermidis Langmuir 22 11311–11321 [54] Karabelchtchikova O, Brown C A, Sisson Jr. R D 2007 Effect of surface roughness on kinetics of mass transfer during gas carburizing International Heat Treatment and Surface Engineering 1 164–170 [55] McRae G A, Maguire M A, Jeffrey C A, Guzonas D A, Brown C A 2002 Atomic force microscopy of fractal anodic oxides on Zr-2.5Nb J. Appl. Surf. Sci. 191 94–105 [56] Whitehouse D J, Bowen D K, Venkatesh V C, Leonardo P, Brown C A 1994 Gloss and surface topography Ann. CIRP 2 541–549 [57] Shipulski E M, Brown C A 1994 A scale-based model of reflectivity Fractals 2 413–416
[58] Narayan Hancock P B, Hamel R, Bergstrom T S, Brown C A 2006 Using fractal analysis to differentiate the surface topography of various pharmaceutical excipient compacts Mat. Sci. Eng. A: Structural Materials: Properties, Microstructure and Processing 430 79–89 [59] Jordan S E, Brown C A 2006 Comparing texture characterization parameters on their ability to differentiate ground polyethylene ski bases Wear 261 398–409
CHAPTER 9
Coordinate metrology

9.1 Introduction to CMMs

This section gives an overview of coordinate metrology as an introduction to the sections on miniature coordinate measuring machines (CMMs). An understanding of the operation of normal industrial CMMs will help in the understanding of the principles of miniature CMMs. A CMM is a measuring system with the means to move a probing system and the capability to determine spatial coordinates on the surface of the part being measured. A photograph of a typical CMM is shown in Figure 9.1. CMMs come in a number of configurations (see Figure 9.2) and a range of sizes, from those able to measure something the size of a bus to the miniature versions described in section 9.4. However, the majority of CMMs fall in the range 0.5 m to 2 m. CMMs generally incorporate three linear axes and use Cartesian coordinates, but CMMs are available with four axes, where the fourth axis is generally a rotary axis. The first CMMs became available in the late 1950s and early 1960s (see [1] for a thorough description of CMMs and some history, and [2] for an overview of their use). CMMs measure either by single-point probing, where data from single points on the surface are collected, or by scanning, where data are collected continuously as the stylus tip is dragged across the surface. The stylus tip in contact with the surface is usually a synthetic ruby ball, although other geometries are possible, for example cylindrical stylus tips. The data collected by the CMM are essentially ball-centre data. The stylus in contact with the surface, therefore, needs to be qualified to determine the effective stylus radius and the position of the centre of the tip relative to some reference point. Stylus qualification is carried out by measuring a known artefact, usually a high-quality ceramic sphere. The data collected from the part being measured need to be aligned with either the component drawing or a computer-aided design (CAD) model.
CONTENTS
Introduction to CMMs
Sources of error on CMMs
Traceability, calibration and performance verification of CMMs
Miniature CMMs
Miniature CMM probes
Calibration of miniature CMMs
References
263
264
C H A P T ER 9 : Coordinate metrology
FIGURE 9.1 A typical moving bridge CMM.
This alignment is usually carried out with reference to defined datum features on the drawing. However, for freeform artefacts (see section 9.1.5) a best-fit alignment may be more appropriate. Once data are collected they are analysed by a software package. This involves fitting substitute elements (circles, planes, etc.) to the collected data. The software can then be used to calculate intersection points, distances between features, locations of features in the workpiece coordinate frame, distances between features and form errors such as roundness, cylindricity, etc. The international specification standard for CMMs is ISO 10360. CMM types are described in ISO 10360 part 1 [3] and include: -
Fixed table cantilever CMMs (Figure 9.2a)
-
Moving bridge CMMs (Figure 9.2b)
-
Gantry CMMs (Figure 9.2c)
-
L-shaped bridge CMM (Figure 9.2d)
-
Fixed bridge CMMs (Figure 9.2e)
Introduction to CMMs
FIGURE 9.2 CMM configurations.
265
266
C H A P T ER 9 : Coordinate metrology
-
Moving table cantilever CMMs (Figure 9.2f)
-
Column CMMs (Figure 9.2g)
-
Moving ram horizontal-arm CMM (Figure 9.2h)
-
Fixed table horizontal-arm CMM (Figure 9.2i and j)
-
Moving table horizontal-arm CMM (Figure 9.2k)
Moving and fixed bridge type CMMs are the most common design. A further type of CMM also encountered is the vision system. A vision system CMM is essentially a microscope mounted on one of the CMM arrangements described above. It is often referred to as being 2.5D as the range and access in the vertical, z axis is inferior to that in the x and y axes (height is measured by focusing the microscope on the relevant surfaces).
9.1.1 CMM probing systems The probing system attached to a CMM [4] can be one of the following three types: -
an analogue or scanning probe;
-
a touch trigger probe;
-
a probe that employs optical technology.
An analogue probe is capable of working either in a mode where it collects points from a number of surface contacts or by scanning the component surface. It is a measuring probe and data are collected from the CMM scales and the probe as it scans along the surface. A touch trigger probe works by recording the machine coordinates when the stylus tip contacts the surface. It is essentially on or off. Various optical probes can be attached to CMMs, often working on a range of principles, for example, triangulation (see section 6.7.2.1). Optical probes have the advantage of being able to collect data significantly faster than an analogue contacting probe. However, they are generally less accurate.
9.1.2 CMM software An important part of a CMM is its software. The software needs to carry out the following tasks: -
collect data from the CMM (scales, probe, temperature sensors);
-
fit substitute elements to the data;
Introduction to CMMs
-
create alignments relating to the part in question;
-
report the data;
-
compare against CAD data where necessary.
CMM software needs to be tested and this is covered in ISO 10360 part 6 [5]. Use is made of reference data sets and reference software to check the ability of the software to calculate the parameters of basic geometric elements.
9.1.3 CMM alignment To measure a component on a CMM, its alignment relative to the coordinate system of the machine needs to be described. This alignment is usually made using datum features on the part in question. The alignment needs to control the following: -
the part spatial rotation (two degrees of freedom);
-
the part planar rotation (one degree of freedom);
-
the part origin (three degrees of freedom).
As an example, for a rectangular block the alignment process would typically be: 1. Measure a plane on the top surface (defines rotation axis and z zero) 2. Measure a line on the side face (defines planar rotation about z axis and y zero) 3. Measure a point on a face orthogonal to the side face (x zero) Other alignments are possible, for example, best-fit alignments and reference point alignments are used for freeform shapes.
9.1.4 CMMs and CAD Modern CMM software allows programming direct from a CAD model. Furthermore, once data are collected the actual points can be compared to the nominal points and pictorial representations of the errors created. Point clouds can also be best-fitted to the CAD model for alignment purposes.
267
268
C H A P T ER 9 : Coordinate metrology
9.1.5 Prismatic against freeform Artefacts measured on CMMs fall into two categories: -
purely prismatic components, examples of which include engine blocks, brake components, bearings, etc.
-
freeform components, examples of which include car doors, body panels, mobile phone covers, IT peripherals, etc.
Prismatic components can be broken down into easily defined elements, for example, planes, circles, cylinders, cones and spheres. A measurement will consist of breaking down the component into these geometries and then looking at their inter-relationships, for example, the distance between two holes or the diameter of a pitch circle. Freeform components cannot be broken down as with prismatic components. Generally the surface is contacted at a large number of points and a surface approximated to the data. If a CAD model exists then the cloud of data can be compared directly against the CAD model. Having a CAD model is an advantage for freeform surfaces, as the nominal local slope at the contact point is known in advance. The local slope is needed to correctly correct for the probe tip radius in a direction normal to the surface. For reverse engineering applications, the local slope needs to be approximated from measurement points adjacent to the target point. Many real-world components are a mixture of freeform surfaces and geometric features; for example, a mobile phone cover may have location pins that need to be measured.
9.1.6 Other types of CMM Other types of coordinate measuring systems include articulated-arm CMMs and laser trackers ([2] discusses both types of CMM). The devices have the advantage that they are generally portable and are better suited to measuring larger items, for example, aerospace components.
9.2 Sources of error on CMMs Whilst CMM manufacturers aim to build CMMs with small geometric errors, no CMM is constructed perfectly. A typical CMM has twenty-one sources of geometric error. Each axis has a linear error, three rotation errors and two straightness errors (six per axis gives eighteen). The final three errors
Traceability, calibration and performance verification of CMMs
are the orthogonality errors between any two pairs of axes. These errors are also described briefly in section 7.3.4 for scanning probe microscopes. Traditionally these errors were minimized during manufacture of the CMM. However, with the advent of modern computers CMMs can be errormapped (volumetric error compensation) with corrections to geometric errors made in software [1,6–8]. CMM geometric errors are measured in one of the four following manners: -
using instruments such as straight edges, autocollimators and levels;
-
using a laser interferometer system and associated optics;
-
using a calibrated-hole plate [9];
-
using a tracking laser interferometer [10].
9.3 Traceability, calibration and performance verification of CMMs Calibration and performance verification are two issues that are often confused when talking about CMMs [2]. To clarify, CMM calibration is the measurement of the twenty-one degrees of freedom of a CMM to enable mechanical correction or error mapping of a CMM. Performance verification is a series of tests that allows the manufacturer of the CMM to demonstrate that an individual machine meets the manufacturer’s specification. Note that calibration can be part of the performance verification. The ISO 10360 series of specification standards defines the procedure for performance verification of CMMs. The series is broken down into six parts, which are briefly described. Part 1: Vocabulary. Part 1 [3] describes the terminology used to describe CMMs. It is important when describing CMMs to adhere to this terminology. Part 2: CMMs used for measuring size. Part 2 [11] describes how a CMM should be specified and the necessary steps to show that a machine meets specification. The tests detailed in part 2 involve: -
measuring a good-quality sphere at a number of positions and examining the variation in indicated radius;
269
270
C H A P T ER 9 : Coordinate metrology
-
measuring a series of lengths in a number of directions in the machine volume and comparing the machine indication against the known size of the artefact.
In addition part 2 describes how stable artefacts can be used for interim monitoring of the CMM. Part 3: CMMs with the axis of a rotary table as the fourth axis. Part 3 [12] describes the extra steps necessary to performance-verify a CMM which has a rotary axis as the fourth axis. Part 4: CMMs used in scanning measuring mode. Part 4 [13] contains the tests necessary to demonstrate that the scanning capability of a CMM meets specification. Part 5: CMMs using multiple-stylus probing systems. Part 5 [14] extends the probe test covered in part 2 to cover multiple-stylus probing systems. Part 6: Estimation of errors in computing Gaussian associated features. Part 6 [5] is concerned with assessing the correctness of the parameters of computed associated features as measured by a CMM or other coordinate measuring system.
9.3.1 Traceability of CMMs Traceability of CMMs is difficult to demonstrate. One of the problems is associating a measurement uncertainty with a result straight off the CMM. The formulation of a classical uncertainty budget is impracticable for the majority of the measurement tasks for CMMs due to the complexity of the measuring process. It used to be the case that the only way to demonstrate traceability was to carry out ISO 10360-type tests on the machine. However, if a CMM is performance-verified this does not automatically mean that measurements carried out with this CMM are calibrated and/or traceable. A performance verification only demonstrates that the machine meets its specification for measuring simple lengths, i.e. it is not task-specific. This task-specific nature of a CMM can be illustrated with a simple example. Suppose a CMM measures a circle in order to determine its diameter. To do this the CMM measures points on that circle. The points can be measured equally spaced along the circumference, but may have to be from a small section only, for example because there is no material present at the rest of the circle. This is illustrated in Figure 9.3, which shows the effect on the diameter and the centre location if measurements with the same uncertainty are taken in a different manner. This means that even if
Traceability, calibration and performance verification of CMMs
FIGURE 9.3 Illustration of the effect of different measurement strategies on the diameter and location of a circle. The measurement points are indicated in red; the calculated circles from the three sets are in black and the centres are indicated in blue.
the uncertainty for a single coordinate is known, this does not simply correspond to an uncertainty of a feature that is calculated from multiple points. A better method is described in ISO/TS 15530 part 3 [15]. This specification standard makes use of calibrated artefacts to essentially use the CMM as a comparator. The uncertainty evaluation is based on a sequence of measurements on a calibrated object or objects, performed in the same way and under the same conditions as the actual measurements. The differences between the results obtained from the measurement of the objects and the known calibration values of these calibrated objects are used to estimate the uncertainty of the measurements. However, this method requires independently calibrated artefacts for all its measurements, which is quite contradictory to the universal nature of a CMM. Alternative methods that are consistent with the GUM (see section 2.8.3) can be used to determine the task specific uncertainty of coordinate measurements. One such method that evaluates the uncertainty by numerical simulation of the measuring process is described in ISO/TS 15530 part 4 [16]. To allow CMM users to easily create uncertainty statements, CMM suppliers and other third-party companies have developed uncertaintyevaluating software, also known as virtual CMMs [17]. Even by adopting ISO 15530 part 4, there are many different approaches to the implementation of a virtual CMM [18–20].
271
272
C H A P T ER 9 : Coordinate metrology
9.4 Miniature CMMs The advent and adoption of the CMM greatly reduced the complexity, down time and operator skill required for measurements in a production environment. It is difficult to imagine a modern successful automobilemanufacturing plant that does not employ CMMs. The ‘CMM revolution’ has yet to come to the MNT manufacturing area. Once again many instruments are employed to measure the dimensions of MNT parts but there are now additional problems despite their tiny size: many of the parts that need measuring are very complex, high-aspect-ratio structures that may be constructed from materials that are difficult to contact with a mechanical probe (for example, polymers or bio-materials). Also, there is often a need to measure the surface topography of steep walls found in, for example, deep reactive ion etched (DRIE) structures used for MEMS. The only instruments that are available are those which essentially ‘measure from above’ and were traditionally used to measure surface topography. These instruments generally lack traceability for surface topography measurements (see section 6.10). Therefore, it is difficult to measure with any degree of confidence the complex structures that are routinely encountered in MNT products. In recent years many groups have developed ‘small CMMs’, typically with ranges of tens of millimetres and tens of nanometres accuracy in the x, y and z directions. These miniature CMMs come in two forms: those that are developed as stand-alone CMMs and those that are retrofitted to macro-scale CMMs. One of the first miniature CMMs of the latter form was the compact high-accuracy CMM developed at NPL [21]. This CMM used the movement scales of a conventional CMM with a retrofitted high-accuracy probe with six degrees of freedom metrology. This CMM had a working volume of 50 mm by 50 mm by 50 mm with a volumetric accuracy of 50 nm. Retrofitted CMMs will not be discussed in detail as they are simply a combination of conventional CMMs (see section 9.1) and micro-CMM probes (see section 9.5). One technical challenge with probing MNT structures arises due to the inability to actually navigate around the object being measured without some likelihood of a collision between the probe and the part being measured. Typical miniature probing systems are less robust than on larger CMMs that incorporate collision protection. Future research must concentrate on these difficult technical issues associated with micro-CMMs if they are to become as widely used as conventional CMMs. However, the technical barriers associated with mechanical contact of probes at the micro-scale may force researchers to look into completely novel approaches such as SEM-based photogrammetry or x-ray computed tomography [22].
Miniature CMMs
9.4.1 Stand-alone miniature CMMs Only two examples of stand-alone miniature CMMs are given here because they are the only two that are both currently commercially available and for which quite extensive information is available in the open literature. Two further examples are the Mitutoyo Nanocord and the IBS ISARA (based on work by [23] but with roller as opposed to air bearing slideways). There are many instruments that are at the research stage (see for example [24]) and some that were developed but are not currently commercially available (see for example [23,25,26]).
9.4.1.1 A linescale-based miniature CMM The F25 is a miniature CMM based on a design by the Eindhoven University of Technology (TUE) [27] and is available commercially from Carl Zeiss. The F25 has a unique kinematic design that helps to eliminate some of the geometric errors inherent in conventional CMMs. The basic kinematic layout is shown schematically in Figure 9.4. The red arms are stationary and firmly attached to the machine. The blue arms form the x and y measurement axes and are free to move. The green arms connect the x and y axes to the machine and also hold them orthogonal to the machine. Rather than moving orthogonally and independently of each other, as is the case for most CMMs, the x and y axes are connected together at right angles and move as a single unit. This acts to increase the stiffness and accuracy of the machine. The extensive use of high-quality air bearings to support the xy frame and a large granite base also help to increase the stability of the system.
FIGURE 9.4 Schema of the kinematic design of the Zeiss F25 CMM.
273
274
C H A P T ER 9 : Coordinate metrology
During the redesign process of the original TUE machine, Zeiss changed many of the component parts so they became serviceable, and added a controller and software. The other main additions to the redesign were aimed at increasing the overall stiffness of the system, and included the addition of high-quality air bearings and a general increase in mass of all the major components. The F25 is subject to only thirteen geometric errors and has minimal Abbe error in the horizontal mid-plane. The measurement capacity is 100 mm by 100 mm by 100 mm. The resolution on the glass-ceramic linescales on all measurement axes is 7.8 nm and the quoted volumetric measurement accuracy is 250 nm. The F25 has a tactile probe based on silicon membrane technology (see section 9.5) with a minimum commercially available stylus tip diameter of 0.125 mm. The F25 also includes a camera sensor with an objective lens that is used to make optical 2D measurements. The optics are optimized to exhibit a high depth of field and low distortion. The whole system allows measurements to be taken from the optical sensors and the tactile probe whilst using the same programmed coordinate system. A second camera is used to aid observation of the probe during manual measurement and programming.
9.4.1.2 A laser interferometer-based miniature CMM The Nanomeasuring Machine (NMM) was developed by the Ilmenau University of Technology [28,29] and is manufactured by SIOS Messtechnik GmbH. The device implements sample scanning over a range of 25 mm by 25 mm by 5 mm with a resolution of 0.1 nm. The measurement uncertainty is 3 nm to 5 nm and the repeatability is 1 nm to 2 nm. Figure 9.5 illustrates the configuration of a NMM, which consists of the following main components: -
traceable linear and angular measurement instruments;
-
a 3D nanopositioning stage;
-
probes suitable for integration into the NMM;
-
control equipment.
Both the metrology frame, which carries the measuring systems (interferometers), and the 3D stage are arranged on a granite base. The upper Zerodur plate (not shown in Figure 9.5) of the metrological frame is constructed such that various probes can be installed and removed. A corner mirror is moved by the 3D stage, which is built in a stacked arrangement. The separate stages consist of ball-bearing guides and voice coil drives. The
Miniature CMM probes
FIGURE 9.5 Schema of the NMM.
corner mirror is measured and controlled by single, double and triple beam plane mirror interferometers that are used to measure and control the six degrees of freedom of the 3D stage. The three laser interferometer measuring beams are reflected from the outer surfaces of the corner mirror, whereby the virtual extension of the reflected beams intersect at the point of contact between the specimen and the sensor (see Figure 9.6). Because the sample, as opposed to the probe, is scanned in the NMM, the Abbe principle is realised over the entire measuring range. Angular deviations of the guide systems are detected at the corner mirror by means of a double and a triple beam plane mirror interferometer. The detected angular deviations are compensated by a closed-loop control system. The NMM can be used with a range of probes, including both tactile and optical probes.
9.5 Miniature CMM probes Many research groups have developed miniature CMM probes and a select few probes are now available commercially (see [30] for a review of highaccuracy CMM probes that includes miniature probes). Whilst sometimes referred to as ‘micro-CMMs’, most miniature CMMs usually have a standard probe tip of diameter 0.3 mm (although tips with a diameter of 0.125 mm are
275
276
C H A P T ER 9 : Coordinate metrology
FIGURE 9.6 Schema of the NMM measurement coordinate measuring principle.
readily available). This is far too large to measure a typical MEMS structure, for example a deep hole or steep DRIE trench. What are required are smaller, micrometre-scale probe tips that measure in 3D. This is not simply a matter of scaling the size of the probe in direct analogy with probes on conventional CMMs. CMM probe heads that have been simply scaled down in size have achieved measurement uncertainties of 50 nm [25]. They have been meticulously designed to reduce the probing force and ensure equal probing forces in each measurement axis. However, even with extensive redesign, these probes tend to have an overall mass of several grams. With stylus tip diameters needing to be sub-millimetre these probes are quite destructive at any probing force above 1 mN [31]. Addressing the problem of contact force reduction is one major area of development in micro-scale probe design and manufacture as it is one of several potential sources of error that, on the scale where micro-scale probes will be operating, is of the same order of magnitude as the desired probing accuracy. The pressure field generated at the surface when a miniature tip comes into contact may be sufficient to cause plastic deformation [32]. Reducing the contact force during measurement will greatly reduce the possible damage caused and also increase the accuracy of the measurement. Monitoring of the tunnelling current between the probe tip and the sample being measured has been proposed to avoid physical contact with the surface [33]. In an attempt to reduce the probing force, silicon flexures, membranes or meshes are used to suspend the probe shaft. Using methods for chemical
Miniature CMM probes
etching and vapour deposition developed by the IC industry, highly complex probes can be made consisting of multiple layers of electrical connections, strain gauges, flexures, meshes or membranes. As well as reducing the overall contact force exerted on the measurement surface, using silicon to suspend the microprobe also serves to make surface contact detection more sensitive. As the stem diameter gets smaller its compliance increases and it becomes more difficult to sense a deflection of the probe using conventional elastic hinges. These methods have been demonstrated as viable for micro-scale probe production and at the Eindhoven University of Technology (TUE) they have produced a probe with a measurement uncertainty of 30 nm [34]. Using these production methods presents major design challenges and has radically changed what we perceive to be a CMM probe (as seen in Figure 9.7). However, because of the prolific use of these etching methods, large-scale production of these probes can easily be realised, substantially reducing costs. Research at PTB has developed a highly accurate silicon micro-scale probe that is designed around a silicon membrane onto which a micro-stylus is attached [35] (a similar probe was also developed elsewhere [36]). Both the TUE and the PTB probes are instrumented with piezoelectric strain sensors that have been etched onto the silicon suspension membrane. These will detect when the probe makes contact with the measurement surface by producing a voltage signal when membrane deformation occurs.
FIGURE 9.7 Silicon micro-scale probe designed by [34], produced by chemical etching and vapour deposition.
277
278
C H A P T ER 9 : Coordinate metrology
When probes are refined to operate at even lower probing forces their silicon membranes or flexures can have the adverse effect of giving false readings due to inertia. This means that the probes must be moved at very slow speeds, which slows the measurement process. Membrane probes are also unable to exert similar forces in all measurement axes, reducing their accuracy as true three-dimensional probes [37]. In a further attempt to reduce the surface damage caused by probe interactions, probes have been developed in both PTB [38] and NIST [39] that take optical measurements from illuminated glass fibres. One such design is shown in Figure 9.8. The operating principle of fibre probe systems is surprisingly simple. The contact element of the probe is formed by a microsphere that is attached to the tip of a single optical fibre. An optical system is then focused on the microsphere or the shaft of the glass fibre. Analysing the movement of either identifies contact with a measurement surface. This results in a probe with a measurement force of the order of a few hundred nanonewtons and a very high aspect ratio. Using a probe with a diameter of 75 mm and a length of 50 mm NIST has been able to investigate the nozzles of fuel injectors and the diameters of optical ferrules to an uncertainty of 70 nm [39].
FIGURE 9.8 The fibre probe developed by PTB. Notice the second microsphere on the shaft of the fibre; this gives accurate measurement of variations in sample ‘height’ (z axis) [38].
Miniature CMM probes
Even though reducing the probing interaction force reduces the problem of plastic deformation of the sample, it does not address the problems due to surface forces. These surface forces, including electrostatic, van der Waals and the resulting interactions due to liquid films (see section 7.3.7), could conceivably cause a false trigger in a low-force probe. These forces also have the effect of producing ‘snap back’ on low-force probes when they retract from the measurement surface. Once the probe tip has come into contact with the measurement surface, regardless of possible initial attraction or false triggering, the surface forces will tend to hold the probe head on the surface, even while the CMM head is retracting. The result is that the probe will ‘snap back’ from the measurement surface, a movement that could cause serious damage to the probe. The surface forces also cause a stick-slip phenomenon that can seriously reduce the speed of measurement [40]. Both measurement force and surface force problems have been addressed by the development of vibrating probes, whose basic concept is shown in Figure 9.9. Such probes are forced to vibrate at a specific frequency and small amplitude by piezoelectric elements at the top of the probe shaft. Any contact made with a measurement surface will result in a change in the frequency at which the probe vibrates that is detected by piezoelectric sensors [30]. Modern piezoelectric sensors can detect very small changes in vibration amplitude. Therefore, registering a contact using this detection technique allows the measurement force to be greatly reduced.
FIGURE 9.9 A vibrating fibre probe. The vibrating end forms a ‘virtual’ tip that will detect contact with the measurement surface while imparting very little force [41].
279
280
C H A P T ER 9 : Coordinate metrology
This probing method also addresses the problem caused by the surface forces. After extensive investigation into the strength of the contact forces, the detection sensitivity can be tuned so that only a true surface contact registers as a measurement, rather than probe interaction with surface forces. As such, vibrating probes have displayed an ability to repeatedly resolve surface features of less than 5 nm [41]. However, even though current vibrating probes exert low measurement forces and address the problem of measurement errors due to surface interactions, they do not have the ability to have the same probing interaction force (or compliance) in each axis. With the requirements of a high-accuracy micro-scale probe clearly defined, developments have been made that address all of the main problems faced and extensive work is being carried out on integrating all these requirements into one single system. Current research includes the development of a fully 3D vibrating microprobe at NPL [42]. Extensive work has also been carried out to alter existing surface measuring systems so that they can cope with the high-aspect-ratio features that this new generation of micro-scale probes hope to address [43]. A device developed at PTB consists of an AFM tip attached vertically onto the end of a standard AFM cantilever, shown in Figure 9.10. The device was able to take internal measurements of sidewalls on MEMS devices [44].
FIGURE 9.10 Vertical AFM probe for MEMS sidewall investigation [44].
Calibration of miniature CMMs
A simple procedure for calibrating the form of the probe tip, which is relatively easy for millimetre-sized probes, becomes a significant challenge when dealing with probes of a few tens of micrometres diameter. Reversal methods have been developed [25] and a stand-alone optical instrument has been developed [45]. However, to date, accuracies of several tens of nanometres have been achieved – this will need to be improved for future, smaller probes. The manufacture of ever-smaller probes is also a very difficult task. Drawn optical fibres [38] and micro-electro-discharge machining [46] techniques have been used to produce miniature balls on stems for probes but are fast approaching their manufacturing and materials limits [47,48]. Optical methods have also been used in conjunction with miniature CMMs. Optical probes are usually based on the same principles as those for surface topography measurement (see section 6.7) and are summarized in [49]. Optical probes have also been developed that operate in a similar manner to a tactile probe but with an interferometric output [50].
9.6 Calibration of miniature CMMs Miniature CMMs suffer from geometric errors in the same way as large-scale CMMs. With accuracy goals being higher for the miniature CMMs, the importance of a proper calibration of the instrument increases. The purpose of the calibration is to map the systematic errors of the miniature CMM, so that they can be compensated for. Some effects will perhaps not be compensated, but they still have to be measured in order to assign an uncertainty contribution to them. If care is taken to ensure that all steps in the calibration are traceable to the standard of length, this forms the basis for the traceability of the miniature CMM as a whole. For large CMMs, it is customary for the manufacturer to performance verify a CMM with gauge blocks according to ISO 10360 part 2 [11]. The advantages of gauge blocks as performance verification artefacts are that they can be calibrated with low uncertainty (around 25 nm), and that their use in performance verification is well established for large-scale CMMs. Because of the short shaft of miniature CMM stylus systems, it is typically not possible to use the central length of the gauge block. Probing will be close to the edge of the gauge block, which should be taken into account in the initial gauge block calibration. If the gauge block is rotated out of the horizontal plane, the CMM probe can no longer reach the bottom face of the gauge block, and an additional surface has to be wrung onto the gauge block. Some specialized artefacts have also been developed for performance verification of miniature CMMs. For one-dimensional verification
281
282
C H A P T ER 9 : Coordinate metrology
measurements, METAS (the NMI in Switzerland) has developed miniature ball bars [51] (see Figure 9.11a), consisting of ruby spheres connected by a ZerodurÔ rod. Spheres are widely used in artefacts for performance verification, because measuring the relative position of spheres eliminates effects from the probe diameter, shape and sensitivity, thereby allowing verification of the guidance error correction only. However, the probe related effects have to be verified in an additional test. Two-dimensional artefacts in the form of regular arrays of balls or holes have been developed by PTB (see Figure 9.11b) [52]. PTB has also developed a 2D calotte plate and a 3D calotte cube (see Figure 9.11c and Figure 9.11d) [52]. As an option with the F25 miniature CMM, Carl Zeiss supplies a half sphere plate with seven half spheres on a Zerodur plate (Figure 9.11e). The use of half spheres instead of full spheres gives better contrast in optical measurements with a vision system. By measuring a ball or hole plate in different orientations and using error separation techniques, it is possible to obtain the remaining errors of the CMM but not scale, without external calibration of the ball or hole positions.
a
c
b
d
e
FIGURE 9.11 Miniature CMM performance verification artefacts. (a) METAS miniature ball bar, (b) PTB ball plate, (c) PTB calotte plate, (d) PTB calotte cube, (e) Zeiss halfsphere plate.
Calibration of miniature CMMs
9.6.1 Calibration of laser interferometer-based miniature CMMs With the calibration of the laser interferometers on a miniature CMM, the length scale is established. The following geometrical errors have to be characterized in order to establish traceability: -
cosine errors;
-
Abbe errors;
-
mirror shape deviations;
-
squareness errors.
The cosine error is directly related to the quality of the laser alignment relative to the mirror normal (see section 5.2.8.3). Abbe errors result from parasitic rotations in combination with an offset between the probed position on the object and the position where the measurement is taken. Abbe errors can be minimized by moving the sample instead of the probe and having the virtual intersection of the laser beams coincide with the probe centre (as on the NMM in section 9.4.1.2). The maximum Abbe offset that remains has to be estimated, in order to quantify the maximum residual Abbe error. The rotational errors can be measured with an autocollimator or a laser interferometer with angular optics (see section 5.2.9). The NMM (see section 9.4.1.2) uses double and triple interferometers to measure the angular deviations during operation and actively correct for them – this greatly reduces the Abbe errors. The mirror flatness can be measured on a Fizeau interferometer (see section 4.4.2). The angle between the orthogonal mirrors can be measured by removing the mirror block from the instrument and using optical techniques (for example, by comparison with a calibrated optical square). It is also possible to calibrate the orthogonal mirror block directly, by extending it with two additional mirrors and calibrating it as if it were a four-sided polygon [53]. Alternatively, the squareness can be determined using a suitable calibration artefact on the miniature CMM (see below).
9.6.2 Calibration of linescale-based miniature CMMs For a linescale-based miniature CMM, such as the Zeiss F25 (see section 9.4.1.1), the traceability is indirect via the linescales. The linescales are periodically compared to a laser interferometer in a calibration. The calibrated aspects are the linearity, straightness and rotational errors. The squareness between the axes is determined separately, by a CMM measurement on a dedicated artefact.
283
284
C H A P T ER 9 : Coordinate metrology
For the linearity determination, a cube-corner retro-reflector is mounted in place of, or next to, the probe. The offset between the centre of the retroreflector and the probe centre is kept as small as possible, in order to minimize the Abbe error in the linearity determination. Care must also be taken to minimize the cosine errors during the linearity calibration. Alignment by eye is good enough for large-scale CMMs, but for miniature CMMs with their increased accuracy goal, special measures have to be taken. For the calibration of the F25 a position-sensitive detector (PSD) has been used for alignment [54]. The return laser beam is directed onto the PSD and the run-out over the 100 mm stroke reduced to a few micrometres. This translates into less than 1 nm of cosine error over the full travel. Straightness and rotations can be measured with straightness and rotational optics respectively. Because of the special construction of the F25, some errors are dependent on more than one coordinate. The platform holding the z axis moves in two dimensions on a granite table. This means that instead of two separate straightness errors, there is a combined straightness, which is a function of both x and y. The same holds for the rotations around the x and y axes. This complicates the calibration, by making it necessary to measure the straightness and rotations of the platform along several lines, divided over the measuring volume. The results of the laser interferometer calibration can be used to establish what is commonly referred to as a computer-aided accuracy (CAA) correction field. Figure 9.12 shows the results of a laser interferometer measurement of straightness (xTx) on the F25 with the CAA correction enabled [54].
FIGURE 9.12 Straightness (xTx) measurement of the F25 with the CAA correction enabled.
References
In this case, there was a half-year period between the two measurements. The remaining error is a result of the finite accuracy of the original set of measurements used to calculate the CAA field, the finite accuracy of the second set of measurements and the long-term drift of the instrument. The maximum linearity error is 60 nm. The squareness calibration of the F25 cannot be carried out with a laser interferometer, so an artefact is used. During this measurement a partial CAA correction is active, based on the laser interferometer measurements only. The artefact measurement consists of measuring a fixed length in two orientations. For the xy squareness, one of these measurements will be along the xy diagonal, the other in an orientation rotated 180 around the y axis. The squareness can then be calculated from the apparent length difference between the two orientations. The artefact can be a gauge block, but it is better to use an artefact where the distance is between two spheres, since the probe radius does not affect the measurement. Because the principle of the squareness calibration is based upon two measurements of the same length, it is particularly important that this length does not drift between the measurements. In order to get a squareness value which applies to the whole measurement volume, the two spheres should be as far apart as possible and placed symmetrically within the measurement volume.
9.7 References [1] Bosch J A 1995 Co-ordinate measuring machines and systems (CRC Press) [2] Flack D R, Hannaford J 2005 Fundamental good practice in dimensional metrology NPL Good practice guide No. 80 (National Physical Laboratory) [3] ISO 10360 part 1: 2000 Geometrical product specifications (GPS) Acceptance and reverification tests for coordinate measuring machines (CMM) - Part 1: Vocabulary (International Organization for Standardization) [4] Flack D R 2001 CMM probing NPL Good practice guide No. 43 (National Physical Laboratory) [5] ISO 10360 part 6: 2001 Geometrical product specifications (GPS) Acceptance and reverification tests for coordinate measuring machines (CMM) - Part 6: Estimation of errors in computing Gaussian associated features (International Organization for Standardization) [6] Barakat N A, Elbestawi M A, Spence A D 2000 Kinematic and geometric error compensation of coordinate measuring machines Int. J. Machine Tools Manufac. 40 833–850 [7] Satori S, Zhang G X 2007 Geometric error measurement and compensation of machines Ann. CIRP 44 599–609
285
286
C H A P T ER 9 : Coordinate metrology
[8] Schwenke H, Knapp W, Haitjema H, Weckenmann A, Schmitt R, Delbressine F 2008 Geometric error measurement and compensation for machines - an update Ann. CIRP 57 660–675 [9] Lee E S, Burdekin M 2001 A hole plate artifact design for volumetric error calibration of a CMM Int. J. Adv. Manuf. Technol. 17 508–515 [10] Schwenke H, Franke M, Hannaford J, Kunzmann H 2005 Error mapping of CMMs and machine tools by a single tracking interferometer Ann. CIRP 54 475–478 [11] ISO 10360 part 2: 2009 Geometrical product specifications (GPS) - Acceptance and reverification tests for coordinate measuring machines (CMM) Part 2: CMMs used for measuring size (International Organization for Standardization) [12] ISO 10360 part 3: 2000 Geometrical Product Specifications (GPS) - Acceptance and reverification tests for coordinate measuring machines (CMM) Part 3: CMMs with the axis of a rotary table as the fourth axis (International Organization for Standardization) [13] ISO 10360 part 4: 2000 Geometrical Product Specifications (GPS) - Acceptance and reverification tests for coordinate measuring machines (CMM) Part 4: CMMs used in scanning measuring mode (International Organization for Standardization) [14] ISO 10360 part 5: 2000 Geometrical Product Specifications (GPS) Acceptance and reverification tests for coordinate measuring machines (CMM) - Part 5: CMMs using multiple-stylus probing systems (International Organization for Standardization) [15] ISO/TS 15530 part 3: 2004 Geometrical product specifications (GPS) Coordinate measuring machines (CMM): Technique for determining the uncertainty of measurement - Part 3: Use of calibrated workpieces or standards (International Organization for Standardization) [16] ISO/TS 15530 part 4: 2008 Geometrical product specifications (GPS) Coordinate measuring machines (CMM): Technique for determining the uncertainty of measurement - Part 4: Evaluating CMM uncertainty using task specific simulation (International Organization for Standardization) [17] Balsamo A, Di Ciommo M, Mugno R, Rebaglia B I, Ricci E, Grella R 1999 Evaluation of CMM uncertainty through Monte Carlo simulations Ann. CIRP 48 425–428 [18] Takamasu K, Takahashi S, Abbe M, Furutani R 2008 Uncertainty estimation for coordinate metrology with effects of calibration and form deviation in strategy of measurement Meas. Sci. Technol. 19 84001 [19] van Dorp B, Haitjema H, Delbressine F, Schellekens P 2002 The virtual CMM method for three-dimensional coordinate machines Proc. 3rd Int. euspen Conf., Eindhoven, Netherlands, May 633–636 [20] Haitjema H, van Dorp B, Morel M, Schellekens P H J 2001 Uncertainty estimation by the concept of virtual instruments Proc. SPIE 4401 147–158
References
[21] Peggs G N, Lewis A J, Oldfield S 1999 Design for a compact high-accuracy CMM Ann. CIRP 48 417–420 [22] Hasen H N, Carniero K, Haitjema H, De Chiffre L 2006 Dimensional micro and nano metrology Ann. CIRP 55 721–743 [23] Ruijl T A M, van Eijk J 2003 A novel ultra precision CMM based on fundamental design principles Proc. ASPE, UNCC, USA, June [24] Fan K C, Fei Y T, Yu X F, Chen Y J, Wang W L, Chen F, Liu Y S 2006 Development of a low-cost micro-CMM for 3D micro/nano measurements Meas. Sci. Technol. 17 524–532 [25] Ku ¨ ng A, Meli F, Thalmann R 2007 Ultraprecision micro-CMM using a low force 3D touch probe Meas. Sci. Technol. 18 319–327 [26] van Seggelen J K, Roseille P C J N, Schellekens P H J, Spaan H A M, Bergmans R H, Kotte G J W L 2005 An elastically guided machine axis with nanometer repeatability Ann. CIRP 54 487–490 [27] Vermeulen M, Rosielle P C J N, Schellekens P H J 1998 Design of a highprecision 3D-coordinate measuring machine Ann. CIRP 47 447–450 ¨ger G, Grunwald R, Manske E, Housotte T 2004 A nanopositioning and [28] Ja nanomeasuring machine, operation, measured results Nanotechnology and Precision Engineering 2 81–84 ¨ger G, Manske E, Housotte Scott W 2002 Operation and analysis of [29] Ja a nanopositioning and nanomeasuring machine Proc. ASPE, St. Louis, Missouri, USA 229–304 [30] Weckenmann A, Estler T, Peggs G, McMurty D 2004 Probing systems in dimensional metrology Ann. CIRP 53 657–684 [31] Meli F, Ku ¨ ng A 2007 AFM investigation of surface damage caused by mechanical probing with small ruby spheres Meas. Sci. Technol. 18 486–502 [32] van Vliet W, Schellekens P 1996 Accuracy limitations of fast mechanical probing Ann. CIRP 45 483–487 [33] Hoffmann J, Weckenmann A, Sun Z 2008 Electrical probing for dimensional micro metrology Ann. CIRP 57 59–62 [34] Haitjema H, Pril W, Schellekens P 2001 Development of a silicon-based nanoprobe system for 3-D measurements Ann. CIRP 50 365–368 [35] Brand U, Kleine-Besten T, Schwenke H 2000 Development of a special CMM for dimensional metrology on microsystem components Proc. 15th ASPE, Scotsdale, Arizona, USA, Oct. 1–5 [36] Pril W O 2002 Development of high precision mechanical probes for coordinate measuring machines (PhD Thesis: Technical University of Eindhoven) [37] Kim B, Masuzawa T, Bourina T 1999 The vibroscanning method for the measurement of micro-hole profiles Measurement 10 697–705 ¨ldele F, Weiskirch C, Kunzmann H 2001 Opto-tactile sensor [38] Schwenke H, Wa for 2D and 3D measurement of small structures on coordinate measuring machines Ann. CIRP 50 381–364
287
288
C H A P T ER 9 : Coordinate metrology
[39] Stone J A, Muralikrishnan B, Stoup J R 2005 A fiber probe for CMM measurement of small features Proc. SPIE 5879 58790R [40] Thelen R, Schultz J, Meyer P, Saile V 2008 Approaching a sub-micron capability index using Werth fiber probe Proc. 4M Conf., Cardiff, UK, Oct. 147–150 [41] Bauza M B, Hocken R J, Smith S T, Woody S C 2005 Development of a virtual probe tip with an application to high aspect ratio microscale features Rev. Sci. Instrum. 76 095112 [42] Stoyanov S, Bailey C, Leach R K, Hughes E B, Wilson A, O’Neil W, Dorey R A, Shaw C, Underhill D, Almond H J 2008 Modelling and prototyping the conceptual design of a 3D CMM micro-probe 2nd Electronics System Integration Technology Conference, Greenwich, UK 193–198 [43] Peiner E, Balke M, Doering L, Brand U 2008 Tactile probes for dimensional metrology with microcomponents at nanometre resolution Meas. Sci. Technol. 19 064001 [44] Dai G, Wolff H, Weimann T, Xu M, Pohlenz F, Danzelbrink H-U 2007 Nanoscale surface measurements at sidewalls of nano- and micro-structures Meas. Sci. Technol. 18 334–341 [45] Chen L-C 2007 Automatic 3D surface reconstruction and sphericity measurement of micro spherical balls of miniaturized coordinate measurement probes Meas. Sci. Technol. 18 1748–1755 [46] Sheu D-Y 2005 Micro-spherical probes machining by EDM J. Micromech. Microeng. 15 185–189 [47] Masuzawa T 2000 State of the art in micromachining Ann. CIRP 49 473–488 [48] Kunieda M 2008 Challenges to miniaturization in micro EDM Proc. ASPE, Portland, Oregon, USA, Oct. 55–60 [49] Schwenke H, Neuschaefer-Rube U, Pfeifer J, Kunzmann H 2002 Optical methods for dimensional metrology in production engineering Ann. CIRP 51 685–700 [50] Drabarek P, Gnausch T, Fleischer M 2008 Measuring machines with interferometrical stylus for form measurement of precise mechanical parts Proc. ASPE, Portland, Oregon, USA, Oct. 149–151 [51] Ku ¨ ng A, Meli F 2006 Scanning performance with an ultrprecision m-CMM Proc. 6th Int. euspen Conf., Baden bei Wien, Austria, May–Jun. 418–421 [52] Neuschaefer-Rube U, Neugebauer M, Ehrig W, Bartscher M, Hipert U 2008 Tactile and optical microsensors: test procedures and standards Meas. Sci. Technol. 19 084010 [53] Koops K R, van Veghel M G A, Kotte G J W L 2006 Calibration strategies for scanning probe microscopes Proc. 6th Int. euspen Conf., Baden bei Wien, Austria, May–Jun. 466–469 [54] van Veghel M, Bergmans R H, Niewenkamp H J 2008 Traceability of a linescale based micro-CMM Proc. 8th Int. euspen Conf., Zurich, Switzerland, May 263–268
CHAPTER 10
Mass and force measurement 10.1 Traceability of traditional mass measurement Although the basic comparison method of weighing, and indeed the weights themselves, have not changed much since earliest records, the instruments used and methods of dissemination have.1 The beam balance, which can be traced back at least three thousand years, is still the most accurate way of comparing weights, although the system for sensing the difference between the weights has changed. Opto-electronic and force compensated sensing elements have taken over from conventional optical systems, the most basic of which is the pointer and scale. Weights have always been based on multiples and sub-multiples of naturally occurring physical quantities such as a number of grains of wheat (hence the unit of the grain, one seven thousandth of a pound and the basis of the imperial system of weight). An artefact standard based on a natural quantity (the weight of a cubic decimetre of water) is still used to maintain and disseminate the unit, nowadays on a global rather than a regional scale. The development of the balance as a measurement instrument has seen modifications in the execution of the comparison technique rather than in the technique itself. Current technology offers little improvement in terms of resolution on the best knife-edge balances used during the eighteenth century [1]. For the last eighty years NMIs have been able to make measurements on kilogram weights to a resolution of a few micrograms [2]. Comparisons on such two pan balances were time-consuming and laborious and the limited amount of data produced in turn limited the uncertainties that could be achieved. The recent automation of mass comparators, both in terms of collection of data and the exchange of weights, has allowed many more
1
CONTENTS Traceability of traditional mass measurement Low-mass measurement Low-force measurement References
This section follows on from the introduction to mass given in section 2.4
Fundamental Principles of Engineering Nanometrology Copyright Ó 2010 by Elsevier Inc. All rights reserved.
289
290
C H A P T ER 1 0: Mass and force measurement
comparisons of standards and unknowns to be made. The increase in data collected allows statistical analysis and this, rather than an absolute improvement in the overall resolution or accuracy of the instrument, has led to an improvement in the uncertainty with which the kilogram can be monitored and disseminated. The current state of the art in mass measurement allows the comparison of kilogram weights with a repeatability approaching 1 mg on mass comparators, which can reliably be used on a daily basis. With this frequency of calibration, the stability of the standard weight used as a reference becomes significant not only at the working standards level but also for national standards and for the International Prototype Kilogram itself. For this reason there is interest both in the absolute stability of the unit of the kilogram and in the way it is defined and disseminated.
10.1.1 Manufacture of the Kilogram weight and the original copies After many attempts in France, Johnson Matthey of London made a successful casting of a 90 % platinum 10 % iridium alloy mass standard in 1879. Three cylindrical pieces were delivered to St-Claire Deville metallurgists in France where they were hammered in a press to eliminate voids, rough machined and polished and finally adjusted against the kilogram des archives [3]. One of these kilograms was designated K and became the International Prototype Kilogram. Forty further kilogram weights were produced using the same techniques and delivered in 1884. Twenty of these were allocated to the signatories of the convention of the metre as national standards. The International Prototype Kilogram (commonly known as the (International) Kilogram or just K) is a cylinder of approximate dimensions 39 mm diameter by 39 mm height [4] (see Figure 2.4). The design of the artefact minimizes its surface area while making it easy to handle and machine (a sphere would give the minimum surface area but presents difficulties in manufacture and use). Platinum-iridium was chosen as the material for the kilogram for a number of reasons. Its high density (approximately 21.5 kg$m3) means that the artefact has a small surface area and, therefore, the potential for surface contamination is minimized. The relatively inert nature of the material also minimizes surface contamination and enhances the mass stability of the artefact. The high density of the material also means that it displaces a smaller amount of air than a kilogram of less dense material (stainless steel or brass for example). The weight-in-air of the kilogram (or any mass standard) depends on the density of the air in which it is weighed because the air (or any fluid in which it is weighed) exerts a buoyancy effect proportional to the volume of the artefact.
Traceability of traditional mass measurement
Minimizing the volume of the weight minimizes the effect of changing air density on the weight of the artefact. Platinum and its alloys are reasonably easy to machine [5], enabling a good surface finish to be achieved on the artefact, again reducing the effect of surface contamination. The addition of 10 % iridium to the platinum greatly increases its hardness and so reduces wear.
10.1.2 Surface texture of mass standards The surface texture of the kilogram standards has a major effect on their stability. Early copies of the International Prototype (and the Kilogram itself) were finished by hand polishing using gradually finer polishing grains, concluding finally by polishing with a grain diameter of 0.25 mm [6]. More recent copies (since 1960) have been diamond-turned, producing a visibly better finish on the surface. Measurements using coherence scanning interferometry have shown typical surface roughness (Ra) values of 65 nm to 85 nm for hand-polished weights, compared with 10 nm to 15 nm achieved by diamond turning [7].
10.1.3 Dissemination of the kilogram The BIPM is responsible for the dissemination of the unit of mass worldwide. Dissemination is achieved via official copies of the International Prototype Kilogram, known as national prototypes, held by all countries that are signatories to the Metre Convention. These are periodically compared, at the BIPM, with the International Prototype. The official copies of the kilogram are, like the original, made of platinum-iridium alloy and the final machining and adjustment is done at BIPM. At present there are approximately ninety official copies of the kilogram. Periodic verification of the national kilogram copies takes place approximately every ten years [8]. Each time the national copies are returned to the BIPM they are cleaned and washed by a process known as nettoyage-lavage [9], which theoretically returns them to a reference value. All kilograms, including the International Prototype, are subject to nettoyage-lavage prior to the periodic verification exercise. The BIPM justify the use of this cleaning process because of the wide spread in the contamination levels of the returning national prototypes and the need to return K to its reference value. Surface contamination varies between national copies and ranges from those which are not used at all (some are returned to the BIPM with the seal on the container still intact from the last verification) to those that are used on a regular basis and have collected many tens of micrograms worth of accreted material on their surfaces.
291
292
C H A P T ER 1 0: Mass and force measurement
10.1.4 Post nettoyage-lavage stability Although the gravimetric effects of the nettoyage-lavage process have been studied by various NMIs [8,10,11] and the (variable) reproducibility of the method is documented, no work has been done to link the actual effect on the surface of the weight (measured by a reliable surface analysis technique) with either the mechanical cleaning method or the observed weight loss. Furthermore, while the BIPM has made studies of the mass gain over the first three months after cleaning based on the behaviour of all the national prototypes, the return of the prototypes to their NMIs after this period means no longer-term studies have been made. Only an NMI with at least three other platinum-iridium kilograms, against which the stability of the national prototype could be monitored, would be able to carry out such work and even so the stability of the other three kilograms would affect the results. Due to the lack of data on the stability of national standards after returning from BIPM (approximately three to four months after cleaning and so relatively unstable) a wide variety of algorithms are used to predict the longer-term mass gain of the kilogram standards. Some algorithms are expressed as a function of time; for example, NPL has used the following expression to predict the value of kilogram 18 after cleaning at the BIPM Mass18 ¼ 1 kg þ DV þ 0:356097t0:511678 mg (10.1) where DV is the measured difference from nominal in micrograms directly after cleaning (as measured by the BIPM) and t is the time after cleaning in days. The most commonly used algorithm is that the national standard has the value assigned on leaving BIPM (approximately three months after cleaning) plus 1 mg per year. Some NMIs modify this by using a 0.22 mg per month gain for the first two years. Other NMIs assume that their national kilogram is perfectly stable on return from the BIPM and the mass gain is zero.
10.1.5 Limitations of the current definition of the kilogram

The kilogram is unique among the seven base SI units in that it is the only one still defined in terms of a physical artefact. As an artefact definition, its realization and dissemination present a unique set of practical problems. While the theoretical uncertainty associated with the value of K is zero (it is, by definition, exactly 1 kg), the practical accuracy with which the kilogram can be realized is limited by the stability of the artefact and the repeatability of the nettoyage-lavage cleaning process. Although the BIPM monitors the
stability of K against a number of official copies that it keeps, the practical limit of the uncertainty in its value is about 2 µg. Additionally, the value of platinum-iridium kilograms has been seen to drift by up to 2 µg per year, although K is undoubtedly more stable than this. The fact that one artefact provides traceability for the entire world-wide mass scale also presents difficulties. The calibration of the national prototypes presents a problem for the BIPM as it involves a large number of measurements. The use of the nettoyage-lavage cleaning process to return the kilograms to a 'base value' is not only time-consuming and arduous in itself but greatly increases the number of weighings that must be made on the artefacts. Values of the kilograms before and after cleaning are calculated, as is the weight gain of the kilograms immediately after the cleaning process, from measurements made over a period of several weeks. Thus, not only is the workload of the BIPM very high, but the national prototype kilograms are not available to their NMIs for up to six months. Most NMIs around the world hold only one official copy of the kilogram, and thus their entire national mass measurement system depends on the value of their national prototype. This means that the handling and storage of this weight is very important: any damage means it would at the very least have to be returned to the BIPM for re-calibration and at worst be replaced.
10.1.6 Investigations into an alternative definition of the kilogram

For the last twenty years there has been a considerable amount of work looking for an alternative, more fundamental, definition of the SI unit of the kilogram [12]. This work has been driven by two main considerations. The first is the set of limitations in the stability, realization and dissemination of the kilogram discussed in section 2.4. The other reason for the re-definition work currently being performed is the perception of a definition based on an artefact as 'low tech' when compared with the definitions of the other six SI base units. For this reason, the approaches to a fundamental re-definition have in some ways been forced, rather than being logical solutions to the problem. The other base units have simpler definitions based on one measurement (such as the wavelength of light for the metre), whereas all of the current proposals for the re-definition of the kilogram involve a number of complicated measurements. In the same way, the timescale for the re-definition of the other base units was set by the discovery of a suitable phenomenon or piece of equipment (for example
the laser used to define the metre). A similar method for the re-definition of the kilogram has yet to be found. At present there are four main methods being investigated with a view to providing a new fundamental definition for the SI unit of the kilogram. Even from the brief descriptions of the four approaches given in sections 10.1.6.1 to 10.1.6.4, it can be seen that the present approaches to the re-definition involve a number of demanding measurements. Almost all of these measurements must be performed at uncertainties that represent the state of the art (and in some cases much better than those currently achievable) to realize the target overall uncertainty of one part in 10⁸ set for this work. The absolute cost of the equipment also means that the ultimate goal of all NMIs being able to realize the SI unit of the kilogram independently will, on purely financial grounds, not be achievable. All four approaches require traceability to a mass in vacuum, both for their initial determination and for dissemination. The significance of the work described in this book, therefore, extends not only to improving knowledge of the stability of the current definition of the kilogram but also to facilitating the practical use of any of the currently considered methods of re-definition.
10.1.6.1 The Watt balance approach

The first proposed re-definition of the kilogram was via the watt. Bryan Kibble of NPL proposed using the current balance [13], formerly used to define the ampere, to relate the kilogram to a value for Planck's constant. The fundamental measurements necessary for the definition of the kilogram by this method are the volt (via the Josephson junction) and the ohm (via the quantized Hall effect). Measurements of length, time and the acceleration due to gravity are also necessary. There are currently three NMIs working on the Watt balance project: NPL [14], NIST [15] and METAS in Switzerland [16].
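The underlying principle can be illustrated with a minimal numerical sketch: the coil's geometric factor is determined from the induced voltage while moving at a known velocity, and the mass follows from the current needed to balance its weight. All of the numbers below are invented for illustration; they are not values from any real instrument.

```python
# Sketch of the watt balance principle: in the moving phase the induced voltage U
# at velocity v gives the coil's geometric factor Bl = U / v; in the weighing phase
# the current I needed to support the mass satisfies m*g = Bl*I, so m = U*I/(g*v).
# All values are illustrative only.

U = 1.018          # induced voltage in the moving phase (V), traced via the Josephson effect
v = 2.0e-3         # coil velocity (m/s), traced to length and time standards
I = 9.65e-3        # weighing-phase current (A), traced via the quantized Hall resistance
g = 9.81234        # local acceleration due to gravity (m/s^2), from a gravimeter

Bl = U / v                 # geometric factor of the coil (T*m)
mass = Bl * I / g          # mass balanced against the electromagnetic force (kg)
print(f"Inferred mass: {mass:.6f} kg")
```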
10.1.6.2 The Avogadro approach

The Avogadro project will define a kilogram based on a fixed number of atoms of silicon [17,18]. The mass of a sphere of silicon will be related to its molar mass and the Avogadro constant by the following equation

m = (M_m V) / (N_A v_0)     (10.2)

where m is the calculated mass of the sphere, M_m is the molar mass of the silicon isotopes measured by spectrometry, N_A is the Avogadro constant, V is the volume of the sphere measured by interferometry and v_0 is the volume occupied by a silicon atom.
To calculate v_0, the lattice spacing of a silicon crystal must be measured by x-ray interferometry [19] (see section 5.7.2). The practical realization of this definition relies on the calculation of a value for N_A from an initial value for the mass of the sphere [20]. This value is then fixed and used subsequently to give values for the mass of the sphere, m. An added complication with this definition is the growth of oxides of silicon on the surface of the spheres. The thickness of the oxide layer needs to be monitored (probably by ellipsometry) and used to correct the value of the mass, m.
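The following short sketch evaluates equation (10.2) for a nominal silicon sphere. The sphere diameter, molar mass and lattice parameter are nominal, illustrative values for natural silicon (eight atoms per cubic unit cell), not results from the Avogadro project.

```python
# Sketch of the Avogadro relation (equation 10.2) for a silicon sphere.
# All input values are nominal and illustrative.
import math

N_A = 6.02214076e23      # Avogadro constant (1/mol)
a = 5.431e-10            # silicon lattice parameter (m); 8 atoms per cubic unit cell
v0 = a**3 / 8            # volume occupied by one silicon atom (m^3)
M_m = 28.0855e-3         # molar mass of natural silicon (kg/mol)

diameter = 93.6e-3                              # sphere diameter (m), measured by interferometry
V = (4.0 / 3.0) * math.pi * (diameter / 2)**3   # sphere volume (m^3)

m = M_m * V / (N_A * v0)                        # equation (10.2)
print(f"Calculated sphere mass: {m:.4f} kg")
```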
10.1.6.3 The ion accumulation approach

A third approach to the re-definition of the kilogram involves the accumulation of a known number of gold atoms [21,22]. Ions of Au-197 are released from an ion source into a mass separator and accumulated in a receptor suspended from a mass comparator. The number of ions collected is related to the current required to neutralize them, supplied by an irradiated Josephson junction voltage source. The mass of ions, M, is then given by the equation

M = (n_1 n_2 m_a / 2) ∫₀ᵗ f(t) dt     (10.3)

where n_1 and n_2 are integers, m_a is the atomic mass of gold, f(t) is the frequency of the microwave radiation irradiated onto the Josephson junction, and m_a = 197 u for the gold isotope Au-197, where u is the atomic mass unit (equal to 1/12 of the mass of a carbon-12 atom).
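For a constant microwave frequency the integral in equation (10.3) reduces to f·t, which allows a very simple numerical sketch of the accumulated mass. The integers n_1 and n_2, the frequency and the accumulation time below are purely illustrative choices, not parameters of the actual experiment.

```python
# Sketch of the ion-accumulation relation (equation 10.3) with a constant
# microwave frequency f, so the integral of f(t) reduces to f * t.
# n1, n2, f and t are illustrative values only.

u = 1.66053906660e-27    # atomic mass unit (kg)
m_a = 197 * u            # atomic mass of Au-197 (kg)

n1, n2 = 60, 30          # integers relating the Josephson voltage and current steps
f = 70.0e9               # microwave frequency irradiating the Josephson junction (Hz)
t = 1.0e5                # accumulation time (s)

M = 0.5 * n1 * n2 * m_a * f * t   # equation (10.3) with constant f
print(f"Accumulated mass of gold ions: {M * 1e6:.1f} mg")
```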
10.1.6.4 Levitated superconductor approach

As with the Watt balance approach, the levitated superconductor method relates the unit of the kilogram to electrical quantities defined from the Josephson and quantized Hall effects [23]. A superconducting body is levitated in a magnetic field generated by a superconducting coil. The current required in the superconducting coil is proportional to the load on the floating element and so defines a mass (for the floating element) in terms of the current in the superconducting coil [24–26].
10.1.7 Mass comparator technology

From the earliest days of mass calibration, measurements have been made by comparison, each weight or quantity being compared with a standard of theoretically better accuracy. A series of comparisons thus allows all measurements eventually to be related back to a primary standard, whether a naturally occurring standard (such as a grain of
wheat) or an artefact standard such as the current International Prototype Kilogram. Until recently these comparisons have been performed using two-pan balances. From the earliest incarnations to the present day the technology has relied on a balance beam swinging about a pivot, normally at the centre of the beam. The mechanical quality of the beam, and in particular of the pivot, has been refined until modern two-pan mechanical balances are capable of resolutions of the order of one part in 10⁹, equivalent to 1 µg on a 1 kg mass standard.
10.1.7.1 The modern two-pan mechanical balance

Two-pan balances consist of a symmetrical beam and three knife-edges. The two terminal knife-edges support the pans and a central knife-edge acts as a pivot about which the beam swings. Two-pan balances are generally undamped, with a rest point being calculated from a series of turning points. Some balances incorporate a damping mechanism (usually mechanical or magnetic) to allow the direct reading of a rest point. Readings from two-pan balances tend to be made using a simple pointer and scale, although some use more complicated optical displays. In all cases the reading in scale units needs to be converted into a measured mass difference. Capacities of such balances range from a few grams up to several tonnes. The resolution of the smaller balances is limited to the order of 1 µg by the accuracy with which the central knife-edge can be polished.
10.1.7.2 Electronic balances

Electronic balances are usually top-loading balances with the applied load being measured by an electromagnetic force compensation unit or a strain gauge load cell. Single-pan electronic balances give a direct reading of the weight applied, whereas the other two mechanical balance types rely on the comparison of two forces (an unknown weight with either an external or an internal weight). Despite the possibility of using these balances as direct reading devices (applying an unknown weight and taking the balance reading as a measure of its mass), single-pan electronic balances will always perform better when used as comparators, comparing a standard (A) and an unknown (B) in an ABA or ABBA sequence. Since the definition of the unit of mass is currently realised at the 1 kg level, the development of 1 kg electronic balances and mass comparators represents the current state of the art: 1 kg mass standards can be compared to a resolution of one part in 10¹⁰ and with an accuracy approaching 1 µg.
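The ABBA sequence mentioned above can be illustrated with a minimal sketch: the mass difference between standard and unknown is taken from drift-compensated comparator readings. The readings and the standard's value below are invented for illustration; a real calibration would also apply air buoyancy and sensitivity corrections.

```python
# Sketch of an ABBA comparison on a single-pan mass comparator.
# All readings and values are illustrative.

standard_value = 1000.000012  # calibrated value of standard A (g)
# comparator readings (g) in the order A, B, B, A
r_a1, r_b1, r_b2, r_a2 = 0.000154, 0.000231, 0.000229, 0.000150

# drift-compensated difference B - A from the ABBA sequence
delta = (r_b1 + r_b2) / 2 - (r_a1 + r_a2) / 2
unknown_value = standard_value + delta
print(f"B - A = {delta * 1e6:.1f} µg; unknown B = {unknown_value:.9f} g")
```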
10.2 Low-mass measurement

At loads of less than 1 kg the sensing technology does not improve significantly and resolution is limited to about 0.1 µg. Additionally, the process of sub-dividing the kilogram mass standard introduces significant uncertainties that increase as the mass moves further from 1 kg. Traditionally there has not been a large demand for weighing quantities at the milligram level and below to accuracies better than a few tenths of 1 %. This, coupled with the uncertainties introduced by the sub-division process and the relative instability of milligram mass standards, has limited the development of weighing technology in this area. Equally, there has been no real drive to extend the mass scale below its traditional limit of 1 mg, as weights at this level become very difficult to manufacture and handle (see section 2.4). Recently, however, demands from the aerospace, pharmaceutical, microfabrication, environmental monitoring and low-force measurement areas have led to increased research into the lower limits of the mass scale. Traditional mass standards of metal wire have been manufactured with values down to a few tens of micrograms. These have been calibrated using existing microbalance technology to relative accuracies of a few per cent. Traceability is taken from kilogram mass standards by a process of sub-division. For mass standards below this level the physical size of wire weights becomes too small for easy handling. However, the use of particulates may provide a way forward for microgram and nanogram mass standards, with traceability being provided by density and dimensional measurements.
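The particulate route simply converts traceable density and dimensional measurements into a mass value. The sketch below does this for a single spherical particle; the diameter and density are illustrative values only.

```python
# Sketch of deriving a sub-milligram mass value from density and dimensional
# measurements of a spherical particle, as suggested for microgram and
# nanogram standards. The particle diameter and density are illustrative.
import math

diameter = 50e-6          # measured particle diameter (m)
density = 7800.0          # measured material density (kg/m^3)

volume = (4.0 / 3.0) * math.pi * (diameter / 2) ** 3
mass = density * volume
print(f"Particle mass: {mass * 1e9:.3f} µg")
```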
10.2.1 Weighing by sub-division

Sub-division is used for the most demanding mass calibration applications. It involves the use of standards of one or more values to assign values to weights across a wide range of mass values. A typical example would be to use two or three 1 kg standards to calibrate a 20 kg to 1 mg weight set. Equally, a 1 kg and a 100 g standard could be used for such a calibration. Weighing by sub-division is most easily illustrated by considering how values would be assigned to a weight set using a single standard; in reality the weighing scheme would be extended to involve at least two standards. The standard is compared with any weights from the set of the same nominal value and also with various combinations of weights from the set that sum to the same nominal value. A check-weight, which is a standard treated in the same manner as any of the test-weights, is added in each decade of the calibration so that the values assigned to the weight set can be verified.
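A minimal numerical sketch of such a scheme is given below: deviations from nominal of a 500 g, two 200 g and a 100 g weight are solved by least squares from comparator differences against a calibrated 1 kg standard and a calibrated 100 g check-weight. The weighing design and all observation values are invented for illustration; real sub-division schemes use more observations and a full uncertainty evaluation.

```python
# Minimal sketch of weighing by sub-division (illustrative design and data only).
import numpy as np

s_dev = 0.10       # known deviation of the 1 kg standard from nominal (mg)
check_dev = 0.02   # known deviation of the 100 g check-weight (mg)

# unknowns: deviations from nominal (mg) of [w500, w200a, w200b, w100]
A = np.array([
    [1,  1,  1,  1],   # (500 + 200a + 200b + 100) compared with the 1 kg standard
    [1, -1, -1, -1],   # 500 compared with (200a + 200b + 100)
    [0,  1, -1,  0],   # 200a compared with 200b
    [0,  0,  1, -1],   # 200b compared with (100 + check 100)
    [0,  1,  0, -1],   # 200a compared with (100 + check 100)
], dtype=float)

d = np.array([-0.01, 0.01, 0.04, -0.05, -0.01])            # measured differences (mg)
y = d + np.array([s_dev, 0.0, 0.0, check_dev, check_dev])  # add the known standards back in

x, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, dev in zip(["500 g", "200 g (a)", "200 g (b)", "100 g"], x):
    print(f"{name}: {dev:+.3f} mg from nominal")
```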
10.3 Low-force measurement

10.3.1 Relative magnitude of low forces

A full derivation of the surface interaction forces significant at the MNT scale is beyond the scope of this book, and has indeed been presented by various groups previously. Nevertheless, the basic force-separation dependencies are worth considering, and a selection is presented in Table 10.1. Equations obtained from the referenced works have, where necessary, been adapted to use common nomenclature. To simplify comparison, the interaction of a sphere and a flat plate is considered where possible; since the tips of most probes can be adequately modelled as a (hemi-)sphere, this is a suitable approach. The sphere-plate separation is assumed to be much less than the sphere radius. Figure 10.1 is a comparative plot using typical values for the given parameters. Section 7.3.7 also discusses surface forces in terms of the atomic force microscope.
10.3.2 Traceability of low-force measurements

Traceability for force measurement is usually established by comparison with a calibrated mass in a known gravitational field (see section 2.4). However, as the forces (and hence masses) being measured decrease below around 10 µN (approximately equivalent to 1 mg), the uncertainty in the mass measurement becomes too large and the masses become difficult to handle.
Table 10.1 Summary of surface interaction force equations

In these equations F is a force component, U the work function difference between the materials, D the sphere-flat separation, γ the free surface energies at state boundaries, H the Hamaker constant and θ the contact angle of the in-interface liquid on the opposing solid surfaces. In the capillary force the step function u(.) describes the breaking separation; e is the liquid layer thickness and r the radius of meniscus curvature in the gap.

Interaction       Equation
Electrostatic     F = ε₀U²πR²/D²  [27]
Capillary         F = 4πγR(1 − h/2e)·u(h + L)  [27,28]
Van der Waals     F = HR/6D² (non-retarded, attractive forces)  [29]
Casimir effect    F = π³Rħc/360D³  [30]
FIGURE 10.1 Comparative plot of the described surface interaction forces, based on the following values: R = 2 µm; U = 0.5 V; γ = 72 mJ·m⁻²; H = 10⁻¹⁸ J; e = r = 100 nm. Physical constants take their standard values: ε₀ = 8.854 × 10⁻¹² C²·N⁻¹·m⁻²; ħ = 1.055 × 10⁻³⁴ m²·kg·s⁻¹ and c = 3 × 10⁸ m·s⁻¹.
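A short sketch in the spirit of Figure 10.1 is given below. It evaluates three of the tabulated sphere-plate force expressions (electrostatic, non-retarded van der Waals and Casimir) using the parameter values quoted above; the capillary term is omitted because it also depends on the liquid-layer geometry. The separations chosen are arbitrary illustrative values.

```python
# Sketch evaluating three of the sphere-plate force expressions from Table 10.1
# at a few separations, using the parameter values quoted for Figure 10.1.
import math

eps0 = 8.854e-12      # permittivity of free space (C^2 N^-1 m^-2)
hbar = 1.055e-34      # reduced Planck constant (J s)
c = 3.0e8             # speed of light (m/s)

R = 2e-6              # sphere (tip) radius (m)
U = 0.5               # contact potential / work function difference (V)
H = 1e-18             # Hamaker constant (J)

for D in (1e-9, 10e-9, 100e-9):                 # sphere-plate separations (m)
    f_el = eps0 * U**2 * math.pi * R**2 / D**2  # electrostatic
    f_vdw = H * R / (6 * D**2)                  # non-retarded van der Waals
    f_cas = math.pi**3 * R * hbar * c / (360 * D**3)  # Casimir
    print(f"D = {D*1e9:5.0f} nm: electrostatic {f_el:.2e} N, "
          f"van der Waals {f_vdw:.2e} N, Casimir {f_cas:.2e} N")
```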
For this reason it is more common to have a force balance that gains its traceability through electrical and length measurements. The current force traceability route is at least a two-stage process. The first stage is to develop a primary force standard instrument deriving traceability directly from the base unit definitions realized at the world's NMIs. These primary instruments will typically sacrifice practicality in order to obtain the best possible metrological performance. Various groups have developed such instruments, with the current best performance held by examples at NIST, PTB and NPL. The second stage in the traceability route is to design a transfer artefact, or sequence of artefacts, to transfer the force calibration to target instruments in the field. These artefacts may sacrifice uncertainty, resolution or range of force measurement in exchange for cost reductions, portability or compliance with other physical constraints, such as size or environmental tolerance.
10.3.3 Primary low-force balances

The leading examples of force measurement instruments operating in the millinewton to nanonewton range are based on the electrostatic force balance principle. The force to be measured is exerted on a flexure system, which deflects. This deflection is measured using an interferometer. The deflection of the flexure also changes the capacitance of a set of parallel capacitor plates in the instrument. This is usually achieved either by
changing the plate overlap, or by changing the position of a dielectric, with flexure deflection. In this way the capacitance changes linearly with deflection. The interferometer signal is used in a closed-loop controller to generate a potential difference across the capacitor, producing an electrostatic force that servos the flexure back to zero deflection. Measurement of the force exerted is derived from traceable measurements of length, capacitance and potential difference. The exerted force is calculated using equation (10.4), in which z is the flexure displacement, and C and V are the capacitance of and voltage across the parallel plates respectively. The capacitance gradient, dC/dz, must be determined prior to use.

F = ½ V² (dC/dz)     (10.4)
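As a simple illustration, equation (10.4) can be evaluated directly once the capacitance gradient is known; the gradient and nulling voltage below are illustrative values only.

```python
# Sketch of the electrostatic force balance relation (equation 10.4).
# The capacitance gradient and nulling voltage are illustrative values.

dC_dz = 1.0e-9        # capacitance gradient of the plates (F/m), measured beforehand
V = 10.0              # potential difference applied to null the deflection (V)

F = 0.5 * V**2 * dC_dz
print(f"Balanced force: {F * 1e6:.3f} µN")
```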
The first electrostatic force balance designed primarily with traceability for low-force measurements in mind was developed at NIST [31]. Subsequently, balances have been developed at the Korea Research Institute of Standards and Science (KRISS) [32], PTB [33] and NPL [34]. The NPL balance will be discussed in some detail as an example and is shown schematically in Figure 10.2.

FIGURE 10.2 Schema of the NPL low-force balance.

A vertical force applied to the platen displaces the connected flexure and dielectric. This displacement, measured
by a plane mirror differential interferometer (see section 5.2.6), is used by a control system to create a deflection-nulling feedback force. The feedback force is generated by a potential difference across a system of vertically oriented capacitor plates, V in equation (10.4), and acts vertically on the moving dielectric vane.
10.3.4 Low-force transfer artefacts

Due to the size of the primary low-force balances and their associated instrumentation, their requirement for vibration isolation and their sensitivity to changes in orientation, it is not possible to connect anything but small items to the balance for force measurement. From this, and from the logistics of moving each target instrument to the balance's vicinity, stems the need for transfer artefacts.
10.3.4.1 Deadweight force production

The most intuitive method of force production makes use of the Earth's gravitational field acting on an object of finite mass: a deadweight. Deadweights have traditionally been, and still are, used routinely for maintaining force traceability in the millinewton to meganewton range (see section 2.5). However, below about 10 µN, at the higher end of the low-force balance (LFB) scale, handling difficulties, contamination and independent testing issues lead to high relative uncertainties in weight measurement. The trend is for the relative uncertainty to increase in inverse proportion to the decrease in mass. Deadweights are therefore unsuitable for use as transfer artefacts, although they remain useful for comparison purposes at the higher end of the force scale of typical LFBs [35].
10.3.4.2 Elastic element methods

Apart from gravitational forces from calibrated masses, the next most intuitive and common technology for calibrated force production is an elastic element with a known spring constant. The element, such as a cantilever or helical spring, is deflected by a test force. The deflection is measured, either by an external system such as an interferometer, or by an on-board MEMS device such as a piezoelectric element. With the spring constant previously determined by a traceable instrument such as an electrostatic force balance, the magnitude of the test force can be calculated. In this way a force calibration is transferred. Several examples of elastic elements use modified AFM cantilevers, as these are of an appropriate size and elasticity, have a simpler geometry than custom designs and are thus more reliably modelled, and are generally well
FIGURE 10.3 Experimental prototype reference cantilever array – plan view.
understood by those working in the industry. Very thin cantilevers, the manufacture of which is now possible, have low enough spring constants to allow, in principle, force measurement at the nanonewton level. The calibration of the spring constant of an AFM cantilever is discussed in section 7.3.6. Other elastic element methods that are not necessarily AFM-specific are described here. In order to provide suitable performance across a working range, a single spring constant is usually insufficient, so it is common to design devices containing elements with a range of spring constants. This may be achieved in two ways with cantilever arrangements: either an array of cantilevers with attached probes or single defined probing points is used, or one cantilever with multiple defined probing points is used. An example of the former, called an 'array of reference cantilevers', has been developed at NIST [36] and is shown in Figure 10.3. The arrays, microfabricated from single-crystal silicon, contain cantilevers with estimated nominal spring constants in the range 0.02 N·m⁻¹ to 0.2 N·m⁻¹. Variations in resonant frequency of less than 1 % are reported for the same cantilevers across manufactured batches, as an indication of uniformity. The spring constants were verified on the NIST electrostatic force balance. Cantilever arrays are commercially available for non-traceable AFM calibration; however, their route to traceability puts a much lower ceiling on their accuracy and the uncertainties specified. As the simple devices described in this section are passive, they need to be pushed into an LFB by an actuator system, and some external means of measuring deflection is required. This second requirement is significant as it relies on the displacement metrology of the target instrument. The working uncertainty of these devices is higher than that of active-type cantilevers, and they may be better calibrated by such an active-type artefact.
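Once the spring constant of such a reference element has been calibrated, transferring a force value is a matter of multiplying the calibrated stiffness by the measured deflection. The sketch below does this, including a very simple combination of the two relative uncertainties; all numbers are illustrative.

```python
# Sketch of force transfer with a calibrated elastic element: the spring
# constant is assumed to have been measured on a primary force balance and
# the deflection by the target instrument's own metrology. Values illustrative.
import math

k = 0.05              # calibrated spring constant of the reference cantilever (N/m)
u_k = 0.001           # standard uncertainty of the spring constant (N/m)
z = 40e-9             # measured deflection of the cantilever (m)
u_z = 0.5e-9          # standard uncertainty of the deflection (m)

F = k * z
u_F = F * math.hypot(u_k / k, u_z / z)   # simple relative-uncertainty combination
print(f"Transferred force: {F*1e9:.2f} nN ± {u_F*1e9:.2f} nN")
```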
The alternative to the arrays of high-quality passive cantilevers discussed above is a single cantilever with on-board deflection metrology. These can be used to calibrate target instruments or indeed cheaper, lower-accuracy, disposable transfer artefacts. One of the first examples of an AFM probe with on-board piezoresistive deflection sensing is discussed in [37]. The device was fabricated as a single piezoresistive strain element with a pointed-tip cantilever geometry. The researchers claim a 0.01 nm vertical resolution, which is equivalent to 1 nN with a spring constant of 10 N·m⁻¹ for this proof-of-concept device. A number of piezoresistive cantilevers have been developed by several NMIs. NPL has developed the C-MARS (cantilever microfabricated array of reference springs) device as part of a set of microfabricated elastic element devices intended for traceable AFM calibration [38]. The relatively large cantilever (150 µm wide by 1600 µm long) is marked with fiducials that in principle allow precise alignment of the contact point for a cantilever-on-cantilever calibration. The size of the fiducials is influenced by the 100 µm by 100 µm field of view of typical AFMs. Surface piezoresistors near the base of the cantilever allow the monitoring of displacement and vibrations of the cantilever, if required. Detail of the device is shown in Figure 10.4. Spring constants are quoted for interaction at each fiducial, providing a range of 25 N·m⁻¹ to 0.03 N·m⁻¹. NIST has also developed a cantilever
FIGURE 10.4 Images of the NPL C-MARS device, with detail of its fiducial markings; the 10 µm oxide squares form a binary numbering system along the axis of symmetry.
device that has thin legs at the root, to concentrate bending in this root region, and fiducial markings along its length [39]. Researchers at PTB have created a slightly larger piezoresistive cantilever, of one millimetre width by a few millimetres length, for use in nanoindentation and surface texture work [40]. PTB has also created a two-leg sphere-probe example and a single-leg tip-probe example. The prototypes, manufactured using standard silicon bulk micromachining technology, have a stiffness range of 0.66 N·m⁻¹ to 7.7 N·m⁻¹. A highly linear relationship between the gauge output voltage and the probing force in the micronewton range has been reported. In continuous scanning mode, the probing tip of a piezoresistive cantilever, such as the NIST device, may be moved slowly down the cantilever beam, with beam deflection and external force values regularly recorded. Notches with well-defined positions show up as discontinuities in the recorded force-displacement curve, and act as a scale for accurate probe-tip position determination from the data. The result is a function that describes the spring constant of the transfer artefact, after probing with an LFB. For interaction with an electrostatic force balance operating in position-nulled mode, such a device needs to be pushed into the balance tip.
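The reason a single beam with multiple probing points can cover such a wide stiffness range is that, for a simple rectangular cantilever, the end-load stiffness at a probing distance x from the root scales as 1/x³ (k = Ewt³/4x³ from elementary beam theory). The sketch below illustrates this scaling; the dimensions and modulus are illustrative values, not those of the NPL or NIST devices.

```python
# Sketch of the position dependence of a rectangular cantilever's stiffness,
# k(x) = E*w*t^3 / (4*x^3), which is why one beam with several fiducials can
# span a wide spring-constant range. All dimensions are illustrative.

E = 169e9            # Young's modulus of silicon along the beam (Pa), illustrative
w = 150e-6           # beam width (m)
t = 3e-6             # beam thickness (m), illustrative

for x_um in (200, 400, 800, 1600):          # probing position from the root (µm)
    x = x_um * 1e-6
    k = E * w * t**3 / (4 * x**3)
    print(f"probing at {x_um:4d} µm: k = {k:8.3f} N/m")
```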
10.3.4.3 Miniature electrostatic balance methods

NPL has developed a novel comb-drive device for force calibration. One example, the 'Electrical Nanobalance' device [41,42], is shown in Figure 10.5. A vertical asymmetry in the fields generated in a pair of comb drives levitates a landing stage against an internal elastic element. Measurements of the driving electrical signal and the resultant deflection lead to a spring constant value potentially traceable to the SI. At end-use, the device becomes a passive, calibrated, elastic device requiring no electrical connections and producing no interacting fields. The authors report a landing-stage centre-point spring constant of 0.195 N·m⁻¹ ± 0.01 N·m⁻¹ and suitability for the calibration of AFM cantilevers in the range 0.03 N·m⁻¹ to 1 N·m⁻¹. The device, calibrated dynamically, must be operated in vacuum to avoid dust contamination of the key working elements. A similar technique is used in NPL's Lateral Electrical Nanobalance, designed to measure lateral forces such as friction in AFM [43].
10.3.4.4 Resonant methods

Changes in the tension of a stretched string can be detected via the related changes in its resonant frequency. If a force is exerted on one of the string anchor points along the string axis, the tension in the string will decrease. For a well-characterized string the force exerted can be calculated from an
FIGURE 10.5 Computer model of the NPL Electrical Nanobalance device. The area shown is 980 µm × 560 µm. Dimensions perpendicular to the plane have been expanded by a factor of twenty for clarity.
accurate determination of the frequency shift. In this way a low-force measurement device is created. One example of a resonance force sensor is the 'nanoguitar' [44], shown schematically in Figure 10.6. Operating in vacuum, an AFM tip is pressed against the sample cantilever, changing the tension in the oscillating string. The beam is required to be soft compared to the string in order to transmit the interaction force, improving sensitivity. The set-up allows micrometres of string oscillation amplitude without significant parasitic oscillation of the connected cantilever beam. The prototype used a carbon fibre with a diameter of 5 µm and a length of 4 mm, oscillating at 4 kHz. As the string tension is decreased, the force sensitivity rises but the response time drops. The force resolution is limited by thermal noise in the string oscillation. The authors report a force resolution of 2.5 nN, achieved in vacuum for a response time of 1 ms and a sensor stiffness of 160 N·m⁻¹. The sensor performance was limited by a low Q-factor and required precise fibre tension adjustments. Vibration damping was significant because the string was glued to the cantilever. Initial tension was set by sliding one anchor relative to the other using a stick-slip mechanism. The double-ended tuning fork concept forms an alternative high-sensitivity force sensor, and has been studied by various groups. In one example [45] a vertical force acting on a sample cantilever beam changes
FIGURE 10.6 Schema of a resonant force sensor – the nanoguitar.
the resonant frequency of the fork 'prong' beams. The beams are vibrated by an external electromagnet and the amplitude is measured with a laser Doppler velocimeter. The monolithically manufactured system has an experimentally determined minimum detectable force of 19 µN, with a theoretical value as low as 0.45 µN. An attempt has also been made to create a tuneable carbon nanotube electromechanical oscillator whose motion is both excited and detected using the electrostatic interaction with the gate electrode underneath the tube [46]. The advantages of the nanotube are highlighted: nanotubes are made of the stiffest material known, have low densities, ultra-small cross-sections and can be defect-free. The group report that, despite great promise, they have as yet failed to realise a room-temperature, self-detecting nanotube oscillator due to practical difficulties; for example, the adhesion of the nanotube to the electrodes inevitably reduces the device's quality factor by several orders of magnitude.
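The string-tension relation underlying the nanoguitar can be sketched numerically: for a stretched string the fundamental frequency is f = √(T/µ)/2L, so a small change in axial force ΔT produces a fractional frequency shift Δf/f = ΔT/2T. The fibre length and diameter below follow the prototype values quoted above, but the fibre density and the frequency readings are assumed, illustrative numbers.

```python
# Sketch of resonant (nanoguitar-style) force sensing via the string relation
# f = sqrt(T/mu)/(2L), hence dT = 2*T*df/f for a small shift df.
# Fibre density and frequency readings are illustrative assumptions.
import math

L = 4e-3                         # string length (m)
diameter = 5e-6                  # fibre diameter (m)
density = 1800.0                 # fibre density (kg/m^3), assumed value
mu = density * math.pi * (diameter / 2) ** 2   # linear mass density (kg/m)

f0 = 4000.0                      # measured resonant frequency (Hz)
T = mu * (2 * L * f0) ** 2       # string tension (N)

df = -0.5                        # measured frequency shift after loading (Hz)
dT = 2 * T * df / f0             # corresponding change in axial force (N)
print(f"Tension {T*1e6:.2f} µN, force change {dT*1e9:.2f} nN")
```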
10.3.4.5 Further methods and summary

There are many other physical force production and measurement phenomena that can be used to realize low forces. Many of these methods can be very impractical and difficult to set up. Examples are simply listed here; further details can be found in the references provided:

- radiation pressure [47];
- Van der Waals [48] and Casimir effects [49];
- biochemical and protein manipulation [50–52];
- fluid flow and capillary forces [53,54];
- counting of flux quanta [55].
Table 10.2 lists the advantages and disadvantages of the methods for low force production and measurement described in this book.
Table 10.2 Advantages and disadvantages of low-force production and measurement methods

Deadweight forces
- Advantages: Straightforward use. Need only a reliable lifting mechanism and correct material choice. No development.
- Disadvantages: Handling uncertainties.

Elastic element methods
- Advantages: Simple, well-established technology. Focus on ensuring traceability in a proven technology. Robust.
- Disadvantages: Integration of on-board deflection metrology. Dependence on position of interaction.

Electrostatics and electromagnetism
- Advantages: MEMS watt and volt balances currently available and hence development relatively cheap and quick. Promises lower relative uncertainties.
- Disadvantages: Integration of on-board deflection metrology without compromising the primary mechanism. Crosstalk with the balance.

Resonance methods
- Advantages: Development of a poorly represented technology would offer the market an alternative.
- Disadvantages: Practical issues: bandwidth selection, low Q-factors, miniaturization and absolute uncertainties. Risky development; prototype iterations could prove costly.

Van der Waals and Casimir effect
- Advantages: Harnessing ubiquitous forces.
- Disadvantages: Extreme short-range interaction, implying a less robust artefact. Dependence on interaction geometry. Hamaker constant determination.

Biochemical and protein manipulation
- Advantages: Possibility of intrinsic and hence highly repeatable force calibration.
- Disadvantages: Collaboration required due to new skills. Better suited to smaller forces (future work).

Fluid flow and capillary forces
- Advantages: Capillary forces are always present and must be understood anyway.
- Disadvantages: Fluid flow totally unsatisfactory. High uncertainties in capillary methods due to, for example, humidity dependence. Required level of traceability highly unlikely.

Radiation pressure
- Advantages: Simple experimental setup in principle.
- Disadvantages: High-power laser required (heating, safety); used as a low-force balance verification route.
10.4 References

[1] Poynting J 1879 On a method of using the balance with great delicacy Proc. R. Soc. 28 2–35
[2] Conrady A 1921 A study of the balance Proc. R. Soc. 101 211–224
[3] Bonhoure A 1952 The construction of primary standards of mass Microtecnic 6 204–206
[4] Darling A 1968 Iridium platinum alloys - a critical review of their constitution and properties Platinum Metals Review 18–26
[5] Rushforth R 1978 Machining properties of platinum (Johnson Matthey Group Research Centre: internal report)
[6] Quinn T 1985 The manufacture of 1 kilogram platinum-iridium mass standards CCM/85–16 (E)
[7] Jabbour Z 2000 Status of mass metrology at NIST in 2000 Proc. IMEKO TC3 19 103–108
[8] Girard G 1990 Third periodic verification of national prototypes of the kilogram (Procès-Verbaux, CIPM)
[9] Girard G 1990 The washing and cleaning of kilogram prototypes at the BIPM (BIPM Internal Report)
[10] Davidson S 2003 A review of surface contamination and the stability of standard masses Metrologia 40 324–338
[11] Knolle D, Firlus M, Glaeser M 1996 Cleaning investigations on platinum-iridium prototypes Proc. IMEKO TC3 15 139–144
[12] Davidson S 2005 The redefinition of the kilogram Proc. Asia-Pacific Symp. Mass, Force and Torque APMF 2005 6–11
[13] Kibble B, Robinson I, Belliss J 1990 A realization of the SI watt by the NPL moving-coil balance Metrologia 27 173–192
[14] Robinson I, Kibble B 1997 The NPL moving-coil apparatus for measuring Planck's constant and monitoring the kilogram IEEE Trans. Instrum. Meas. 46 596–600
[15] Newell D, Steiner R, Williams E, Picard A 1998 The next generation of the NIST watt balance NIST Report MOPB4–3 108–109
[16] Richard P 1999 The OFMET Watt balance EUROMET Mass and Derived Quantities 7 11–13
[17] Rottger S, Paul A, Keyser U 1997 Spectrometry for isotopic analysis of silicon crystals for the Avogadro project IEEE Trans. Instrum. Meas. 46 560–562
[18] Gonfiantini R, De Bièvre P, Valkiers S, Taylor P 1997 Measuring the molar mass of silicon for a better Avogadro constant IEEE Trans. Instrum. Meas. 46 566–571
[19] Becker P, Dorenwendt K, Ebeling G, Lauer R, Lucas W, Probst R, Rademacher H-J, Reim G, Seyfried P, Siegert H 1981 Absolute measurement of the (220) lattice plane spacing in a silicon crystal Phys. Rev. Lett. 46 1540–1544
[20] Seyfried P, Becker P, Kozdon A, Lüdicke F, Spieweck F, Stümpel J, Wagenbreth H, Windisch D, De Bièvre P, Ku H H, Lenaers G, Murphy T J, Peiser H S, Valkiers S 1992 A determination of the Avogadro constant Z. Phys. B - Condensed Matter 87 289–298
[21] Glaeser M, Ratschko D, Knolle D 1995 Accumulation of ions - an independent method for monitoring the stability of the kilogram Proc. IMEKO TC3 14 7–12
[22] Ratschko D, Knolle D, Glaeser M 2000 Accumulation of gold ions on a gold coated quartz crystal Proc. IMEKO TC3 19 237–240
[23] Kibble B 1983 Realizing the ampere by levitating a superconducting mass - a suggested principle IEEE Trans. Instrum. Meas. 32 144
[24] Fujii K, Tanaka M, Nezu Y, Sakuma A, Leistner A, Giardini W 1995 Absolute measurements of the density of silicon crystals in vacuo for a determination of the Avogadro constant IEEE Trans. Instrum. Meas. 44 5542–5545
[25] Glaeser M, Schwartz R, Mecke M 1991 Experimental determination of air density using a 1 kg mass comparator in vacuum Metrologia 28 45–50
[26] Frantsuz E, Khavinson V, Geneves G, Piquemal F 1996 A proposed superconducting magnetic levitation system intended to monitor the stability of the unit of mass Metrologia 33 189–196
[27] Sitti M, Hashimoto H 2000 Controlled pushing of nanoparticles: modelling and experiments IEEE/ASME Trans. Mechatronics 5 199–211
[28] Burnham N A, Colton R J, Pollock H M 1993 Interpretation of force curves in force microscopy Nanotechnology 4 64–80
[29] Tabor D 1991 Gases, liquids and solids; and other states of matter (Cambridge University Press: Cambridge)
[30] Lamoreaux S K 1997 Demonstration of the Casimir force in the 0.6 to 6 µm range Phys. Rev. Lett. 78 5–8
[31] Pratt J R, Smith D T, Newell D B, Kramar J A, Whitenton E 2004 Progress toward Système International d'Unités traceable force metrology for nanomechanics J. Mat. Res. 19 366–379
[32] Choi I-M, Kim M-S, Woo S-Y, Kim S-H 2004 Parallelism error analysis and compensation for micro-force measurement Meas. Sci. Technol. 15 237–243
[33] Nesterov V 2007 Facility and methods for the measurement of micro and nano forces in the range below 10⁻⁵ N with a resolution of 10⁻¹² N (development concept) Meas. Sci. Technol. 18 360–366
[34] Leach R K, Chetwynd D G, Blunt L A, Haycocks J, Harris P M, Jackson K, Oldfield S, Reilly S 2006 Recent advances in traceable nanoscale dimension and force metrology in the UK Meas. Sci. Technol. 17 467–476
[35] Jones C W, Kramar J A, Davidson S, Leach R K, Pratt J R 2008 Comparison of NIST SI force scale to NPL SI mass scale Proc. ASPE, Oregon, USA
[36] Gates R S, Pratt J R 2006 Prototype cantilevers for SI-traceable nanonewton force calibration Meas. Sci. Technol. 17 2852–2860
[37] Tortonese M, Barrett R C, Quate C F 1993 Atomic resolution with an atomic force microscope using piezoresistive detection Appl. Phys. Lett. 62 834–836
[38] Cumpson P J, Clifford C A, Hedley J 2004 Quantitative analytical atomic force microscopy: a cantilever reference device for easy and accurate AFM spring-constant calibration Meas. Sci. Technol. 15 1337–1346
[39] Pratt J R, Kramar J A, Shaw G, Gates R, Rice P, Moreland J 2006 New reference standards and artifacts for nanoscale property characterization Proc. 11th NSTI Nanotech, Boston, USA, 1st–5th June
[40] Behrens I, Doering L, Peiner E 2003 Piezoresistive cantilever as portable micro force calibration standard J. Micromech. Microeng. 13 S171–S177
[41] Cumpson P J, Hedley J, Zhdan P 2003 Accurate force measurement in the atomic force microscope: a microfabricated array of reference springs for easy cantilever calibration Nanotechnology 14 918–924
[42] Cumpson P J, Hedley J 2003 Accurate analytical measurements in the atomic force microscope: a microfabricated spring constant standard potentially traceable to the SI Nanotechnology 14 1279–1288
[43] Cumpson P J, Hedley J, Clifford C A 2005 Microelectromechanical device for lateral force calibration in the atomic force microscope: lateral electrical nanobalance J. Vac. Sci. Technol. B 23 1992–1997
[44] Stalder A, Dürig U 1995 Nanoguitar: oscillating string as force sensor Rev. Sci. Instrum. 66 3576–3579
[45] Fukuzawa K, Ando T, Shibamoto M, Mitsuya Y, Zhang H 2006 Monolithically fabricated double-ended tuning-fork-based force sensor J. Appl. Phys. 99 094901
[46] Sazonova V, Yaish Y, Üstünel H, Roundy D, Arias T A, McEuen P L 2004 A tunable carbon nanotube electromechanical oscillator Nature 431 284–287
[47] Nesterov V, Mueller M, Fremin L L, Brand U 2009 A new facility to realize a nanonewton force standard based on electrostatic methods Metrologia 46 277–282
[48] Argento C, French R H 1996 Parametric tip model and force-distance relation for Hamaker constant determination from atomic force microscopy J. Appl. Phys. 80 6081–6090
[49] Sparnaay M J 1958 Measurement of attractive forces between flat plates Physica 24 751–764
[50] Oberhauser A, Hansma P, Carrion-Vazquez M, Fernandez J M 2001 Stepwise unfolding of titin under force-clamp atomic force microscopy PNAS 98 468–472
[51] Oberhauser A F, Marszalek P E, Erickson H P, Fernandez J M 1998 The molecular elasticity of the extracellular matrix protein tenascin Nature 393 181–185
[52] Fulton A, Isaacs W 1991 Titin, a huge, elastic sarcomeric protein with a probable role in morphogenesis Bioessays 13 157–161
[53] Degertekin F L, Hadimioglu B, Sulchek T, Quate C F 2001 Actuation and characterization of atomic force microscope cantilevers in fluids by acoustic radiation pressure Appl. Phys. Lett. 78 1628–1630
[54] Dushkin C D, Yoshimura H, Nagayama K 1996 Note - direct measurement of nanonewton capillary forces J. Colloid Interface Sci. 181 657–660
[55] Choi J-H, Kim M-S, Park Y-K 2007 Quantum-based mechanical force realization in the piconewton range Appl. Phys. Lett. 90 073117
Appendix A SI units of measurement and their realization at NPL

Time: second (s)
Definition: The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.
Realisation: The second is realized by primary caesium frequency standards to about 2 parts in 10¹⁵. The majority are traditional caesium-beam designs but the latest use lasers to control and detect the atoms.

Length: metre (m)
Definition: The metre is the length of the path travelled by light in a vacuum during a time interval of 1/299 792 458 of a second.
Realisation: At NPL the metre is currently realized through the wavelength of the 633 nm radiation from an iodine-stabilized helium-neon laser, with an uncertainty of about 3 parts in 10¹¹.

Mass: kilogram (kg)
Definition: The kilogram is the unit of mass; it is equal to the mass of the international prototype of the kilogram.
Realisation: Kilogram masses and sub-multiples of 1 kg, made from similar materials, may be compared on the NPL precision balance to 1 µg.

Electric current: ampere (A)
Definition: The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed 1 m apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ N per metre of their length.
Realisation: The ampere is realized, via the watt, to about 0.08 µA using NPL's current-weighing and induced-emf method. The ohm is realized at NPL via a Thompson-Lampard calculable capacitor to about 0.05 µΩ and maintained via the quantized Hall resistance to about 0.01 µΩ. The volt is maintained to 0.01 µV using the Josephson effects of superconductivity.

Thermodynamic temperature: kelvin (K)
Definition: The kelvin is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water.
Realisation: Triple point of water cells are used at NPL to realize the triple point temperature with a reproducibility of 0.1 mK via the International Temperature Scale, in terms of which platinum resistance and other thermometers are calibrated within the range 0.65 K to 3000 K.

Amount of substance: mole (mol)
Definition: The mole is the amount of substance of a system that contains as many elementary entities as there are atoms in 0.012 kg of carbon 12.
Realisation: Measurements of amount of substance do not require the mole to be realized directly from its definition. They are made using primary methods that give results expressed in moles by combining measurements made in other SI units. The number of entities in one mole is known to 1 part in 10⁷.

Luminous intensity: candela (cd)
Definition: The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10¹² Hz and that has a radiant intensity in that direction of 1/683 W·sr⁻¹.
Realisation: The candela has been realized at NPL with an uncertainty of 0.02 %, using a cryogenic radiometer that equates the heating effect of optical radiation with that of electric power. A solid-state photometer has been developed to evaluate light of other frequencies according to the spectral luminous efficiency curve of the human eye, with an uncertainty of 0.1 %.
Appendix B SI derived units

Examples of SI derived units expressed in terms of base units:

Area: square metre (m²)
Volume: cubic metre (m³)
Speed, velocity: metre per second (m·s⁻¹)
Acceleration: metre per second squared (m·s⁻²)
Wavenumber: reciprocal metre (m⁻¹)
Density: kilogram per cubic metre (kg·m⁻³)
Current density: ampere per square metre (A·m⁻²)
Magnetic field strength: ampere per metre (A·m⁻¹)
Concentration: mole per cubic metre (mol·m⁻³)
Luminance: candela per square metre (cd·m⁻²)
Refractive index: unity (1)

SI derived units with special names and symbols (expressed in terms of other units and of base units):

Plane angle: radian (rad) = 1
Solid angle: steradian (sr) = 1
Frequency: hertz (Hz) = s⁻¹
Force: newton (N) = m·kg·s⁻²
Pressure: pascal (Pa) = N·m⁻² = m⁻¹·kg·s⁻²
Energy: joule (J) = N·m = m²·kg·s⁻²
Power: watt (W) = J·s⁻¹ = m²·kg·s⁻³
Electric charge: coulomb (C) = s·A
Electric potential difference: volt (V) = W·A⁻¹ = m²·kg·s⁻³·A⁻¹
Capacitance: farad (F) = C·V⁻¹ = m⁻²·kg⁻¹·s⁴·A²
Electric resistance: ohm (Ω) = V·A⁻¹ = m²·kg·s⁻³·A⁻²
Electric conductance: siemens (S) = A·V⁻¹ = m⁻²·kg⁻¹·s³·A²
Magnetic flux: weber (Wb) = V·s = m²·kg·s⁻²·A⁻¹
Magnetic flux density: tesla (T) = Wb·m⁻² = kg·s⁻²·A⁻¹
Inductance: henry (H) = Wb·A⁻¹ = m²·kg·s⁻²·A⁻²
Luminous flux: lumen (lm) = cd·sr = cd
Illuminance: lux (lx) = lm·m⁻² = m⁻²·cd
Activity (of a radionuclide): becquerel (Bq) = s⁻¹
Absorbed dose: gray (Gy) = J·kg⁻¹ = m²·s⁻²
atomic lattice parameter, 108, 109 atomic resolution, 178, 181 autocollimator, 13, 56, 269, 283 auto-correlation function, 237 auto-correlation length, 237 Avogadro constant, 12, 294 Avogadro method, 12 Avogadro project, 294 axial resolution, 17, 138, 141, 156
B backscattered electrons, 199, 200 band-pass filter, 213, 216 base quantities, 7 batwings, 130 beam waist, 129 bearing length ratio, 225 bearing ratio curve, 225, 226 bi-directional fringe counting, 86 bi-directional scatter distribution function, 153 bifurcated sensor, 105 BIPM. See Bureau International des Poids et Mesures birefringent, 87 Bragg angle, 109 Brewster’s angle, 104 BSDF. See bi-directional scatter distribution function buoyancy effect, 290 Bureau International des Poids et Mesures, 11
C CAD. See computer-aided design cantilever, 106, 191 capacitive instrument, 155 capacitive sensor, 99, 100 capillary force, 194, 307 carbon nanotube, 2, 196 Cartesian coordinates, 36, 263
central limit theorem, 19, 20 CGPM. See Confe´rence Ge´ne´rale des Poids et Mesures change tree, 244, 245 charge on an electron, 12 chemical force microscopy, 195 chemical vapour deposition, 196 closed dale area, 250 closed dale volume, 251 closed hill area, 251 closed hill volume, 251 CMM. See coordinate measuring machine CNT. See carbon nanotube Coblentz sphere, 153 coefficient of friction, 195 coefficient of linear thermal expansion, 93 coefficient of thermal expansion, 77 coherence length, 25, 62 coherence scanning interferometer, 131, 149 Combined Optical and X-ray Interferometer, 110 comparator, 57, 271, 289, 295 computer-aided design, 263 Confe´rence Ge´ne´rale des Poids et Mesures, 7 confocal chromatic, 131, 138 confocal curve, 135, 136, 138 confocal instrument, 130, 134 constraint, 36, 37, 38, 88 contact mode, 179, 180, 193 contrast, 61 coordinate measuring machine, 4, 10, 41, 263 coordinate metrology, 4, 263 core material volume, 241 core void volume, 241 correlation length, 237, 238 cosine error, 82, 93 course line, 244 coverage factor, 21
317
318
Index
coverage interval, 18, 19 CSI. See coherence scanning interferometer cumulative error, 92 current balance, 294 cut-off length, 165, 258
equivalent viscous damping, 50 error, 3, 15 error mapping, 269 evaluation length, 213 expanded uncertainty, 21, 92 extreme-value parameter, 218
cylindrical capacitor, 100
D dale change tree, 246, 247 dale volume, 241 damping, 38, 50 deadpath length, 93 deadweight, 12, 301 degrees of freedom, 19, 20 density of peaks, 250 developed interfacial area ratio, 239 DHM. See digital holographic microscope differential plane mirror interferometer, 90, 91, 110 diffuse reflection, 152 digital hologram, 147, 148 digital holographic microscope, 147 dimensionless quantity, 13 displacement interferometry, 30, 86, 92 displacement sensor, 3, 85 Doppler broadening, 24 Doppler shift, 86, 88 dynamic noise, 183
E EBSD. See electron backscattered diffraction elastic compression, 41 elastic element, 301 Electrical Nanobalance, 304 electromagnetic waves, 9, 59 electron backscattered diffraction, 200 electron gun, 201 electron microscope, 116 electron microscopy, 199 electronic balance, 296 electrostatic force balance, 299 elliptical polarization, 59 end standard, 8, 56 energy level, 23, 24 engineering nanometrology, 2, 55
F Fabry-Pe´rot interferometer, 23, 25, 70 feature parameter, 156, 229, 235, 243 feedback, 28, 40 FIB. See focused ion beam field of view, 126 field parameter, 162, 235 film thickness, 79 filter, 26, 125, 129 finesse, 71 five point peak height, 250 five point pit height, 250 Fizeau interferometer, 66 focal length, 78, 129 focal shift error, 141 focus variation instrument, 142 focused ion beam, 178, 205 F-operator, 230, 231 force, 3, 12 force curve, 189, 190 force–distance curve, 189, 190 fractal dimension, 252, 253 fractal geometry, 251, 252 fractal parameter, 235, 251 free spectral range, 71, 91 freeform component, 268 frequency comb, 31 frequency-stabilized laser, 23 Fresnel equations, 153 frictional force, 195 fringe counting, 86, 89 fringe fraction, 74, 80 full change tree, 246 full width at half maximum, 135 fundamental physical constants, 11 FWHM. See full width at half maximum
G gauge block, 8, 9, 79 gauge block interferometer, 81 Gaussian distribution, 20, 21
Gaussian filter, 215, 228 Gaussian probability function, 214 geometric element, 267 geometric error, 268 Geometrical Product Specification, 232, 235 ghost steps, 131 goniophotometer, 155 GPS. See Geometrical Product Specification gradient density function, 242 gravitational wave detector, 48 Guide to the Expression of Uncertainty in Measurement, 18 GUM. See Guide to the Expression of Uncertainty in Measurement
H height discrimination, 219 Heisenberg’s Uncertainty Principle, 16 helium ion microscope, 206 helium-neon laser, 9, 10, 23 heterodyne interferometer, 86, 87 Heydemann correction, 91, 95 high-pass filter, 213 hill change tree, 246 homodyne interferometer, 31, 86
I inductive sensor, 100 integrating sphere, 153 interfacial surface roughness, 152 interference, 17, 62 interference microscopy, 127 interferometer, 13, 64 interferometry, 3, 58 intermittent mode, 180 inter-molecular forces, 190 internal resonances, 50 International Organization for Standardization, 2 International Prototype Kilogram, 11, 290 international prototype of the kilogram, 11 intra-molecular forces, 193 inverse areal material ratio, 239
Index
iodine-stabilised He-Ne laser, 26, 27 ion accumulation approach, 295 ISO. See International Organization for Standardization
J Jamin beam-splitter, 90 Jamin interferometer, 68, 69 Josephson junction, 294, 295
K Kelvin clamp, 37, 38 Kilogram, 7, 10 kinematic design, 36 kinematics, 36 knife-edge, 106, 296 knife-edge balance, 289 Korea Research Institute of Standards and Science, 300 KRISS. See Korea Research Institute of Standards and Science kurtosis of the assessed profile, 223 kurtosis of topography height distribution, 236
L Lamb dip, 25 laser, 3, 9, 23 laser tracker, 56, 268 Lateral Electrical Nanobalance, 304 lateral resolution, 16, 17, 68 Lau pattern, 103 law of propagation of uncertainty, 19, 20 lay, 15, 120 length, 3, 5, 7, 55 length-scale plot, 253 Lennard-Jones potential, 193 levitated superconductor approach, 295 LFB. See low-force balance L-filter, 229, 230 line profiling, 123 line standard, 8, 56 linear calibration, 203 linear filter, 232
linear fractal complexity parameter, 253 linear interpolation, 253, 255 linear variable differential transformer, 100 Linnik objective, 146 low-force balance, 299 low-pass filter, 129, 213 LVDT. See linear variable differential transformer
M Mach-Zehnder interferometer, 68 magnification, 119, 127 mass, 3, 10 mass comparator, 289, 290, 295 material ratio of the profile, 224 material volume, 241 maximum height of the profile, 218 maximum height of the surface, 237 maximum pit height of the surface, 237 maximum profile peak height, 218 maximum profile valley depth, 218 maximum surface peak height, 237 mean height of the profile elements, 219 mean line, 214, 217 mean width of the profile elements, 224 measurand, 16, 18, 35 mechanical comparators, 57 membrane probe, 278 MEMS. See microelectromechanical systems meniscus, 194, 298 method of exact fractions, 75 metre, 5, 7, 8 metrological AFM, 188 metrology loop, 43, 44 Michelson interferometer, 48, 64 micro- and nanotechnology, 1 micro-CMM, 272 micro-electro-discharge machining, 281 microelectromechanical systems, 12 microscope objectiv, 127, 128
microsystems technology, 1 miniature CMM, 263, 272 Mirau objective, 145, 146 MNT. See micro- and nanotechnology modulation depth, 61 moire´ pattern, 103 Monte Carlo method, 18, 19, 21 motif, 229, 232 MST. See microsystems technology multiple scattering, 131, 132
N NA. See numerical aperture nanoguitar, 305 nanomaterials, 2 nanoparticle, 185, 198, 204 National Institute of Standards and Technology, 7 National Measurement Institute, 7 National Metrology Institute Japan, 7 National Physical Laboratory, 7 natural frequency, 46, 49 nesting index, 230 nettoyage-lavage, 291, 292 newton, 12 Nipkow disk, 136, 137 NIST. See National Institute of Standards and Technology NMI. See National Measurement Institute NMIJ. See National Metrology Institute Japan non-contact mode, 180, 195 non-cumulative error, 92 non-linearity, 87, 94 NPL. See National Physical Laboratory numerical aperture, 17, 104, 128 numerical wavefront propagation algorithm, 149
O objective lens, 127, 128 obliquity correction, 79 obliquity factor, 153 optical beam deflection, 181 optical cavity, 23, 24 optical encoder, 102 optical fibre sensor, 104 optical instrument, 16, 44, 126
319
320
Index
optical lever sensitivity, 191 optical resolution, 127, 128
P passive vibration isolation, 49 pattern recognition, 243 peak extreme height, 241 peak material volume, 241 pendulum, 14, 49 performance verification, 269 permittivity, 99, 100 phase change correction, 79, 81 phase change on reflection, 130 phase quadrature, 86, 90 phase sensitive detection, 153 phase-shifting interferometer, 144 phase-unwrapping algorithm, 145 physical quantity, 7 Physikalisch-Technische Bundesanhalt, 7 pickup, 123 piezoelectric scanner, 178 piezoresistive cantilever, 303 piezoresistive strain element, 303 pinhole aperture, 134 Plank’s constant, 12, 294 platen, 57 pneumatic gauging, 156 point autofocus instrument, 139 Poisson’s ratio, 42 population inversion, 23 power spectral density, 155 precision, 3, 35 primary profile, 213, 215 principle of superposition, 59 prismatic component, 268 prismatic slideway, 38 probability density function, 226 probability distribution, 18 probing force, 165 probing system, 160, 266 profile calibration artefact, 158 profile element, 217, 219, 224 profile height amplitude curve, 226 profile peak, 217, 218 profile section height difference, 226 profile valley, 217, 218 PSI. See phase-shifting interferometer PTB. See Physikalisch-Technische Bundesanhalt
Q quality factor, 63, 306 quantity of dimension one, 13 quantum mechanical effects, 11, 106
R radian, 13 random error, 97 random errors, 17 random variable, 20 Rayleigh criterion, 128 reference AFM, 187 reference graticule, 204 reference software, 167 reference surface, 66 refractive index, 28, 77, 81 refractometer, 77 relative length parameter, 254 relative material ratio, 226 resolution, 15, 16 resonant frequency, 49 reversal methods, 281 ridge line, 244 ringlight, 143 robust Gaussian filter, 231 root mean square deviation of the assessed profile, 221 root mean square gradient, 238 root mean square value of the ordinates, 236 roughness profile, 215
S saddle point, 244 sampling length, 159, 213 scale-limited surface, 229 scanning electron microscope, 199 scanning near-field optical microscope, 179 scanning probe, 2, 177 scanning probe microscope, 2, 178 scattering, 127, 152 secondary electrons, 199 segmentation, 229, 243 segmentation filter, 232 seismic vibration spectrum, 47 seismometer, 48 self-affine, 252 SEM. See scanning electron microscope
sensitivity coefficients, 20 sexagesimal, 13 SF surface, 230 S-filter, 229 sharpness, 143 shearing interferometry, 68 SI. See Système International d’Unités skewness of the assessed profile, 222 skewness of topography height distribution, 236 skid, 125 SL surface, 230 smooth–rough crossover, 252 SNOM. See scanning near-field optical microscope softgauge, 167 software measurement standard, 157, 167 solid angle, 13 solid-state laser, 23 sound pressure attenuation, 51 spacing discrimination, 219 Sparrow criterion, 128 specular reflection, 143 speed of light, 9, 59 SPM. See scanning probe microscope spot size, 129, 141 spring constant, 13, 191 standard deviation, 19 standard uncertainty, 12, 19 static noise, 183 steradian, 13 stiffness, 36, 39 stimulated emission, 24 stitching, 132 STM. See scanning tunnelling microscope stratified functional properties, 228 stray capacitance, 100 stray reflection, 97 structural loop, 43 structured light projection, 134 stylus force, 124 stylus instrument, 14, 123 stylus qualification, 263 stylus tip, 124, 125 substitute element, 264 surface damage, 124, 278 surface datum, 125
surface form, 115 surface integrity, 115 surface profile, 10, 120, 212 surface texture, 2, 117, 121 surface texture parameters, 121 surface topography, 35, 115 swept-frequency interferometry, 91 symmetry, 39, 46 systematic errors, 17, 98 Système International d’Unités, 6
T tapping mode, 180 t-distribution, 20, 21 TEM. See transmission electron microscope ten point height, 250 texture aspect ratio, 238 texture direction, 242 thermal conductivity, 45 thermal diffusivity, 45 thermal distortion, 45 thermal expansion, 45, 77, 92 thermal expansion coefficient, 43 thermal loop, 43 thermal mass, 45 TIS. See total integrated scatter total height of the surface, 219 total integrated scatter, 153 total internal reflectance, 104 total profile, 212
total traverse length, 213 touch trigger probe, 266 traceability, 7, 14 traced profile, 212 transmissibility, 49 transmission characteristic, 51, 214 transmission electron microscope, 201 triangulation instrument, 132 true value, 15 tunnelling effect, 179 two-pan balance, 296 Twyman-Green interferometer, 64 type A evaluation, 19 type B evaluation, 19
U uncertainty, 3, 15, 17 unified co-ordinate system, 234 unit, 2
V Van der Waals force, 194 vertical scanning white light interferometry, 149 vibrating probe, 279 vibration isolation system, 49 VIM, 15 virtual CMM, 271 viscous damping, 49 visibility, 61 vision system, 266 void volume, 241 volumetric error compensation, 269
W Watt balance, 11, 294 wavelength at 50% depth modulation, 129 waviness profile, 213, 216 weight, 10 weighting function, 214 Welch-Satterthwaite formula, 20 white light interference, 62 white light scanning interferometry, 149 Wolf pruning, 244 work function, 180 wringing, 56, 57
X X-ray interferometer, 108
Y Young’s modulus, 42, 192
Z Zeeman-stabilised laser, 28