Photonics Rules of Thumb
Ed Friedman John Lester Miller
Optics, Electro-Optics, Fiber Optics, and Lasers
Second Edition
McGraw-Hill New York Chicago San Francisco Lisbon London Madrid Mexico City Milan New Delhi San Juan Seoul Singapore Sydney Toronto
Copyright © 2004 by The McGraw-Hill Companies, Inc. All rights reserved. Manufactured in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher.

978-0-07-143345-7

The material in this eBook also appears in the print version of this title: 0-07-138519-3.

All trademarks are trademarks of their respective owners. Rather than put a trademark symbol after every occurrence of a trademarked name, we use names in an editorial fashion only, and to the benefit of the trademark owner, with no intention of infringement of the trademark. Where such designations appear in this book, they have been printed with initial caps.

McGraw-Hill eBooks are available at special quantity discounts to use as premiums and sales promotions, or for use in corporate training programs. For more information, please contact George Hoare, Special Sales, at [email protected] or (212) 904-4069.

TERMS OF USE

This is a copyrighted work and The McGraw-Hill Companies, Inc. (“McGraw-Hill”) and its licensors reserve all rights in and to the work. Use of this work is subject to these terms. Except as permitted under the Copyright Act of 1976 and the right to store and retrieve one copy of the work, you may not decompile, disassemble, reverse engineer, reproduce, modify, create derivative works based upon, transmit, distribute, disseminate, sell, publish or sublicense the work or any part of it without McGraw-Hill’s prior consent. You may use the work for your own noncommercial and personal use; any other use of the work is strictly prohibited. Your right to use the work may be terminated if you fail to comply with these terms.

THE WORK IS PROVIDED “AS IS.” McGRAW-HILL AND ITS LICENSORS MAKE NO GUARANTEES OR WARRANTIES AS TO THE ACCURACY, ADEQUACY OR COMPLETENESS OF OR RESULTS TO BE OBTAINED FROM USING THE WORK, INCLUDING ANY INFORMATION THAT CAN BE ACCESSED THROUGH THE WORK VIA HYPERLINK OR OTHERWISE, AND EXPRESSLY DISCLAIM ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. McGraw-Hill and its licensors do not warrant or guarantee that the functions contained in the work will meet your requirements or that its operation will be uninterrupted or error free. Neither McGraw-Hill nor its licensors shall be liable to you or anyone else for any inaccuracy, error or omission, regardless of cause, in the work or for any damages resulting therefrom. McGraw-Hill has no responsibility for the content of any information accessed through the work. Under no circumstances shall McGraw-Hill and/or its licensors be liable for any indirect, incidental, special, punitive, consequential or similar damages that result from the use of or inability to use the work, even if any of them has been advised of the possibility of such damages. This limitation of liability shall apply to any claim or cause whatsoever whether such claim or cause arises in contract, tort or otherwise.

DOI: 10.1036/0071385193
Contents
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Chapter 1 Acquisition, Tracking, and Pointing/Detection, Recognition, and Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 SNR Requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 The Johnson Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Probability of Detection Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 Correcting for Probability of Chance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Detection Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Estimating Probability Criteria from N50 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Gimbal to Slewed Weight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Identification and Recognition Improvement for Interpolation . . . . . . . . . . . . . . . . . . . . . . . 14 Resolution Requirement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 MTF Squeeze . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Psychometric Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Rayleigh Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 Resolution Required to Read a Letter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Subpixel Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 National Image Interpretability Rating Scale Criteria. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Chapter 2
Astronomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Atmospheric “Seeing” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 Blackbody Temperature of the Sun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 Direct Lunar Radiance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 Number of Actuators in an Adaptive Optic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 Number of Infrared Sources per Square Degree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Number of Stars as a Function of Wavelength. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 Number of Stars above a Given Irradiance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 Photon Rate at a Focal Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 Reduction of Magnitude by Airmass. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 A Simple Model of Stellar Populations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Chapter 3
Atmospherics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Atmospheric Attenuation or Beer’s Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Impact of Weather on Visibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Atmospheric Transmission as a Function of Visibility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 Bandwidth Requirement for Adaptive Optics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 Cn2 Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 Cn2 as a Function of Weather . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 Free-Space Link Margins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 Fried Parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 Index of Refraction of Air . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 The Partial Pressure of Water Vapor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 Phase Error Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 Shack-Hartmann Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 Vertical Profiles of Atmospheric Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Visibility Distance for Rayleigh and Mie Scattering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Chapter 4
Backgrounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Clutter and Signal-to-Clutter Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 Clutter PSD Form. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 Earth’s Emission and Reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 Effective Sky Temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 Emissivity Approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 Frame Differencing Gain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 General Infrared Clutter Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 Illuminance Changes during Twilight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 Reflectivity of a Wet Surface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 Sky Irradiance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 Spencer’s Signal-to-Clutter Ratio as a Function of Resolution . . . . . . . . . . . . . . . . . . . . . . . . . 83
Chapter 5
Cryogenics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Bottle Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 Cold Shield Coatings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 Cooler Capacity Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 Cooling with Solid Cryogen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 Failure Probabilities for Cryocoolers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 Joule–Thomson Clogging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 Joule–Thomson Gas Bottle Weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 Sine Rule of Improved Performance from Cold Shields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 Stirling Cooler Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 Temperature Limits on Detector/Dewar. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98 Thermal Conductivity of Multilayer Insulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98 Cryocooler Sizing Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 Radiant Input from Dewars. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Chapter 6
Detectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
APD Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 Responsivity of Avalanche Photodiodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 Defining Background-Limited Performance for Detectors . . . . . . . . . . . . . . . . . . . . . . . . . . 107 Digitizer Sizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 HgCdTe “x” Concentration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109 Martin’s Detector DC Pedestal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 Noise Bandwidth of Detectors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 Nonuniformity Effects on SNR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 Peak versus Cutoff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 Performance Dependence on RoA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 Responsivity and Quantum Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 Shot Noise Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 Specifying 1/f Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 Well Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 IR Detector Sensitivity to Temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Chapter 7
Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Analog Square Pixel Aspect Ratios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 Comfort in Viewing Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 Common Sense for Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Contrast . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Gamma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 Gray Levels for Human Observers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 Horizontal Sweep. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Kell Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 NTSC Display Analog Video Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 The Rose Threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 Wald and Ricco’s Law for Display Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134 Display Lines to Spatial Resolution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Chapter 8
The Human Eye . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Cone Density of the Human Eye. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 Data Latency for Human Perception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 Dyschromatopic Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 Energy Flow into the Eye. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 Eye Motion during the Formation of an Image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 Frequency at which Sequences of Images Appear as a Smooth Flow. . . . . . . . . . . . . . . . . . . 147 Eye Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 Little Bits of Eye Stuff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 Old-Age Rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 Optical Fields of View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 Pupil Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 The Quantum Efficiency of Cones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 Retinal Illumination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156 Rod Density Peaks around an Eccentricity of 30° . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 Simplified Optics Transfer Functions for the Components of the Eye . . . . . . . . . . . . . . . . . 160 Stereograph Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161 Superposition of Colors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161 Vision Creating a Field of View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Chapter 9
Lasers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Aperture Size for Laser Beams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167 Atmospheric Absorption of a 10.6-µm Laser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167 Cross Section of a Retro-reflector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 Gaussian Beam Radius Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 Increased Requirement for Rangefinder SNR to Overcome Atmospheric Effects . . . . . . . . 172 Laser Beam Divergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 Laser Beam Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 Laser Beam Scintillation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 Laser Beam Spread . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177 Laser Beam Spread Compared with Diffraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 Laser Beam Wander Variance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 Laser Brightness. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 LED vs. Laser Reliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182 LIDAR Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185 On-Axis Intensity of a Beam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 Peak Intensity of a Beam with Intervening Atmosphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 Pointing of a Beam of Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 Pulse Stretching in Scattering Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190 Thermal Focusing in Rod Lasers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Chapter 10
Material Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Cauchy Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 Diameter-to-Thickness (Aspect) Ratio for Mirrors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199 Dip Coating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Dome Collapse Pressure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201 Figure Change of Metal Mirrors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 Mass Is Proportional to Element Size Cubed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204 Mechanical Stability Rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 Mirror Support Criteria. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206 Natural Frequency of a Deformable Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 Pressure on a Plane Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208 Properties of Fused Silica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 Spin-Cast Mirrors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Chapter 11
Miscellaneous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Amdahl’s and Gustafson’s Laws for Processing Speedup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216 Arrhenius Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217 Cost of a Photon. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218 Crickets as Thermometers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219 Distance to Horizon. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220 Learning Curves. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220 Moore’s Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222 Murphy’s Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 Noise Resulting from Quantization Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 Noise Root Sum of Squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 Photolithography Yield . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226 Solid Angles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227 Speed of Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Chapter 12
Ocean Optics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Absorption Coefficient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231 Absorption Caused by Chlorophyll. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 Absorption of Ice at 532 nm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233 Bathymetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234 f-Stop under Water. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235 Index of Refraction of Seawater . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236 Ocean Reflectance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237 Underwater Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 Underwater Glow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239 Wave Slope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Chapter 13
Optics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Aberration Degrading the Blur Spot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 Aberration Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 Acousto-optic Tunable Filter Bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244 Blur vs. Field-Dependent Aberrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245 Circular Variable Filters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246 Defocus for a Telescope Focused at Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247 Diffraction Is Proportional to Perimeter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248 Diffraction Principles Derived from the Uncertainty Principle . . . . . . . . . . . . . . . . . . . . . . . 248 f/# for Circular Obscured Apertures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250 Fabry–Perot Etalons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251 Focal Length and Field of View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253 Grating Blockers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253 Grating Efficiency as a Function of Wavelength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254 Hollow Waveguides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254 Hyperfocal Distance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256 The Law of Reflectance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257 Limit on FOV for Reflective Telescopes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258 Linear Approximation for Optical Modulation Transfer Function . . . . . . . . . . . . . . . . . . . . 258 Antireflection Coating Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259 Maximum Useful Pupil Diameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 Minimum f/# . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Optical Cost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262 Optical Performance of a Telescope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264 Peak-to-Valley Approximates Four Times the Root-Mean-Square . . . . . . . . . . . . . . . . . . . . . 265 Pulse Broadening in a Fabry–Perot Etalon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 Root-Sum-Squared Blur. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 Scatter Depends on Surface Roughness and Wavelength. . . . . . . . . . . . . . . . . . . . . . . . . . . . 268 Shape of Mirrors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269 Spherical Aberration and f/# . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 Stop Down Two Stops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Chapter 14
Radiometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Absolute Calibration Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277 Bandpass Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277 Blackbody or Planck Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280 Brightness of Common Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282 Calibrate under Use Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282 Effective Cavity Emissivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283 The MRT/NE∆T Relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 The Etendue or Optical Invariant Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286 Ideal NETD Simplification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288 Laboratory Blackbody Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289 Lambert’s Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290 Logarithmic Blackbody Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291 Narrowband Approximation to Planck’s Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293 The Peak Wavelength or Wien Displacement Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294 Photons-to-Watts Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294 Quick Test of NE∆T. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295 The Rule of 4f/# . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Chapter 15
Shop Optics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Accuracy of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300 Approximations for Foucault Knife-Edge Tests. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301 Cleaning Optics Caution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302 Collimator Margin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303 Detection of Flatness by the Eye . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303 Diamond Turning Crossfeed Speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304 Effect of Surface Irregularity on the Wavefront. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305 Fringe Movement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305 Material Removal Rate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306 Oversizing an Optical Element for Producibility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307 Pitch Hardness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308 Sticky Notes to Replace Computer Punch Cards for Alignment . . . . . . . . . . . . . . . . . . . . . . 308 Preston’s Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309 Properties of Visible Glass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310 Scratch and Dig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311 Surface Tilt Is Typically the Worst Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
Chapter 16
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Baffle Attenuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315 Expected Modulation Transfer Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315 BLIP Limiting Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316 Dawes Limit of Telescope Resolution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316 Divide by the Number of Visits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317 General Image Quality Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317 Good Fringe Visibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318 LWIR Diffraction Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319 Overlap Requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319 Packaging Apertures in Gimbals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320 Pick Any Two . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Procedures to Reduce Narcissus Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321 Relationship between Focal Length and Resolution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322 Simplified Range Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322 System Off-Axis Rejection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324 Temperature Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325 Typical Values of EO System Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325 Wind Loading on a Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326 Largest Optical Element Drives the Mass of the Telescope . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Chapter 17
Target Phenomenology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
Bidirectional Reflectance Distribution Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 Causes of White Pigment’s Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333 Chlorophyll Absorptance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334 Emissivity Approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 The Hagan–Rubens Relationship for the Reflectivity of Metals . . . . . . . . . . . . . . . . . . . . . . . 337 Human Body Signature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338 IR Skin Characteristics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 Jet Plume Phenomenology Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340 Lambertian vs. Specular . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341 Laser Cross Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342 More Plume Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343 Plume Thrust Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343 Rocket Plume Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344 Solar Reflection Always Adds to Signature. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345 Temperature as a Function of Aerodynamic Heating. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Chapter 18
Visible and Television Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Airy Disk Diameter Approximates f/# (for Visible Systems). . . . . . . . . . . . . . . . . . . . . . . . . . 355 CCD Size. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355 Charge Transfer Efficiency Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356 CMOS Depletion Scaling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356 Correlated Double Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358 Domination of Spurious Charge for CCDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359 Equivalent ISO Speed of a Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360 Hobbs’ CCD Noises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360 Image Intensifier Resolution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361 Increase in Intensifier Photocathode EBI with Temperature. . . . . . . . . . . . . . . . . . . . . . . . . 362 Low-Background NE∆Q Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363 Microchannel Plate Noise Figure and Noise Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363 Noise as a Function of Temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365 Noise Equations for CMOS APSs and CCDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365 Photomultiplier Tube Power Supply Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367 P-Well CCDs are Harder than N-Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368 Richardson’s Equation for Photocathode Thermionic Current. . . . . . . . . . . . . . . . . . . . . . . 369 Silicon Quantum Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369 Williams’ Lines of Resolution per Megahertz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Appendix A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
About the Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Acknowledgments
The authors owe a debt to many; virtually every rule contained in this book is the result of a specific contribution or the inspiration of someone in the field of electro-optics. Without their efforts to develop the technology and its many applications, this book would neither exist nor have any value.

We especially thank those who have contributed to this edition: Bjorn Andressen, Joel Anspach, Cynthia Archer, Bill Bloomquist, Gary Emerson, Jim Gates, Paul Graf, Jim Haidt, Joel Johnson, Noel Jolivet, Brian McComas, Steve Ridgway, Mike Soel, Richard Vollmerhausen, Scott Way, George Williams, and John Wiltse. Separately, we also thank all of those who suggested rules and provided permissions and all of those who helped review and improve the first edition.

Finally, the authors recognize the role of our families (especially our beloved wives, Judith Friedman and Corinne Foster) for tolerating the long periods of loneliness. The reader will take note that this is the third edition, and the creation of all three has taken a decade. Our families (wives, parents, children, and five grandchildren born during this period), while supportive of this effort, must be relieved that the saga is over.
Introduction
“Few formulas are so absolute as not to bend before the blast of extraordinary circumstances.”
Benjamin Nathan Cardozo

The evolution of the electro-optical (EO) sciences parallels, and feeds from, developments in a number of somewhat unrelated fields, including astronomy, satellite and remote sensing technology, materials science, electronics, optical communications, military research, and many others. The common thread of all of this effort, which really came into focus in the 1950s, is that scientists and engineers have been able to combine highly successful electronic technologies with the more ancient concepts and methods of optics and electromagnetic wave propagation. The merging of these fields has provided an unprecedented capability for instruments to “see” targets and communicate with them in a wide range of wavelengths for the benefit of security systems, science, defense, and (more recently) consumers. Major departments at universities are now devoted to producing new graduates with specialties in this field. There is no end in sight for the advancement of these technologies, especially with the continued development of electronics and computing as increasingly integral parts of EO instrumentation.

One of the disturbing trends in this technology is the constant narrowing of the role of engineers. As the technology matures, it becomes more difficult for anyone working in an area of the EO discipline to understand all that is being done in the related sciences and engineering. This book has been assembled to make a first, small step to expose anyone working in EO to a wide range of critical topics through simple calculations and explanations. There is no intent to compete with stalwart texts or the many journals or conferences devoted to the EO field, all of which provide considerable detail in every area. Rather, this book is intended to allow any EO engineer, regardless of specialty, to make first guesses at solutions in a wide range of topics that might be encountered in system design, modeling, or fabrication, as well as to provide a guide for choosing which details to consider more diligently. Another distinguishing feature of this book is that it has few of the detailed derivations found in typical academic books. We are not trying to replace them but to provide an augmentation of what they provide.
This book will help any EO team to make quick assessments, generally requiring no more than a calculator, so that they quickly find the right solution for a design problem. The book is also useful for managers, marketeers, and other semitechnical folks who are new to the electro-optical industry (or are on its periphery) to develop a feel for the difference between the chimerical and the real. Students may find the same type of quick-calculation approach valuable, particularly in the case of oral exams in which the professor is pressuring the student to do a complex problem quickly. Using these assembled rules, you can keep your wits about you and provide an immediate and nearly correct answer, which usually will save the day. But after the day is saved, you should go back to the question and perform a rigorous analysis.

These rules are useful for quick sanity checks and basic relationships. Being familiar with the rules allows one to rapidly pinpoint trouble areas or ask probing questions in meetings. They aid in thinking on your feet and in developing a sense of what will work and what won’t. But they are not, and never will be, the last word. It is fully recognized that errors may still be present, and for that we apologize in advance to readers and those from whom the material was derived.

Like the previous two editions of this book, the motivation for this edition was to provide a vehicle for engineers working in the electro-optical fields to create and check ideas that allow them to quickly assess if a design idea would work, what it might cost, and how risky it is. All of us do this, but we usually don’t organize our own set of rules and make them public. To assist us in this endeavor, we have solicited the cooperation of as many experts as would agree to help. Their input gives us a wide variety of experience from many different technical points of view. Alas, technology advances, and all of us wonder how we can possibly keep up. Hopefully, this book will not only provide some specific ideas related to electro-optic technology but will also suggest some ways of thinking about things that will lead to a whole new generation of such rules and ideas.

As we discovered with the previous editions of this book, not everyone has the same thing in mind when considering “a rule of thumb.” To qualify for our definition of a rule of thumb, a rule should be useful to a practitioner and possess at least most of the following attributes:
■ It should be easy to implement.
■ It should provide roughly the correct answer.
■ The main points should be easy to remember.
■ It should be simple to express.
■ It should highlight the important variables while diminishing the role of generally unimportant variables.
■ It should provide useful insight to the workings of the subject matter.

In the first edition of the book, we found it valuable to create a detailed standard form and stick to it as closely as possible. We did so with the second edition and this edition as well. However, like the second edition, which concentrated on optical applications in telecommunications, we have simplified the format to eliminate duplication of material. For this edition, all additional information has been grouped into the “Discussion” section of the rule, because a reader often will want more detail than is provided in the rule itself. References are provided whenever possible. 
In addition, reference material is mentioned that can be considered as recommended reading for the reader with a desire for more detail than could be presented in the “rule” and “discussion.” Not every entry in the references was used to create the rule. The reader should note that each rule “stands on its own,” so the abbreviations and terminology may not be entirely consistent throughout. The rules in this book are intended to be correct in principle and to give the right answer to an approximation. Some are more accurate than others. Some are laws of physics, and
some represent existing technology trends. Many derive from observations made by researchers in the field, augmented by curve fitting that results in polynomic approximations. These can be quite good for explorations of the proper operating point of a system, resolving trade studies, and other applications. Readers with a desire for a more precise answer can consult the references, which usually contain some more detailed analyses that can lead to an even better answer.

Rules based on the current state of the art will be useful in the future only to demonstrate how hard an early twenty-first century electro-optical engineer had to work in these current Dark Ages. Many of the rules will become less useful and inappropriate as time marches on, but they are valid now and for the near future. Others derive directly from laws of physics and will, we expect, endure forever. However, even today, there may arise odd situations in which a particular rule will be invalid. When this happens, a detailed understanding between management and technician must exist as to why the state of the art is being beaten. It isn’t impossible to beat the state of the art—only unlikely (unless you are trying).

The authors arrived at the same place by very different paths. John spent some of his career in astronomy before joining the aerospace industry to work on infrared sensors for space surveillance. He later worked on search-and-rescue and enhanced vision systems. Ed spent much of his career working on remote sensing technologies applied to Earth, its atmosphere and oceans, and, more recently, astronomical instruments and advanced pointing systems.

We met in Denver in 1985, both working for a major government contractor on exotic electro-optical systems. Those were halcyon days, with money flowing like water and contractors winning billions of dollars for some concepts that were overly optimistic or barely possible at best. In the center of the whole fray were bureaucrats, politicians, and managers who were demanding that we design systems that would be capable of the impossible. We saw many requirements and goals being levied on our systems that were far from realistic, often resulting from confusing (and poorly understood) interpretations of the capabilities of optical and electro-optical systems and the properties of targets or backgrounds.

We found a common ground when managers discovered that many co-workers, in an attempt to outdo the competition, were promising to perform sensor demonstrations that violated many rules of engineering, if not physics. On one multibillion-dollar program, after some consistent exposure to neophytes proposing all sorts of undoable things, we decided to try to educate everyone by creating a half-serious, half-humorous posting for the local bulletin board (this was before web sites were ubiquitous) called “Dr. Photon’s Rules of Thumb.” Its content was a list of basic rules that apply when optics or electro-optics are being used. That first list consisted of simple scientific and engineering truths, inspired by the worst of the erroneous ideas that nontechnical people had proposed. Our early ideas weren’t that far from an EO Dilbert©, but we failed to put them into comic form. The goal was to eliminate many of the bad ideas and try to instill at least a little scientific foundation into the efforts of the team. Although the list of simple rules embarrassed a few of the misinformed, it was generally met with enthusiasm. 
We found copies of it not only in this project but all across the company, and even among competitors. When we decided to publish the material in a book, we needed many more rules than were contained in the original posting. The quest for rules led to key papers, hallmark books, colleagues, private submissions from experts, out-of-print books, foreign books, technical papers, and into the heart of darkness of past experience. What an education it was! Each of us was surprised and perplexed by at least a few of the things we discovered along the way. Some of these rules are common folklore in the industry. We developed a number ourselves. The original list included nearly 500 such rules, now winnowed down to the 300 or so that survive in this edition. Rule selection was based on our perceptions of
each rule’s practical usefulness to a wide range of users, designers, and managers of electro-optical systems in the early twenty-first century. The down-selection was accomplished by examining every rule for truthfulness, practicality, range of applicability, ease of understanding, and, frankly, how “cool” it is. As such, this is an eclectic assortment that will be more useful to some than to others, and more useful on some days than others. Some rules are accompanied by several nearly identical equations or formulations describing the same concept.

The bulk of this book consists of more than 300 rules, divided into 18 chapters. Each chapter begins with a short background and history of the general subject matter to set the stage and provide a foundation. The rules follow. Because many rules apply to more than one chapter, a comprehensive index and detailed table of contents are included. We apologize for any confusion you may have in finding a given rule, but it was necessary to put them somewhere and, quite honestly, it was often an arbitrary choice between two or more chapters. Students and those new to the field will find the glossary useful. Here you will find definitions of jargon, common acronyms, abbreviations, and a lexicon intended to resolve confusing and ambiguous terms.

To summarize, this collection of rules and concepts represents an incomplete, idiosyncratic, and eclectic toolbox. The rules, like tools, are neither good nor bad; they can be used to facilitate the transition of whimsical concepts to mature hardware or to immediately identify a technological path worth pursuing. Conversely, misused, they can also obfuscate the truth and, if improperly applied, derive incorrect answers. Our job was to refine complex ideas to simplified concepts and present these rules to you with appropriate cautions. However, it is your responsibility to use them correctly. Remember, it is a poor workman who blames his tools, and we hope you will find these tools useful.

Dr. Edward Friedman
John Lester Miller
Chapter 1
Acquisition, Tracking, and Pointing/Detection, Recognition, and Identification
Acquisition, tracking, and pointing (ATP), as well as detection, recognition, and identification (DRI), are critical functions in a number of scientific, military, and commercial security systems. The ATP function is often used to refer to the servo system, including the gimbals, stabilization, and slewing functions, whereas DRI often refers to the ability of the complete system to present information to a user (human or machine) tasked with performing an intelligent detection, recognition, or identification function. Several recent developments have vastly increased this capability, including multispectral and hyperspectral imagery, image fusion, image enhancement, and automatic target detection algorithms. Generally, the tasks of acquisition, tracking, and pointing occur before the target is detected, or in the first phases of detection, and traditionally have been analog in nature, although modern systems perform all of this digitally. The detection, recognition, and identification process occurs after the ATP and generally involves a human, machine vision system, or automatic target recognizer.

One can imagine early hunters going through the same activities as today’s heat-seeking missiles when the concepts of ATP/DRI are applied. For instance, the hunter looks for a bird to kill for dinner and, like the fighter pilot, eventually finds prey. He then recognizes it as a bird, identifies it as to the type of bird, and verifies that it is a tasty species of bird. Acquisition takes place as all other distractions are eliminated and attention is turned to this particular target. Next, the brain of the hunter and the computer in the aircraft begin to observe and follow the flight of the victim. Assuming that random motions are not employed (a protection tactic for both types of prey once they know they are being considered as a target), the tracking function begins to include anticipation of the trajectory of the bird/target. This anticipation is critical for allowing appropriate lead-ahead aiming, because the sensor-weapon cannot be pointed at the current position but rather must be pointed at the intended collision point. Finally, the hunter must aim (or point) his weapon properly, taking into account the lead-ahead effect as well as the size of the target.
technology solves the size problem by constantly tracking and correcting the trajectory of the weapon so that the closer it gets, the more accurate the hit. The same type of function occurs at much lower rates in a variety of scientific applications of electro-optics (EO). For example, astronomical telescopes must acquire and track the stars, comets, and asteroids that are chosen for study. The rate at which the star appears to cross the sky is much lower than that of a military or culinary target, but the pointing must be much more accurate if quality images are to be obtained. Very few EO systems do not have to perform ATP/DRI functions.

Of course, the great leaps forward in ATP/DRI over the last century have derived from a combination of optical technologies and electronics. Some advances derive, in theory and practice, from the development of radar during WWII. Indeed, some of the rules in this chapter relate to the computation of the probability of detection of targets. These computations have direct analogs in both the EO and radar worlds. A common feature of both radar and EO tracking systems is the concern about the signal-to-noise ratio (SNR) that results from different types of targets, since the quality of the ATP/DRI functions depends in a quantitative way on the SNR. Targets that are not well distinguished from background and sensor noise sources will be poorly tracked, and the success of the system eventually will be compromised, even if the pointing system is very capable. The U.S. Army's Night Vision and Electronic Sensors Directorate [part of U.S. Army Communications–Electronics Command (CECOM)] at Fort Belvoir has long been a leader in the scientific and engineering investigation of DRI for EO systems.

The development of ATP systems rests on several basic parameters: the shape and size of the target, the range between target and sensor, the contrast between the target surface characteristics and those of the surrounding scene, the atmosphere, the intensity of the target signature, the ability of the system operator to point the weapon, and the speed at which it can accommodate the trajectory (crossing rate) of the target. Some modern implementations of these systems rely on advanced mathematical algorithms for estimating the trajectory, based on the historical motions of the target. They also rely heavily on the control systems that point both the tracking system and the weapon. It is clear from recent successes in both military and scientific systems that ATP is a mature technology that is becoming limited only by the environment through which the system must view the target. Reference 1 details several of the hardware challenges a gimbaled system must overcome to provide this capability.

This chapter covers a number of rules related to EO sensing and detection, and it also addresses some empirical observations related to how humans perform detection and tracking functions. It includes some mechanical and gimbal rules; these are frequently included in ATP discussions, and, frankly, we didn't have enough good rules to form an independent chapter. Our ancestors did not know it, but they were using the same type of rules that appear in "Target Resolution vs. Line Pair Rules," in that a quantitative expression is given for the resolution needed to perform the ATP functions. Throughout this chapter, we refer to the functions of detection, recognition, and identification.
Richard Vollmerhausen2 offers the following definitions and cautions for a human searching an image for targets: ■ Target detection means that an observer (or machine) has found something that is potentially a target of interest. A detection does not mean that the observer knows that a target is present. Sometimes an object is detected because it is in a likely place; sometimes it is because a Sun glint or hot spot attracted the observer's attention. Sometimes a target is detected because it looks like a target (target recognition). Generally, detection means that some further action must be taken to view a location or object. With a thermal infrared imager, if a hot spot is viewed and the observer switches to a narrow field of view to see what the hot spot contains, the detection occurred when the hot spot attracted the observer's interest. The result of a detection is further interest by the observer; that is, the
observer might switch to a narrower field of view, select another sensor, call a lookout position, and so on. Quite often, detection, recognition, and even identification occur simultaneously. ■ Recognition involves discriminating which class of object describes the target. For military vehicles, the observer is asked to say whether the vehicle is a tank, a truck, or an armored personnel carrier (APC). If the observer identifies the class correctly (tank, truck, or APC), the task is scored as correct. That is, the observer might mistake a T72 Russian tank for an old American Sheridan tank. But he has correctly “recognized” that the target is a tank. It does not matter that he incorrectly identified the vehicle. ■ Target identification requires the observer to make the correct vehicle determination. In this case, the observer must correctly identify the target, not just the class. He must call a T72 a T72 and a Sheridan a Sheridan. Calling a T72 tank a Sheridan tank is scored as an incorrect choice. The reader should realize an important fact about recognition and identification. The difficulty of recognizing or identifying a vehicle depends both on the vehicle itself and on the alternatives or confusers. Task difficulty is established by the set of possible choices, not just the individual target that happens to be within the sensor field of view (FOV) at some point in time, and it may require more than the six or eight line pairs that are often stated as necessary for identification. For example, consider the following scenario. The U.S. is engaged with an enemy who has T72 Russian tanks. The U.S. has two allies, one using T62 Russian tanks (which look like T72 tanks) and the other using old American Sheridan tanks (which look different from Russian tanks). A “friend versus foe” decision by U.S. forces is much easier for the ally who uses the Sheridan than the ally who uses the Russian T62.
References 1. L. West and T. Segerstorm, “Commercial Applications in Aerial Thermography: Powerline Inspection, Research and Environmental Studies,” Proc. SPIE, Vol. 4020, 2000, pp. 382–386. 2. Private communications with Rich Vollmerhausen, 2003.
SNR REQUIREMENTS A signal-to-noise ratio of 6 is adequate to perform most sensing and tracking functions. Any more SNR is not needed; any less will not work well. Targets can be sensed with certainty to a range defined by the signal-to-noise ratio (SNR) and allowable false alarm rate. Beyond that range, they are usually not detectable.
Discussion This rule derives directly from standard results in a variety of texts that deal with target detection in noise. See the rule, “Pd Estimation,” in this chapter for additional details on how to compute the probability of detection for a variety of conditions. Clearly, there are some cases in which the probability of false alarm (Pfa) can be raised, which allows a small drop in the SNR for a required probability of detection (Pd). For example, if one is willing to tolerate a Pfa of 1 in 100, then the requirement on SNR drops from 6 to about 4 for 90 percent Pd. Conversely, there are some applications in which a much higher SNR is required (e.g., optical telecommunication receivers where a high SNR is required to achieve the required very low bit error rate).1 This rule assumes “white noise,” in which noise is present in all frequencies with the same probability, and no “clutter.” The situation is different with noise concentrated at a particular frequency or with certain characteristics. In general, if you have a priori knowledge of the characteristics, you can design a filter to improve performance. However, if you do not know the exact characteristics (which is usually the case with clutter), then your performance will be worse than expected by looking at tables tabulated for white noise. The more complex case in which the noise has some “color” cannot be dealt with so easily, because all sorts of possible characterizations can occur. From the graphic, one can see the high levels of probability of detection for an SNR of 6. Also, note that the probability of detection increases rapidly as SNR increases (at a false alarm of 1 × 10–4, doubling the SNR from 3 to 6 results in the probability of detection increasing from about 0.1 to well above 0.95). For example, the Burle Electro-Optics Handbook2 shows that a probability of detection in excess of 90 percent can be achieved only with a probability of false alarm of around 1 in 1 million if the SNR is about 6. In most systems, a Pfa of at most 1 in 1 million is about right. It must also be noted that the data shown in the reference are pixel rates so, in a large focal plane, there may be around 1 million pixels. Therefore, the requirement of Pfa of around 1 in 1 million limits the system to 1 false alarm per frame.
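To make the tradeoff concrete, the short Python sketch below (our own illustration, using a simplified Gaussian point-target detection model rather than the exact curves in the references) sets a threshold from the allowed false-alarm probability and then computes the resulting probability of detection for a given voltage SNR:

```python
from scipy.stats import norm

def pd_point_target(snr, pfa):
    """Gaussian point-target model: pick the threshold from Pfa, then compute Pd."""
    threshold = norm.isf(pfa)        # threshold in units of the noise standard deviation
    return norm.sf(threshold - snr)  # probability that signal plus noise exceeds the threshold

# SNR of about 6 supports Pd > 0.9 at Pfa = 1e-6; relaxing Pfa to 1e-2 lets SNR drop to about 4
for snr, pfa in [(6.0, 1e-6), (4.0, 1e-2)]:
    print(f"SNR = {snr}, Pfa = {pfa:g} -> Pd = {pd_point_target(snr, pfa):.3f}")
```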
References 1. J. Miller and E. Friedman, Optical Communications Rules of Thumb, McGraw-Hill, New York, p. 43, 2003. 2. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 112, 1974, http:// www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003. 3. D. Wilmot et al., “Warning Systems,” in Vol. 7, Countermeasure Systems, D. Pollock, Ed., The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham WA, p. 61.
THE JOHNSON CRITERIA
1. Detection (an object is present): 0.5 to 1 line pair or less
2. Orientation (the potential direction of motion of the object is known): 2 to 3 line pairs
3. Reading an English alphanumeric: 2.5 to 3 line pairs*
4. Recognition (the type of object is known): 3 to 4 line pairs
5. Identification (type of vehicle is known): 6+ line pairs
Discussion
The functions of detection, recognition, and identification of a target by a human observer (typically called observation tasks) depend on the signal-to-noise ratio and the precision with which the target is resolved. This is founded on empirical observations of actual likely users viewing tactical military targets. The Johnson criteria use the highest spatial frequency visible (fJ) at the apparent target-to-background contrast to quantify target acquisition range. Range is proportional to fJ, with the proportionality depending on task difficulty. It should be noted that, to use the criteria correctly, target area and contrast refer to averages over the group of targets involved in the scenario.

The Johnson criteria are used to quantify sensor image quality. First, one takes the RSS of the target-to-background contrast and determines the highest spatial frequency visible (to a human) through the entire system (telescope, focal plane, electronics, and display) with all of its noise sources. Then, range is given for a particular observation task based on the Johnson criteria. The actual range depends on task difficulty, which is determined by experiment and/or experience. Although the Johnson criteria have some difficulty when noise is present (as there is spread in the data), noise nevertheless is included in the criteria.

The most important caution about this rule is that it assumes that the observer is relying on a single spectral bandwidth for the observation task. A multispectral instrument need not be an imager to provide sufficient target detection information with less resolution. Also, the DRI processes are part and parcel of human perception and do not apply to machine vision or advanced image processing algorithms. These results are sensitive to test setup, human proficiency, and target class. The cycles presented here do not include the deleterious background effects of clutter; generally, more cycles are needed to perform a given observation task as clutter increases. Resolution as described above will result in proper performance by 50 percent of the observers asked to perform the observational task under nominal conditions. More detail on the expected number of line pairs on a target is contained in Table 1.1, from Ref. 1. If you are unsure about the number of cycles required and need to do a calculation, use the nominal value in Table 1.2 as a guide.

TABLE 1.1 Selected Specific Number of Cycles of Resolution to Accomplish a Given Observation Task for Specified Targets

Target            Detection   Orientation   Recognition   Identification
Truck             0.9         1.25          4.5           8.0
M48 tank          0.75        1.2           3.5           7.0
Stalin tank       0.75        1.2           3.3           6.0
Centurion tank    0.75        1.2           3.5           6.0
Halftrack         1.0         1.5           4.0           5.0
Jeep              1.2         1.5           4.5           5.5
Command car       1.2         1.5           4.3           5.5
Soldier           1.5         1.8           3.8           8.0
105 Howitzer      1.0         1.5           4.8           6.0

*Johnson never addressed reading alphanumerics; however, this has become increasingly important with security and surveillance systems, and a body of work is developing that indicates that the required number of cycles is about 2.5 for a 50 percent probability. See Refs. 3 and 4 and associated rules in this chapter.
TABLE 1.2 Typical Cycles Required for a Given Observation Function (Assuming Two-Dimensional Sampling)
Number of line pairs required:

Observation function     Typical minimum required   Nominal value (applicable when more detail is unknown)   Typical maximum required
Detection                n/a                        0.75                                                      1.5
Classification           1                          2                                                         3
Reading alphanumeric     2.5                        2.8                                                       4
Recognition              2.5                        3                                                         4
Identification           5                          6                                                         n/a
A line pair is a particular way to define spatial resolution. It is equal to a dark bar and a white space, often called one cycle, across the critical dimension. Neophytes sometimes confuse this with the number of pixels, but it is not the same. A line pair can be crudely assumed to correspond to two pixels, assuming perfect phasing across the target, focal plane, and display (see the Kell factor rule in Chapter 7, "Displays"). So, when 4.0 line pairs are quoted, it is equal to identifying a pattern with 4 bars and equal-width spaces between them. This requires a minimum of 8 pixel footprints across the target and often as many as 11 (8/0.7).

The above criteria tend to fall apart somewhat at the extremes of detection and identification. Certainly, subpixel detection can easily occur if the SNR is high. If the intensity is high enough, a target need not subtend a full pixel to be detected, or we would never be able to see a single star at night. (Actually, the optics of the eye provide a diffraction blur larger than a single rod or cone and, in fact, this technique of making the blur larger than the detector is often used in star trackers.) Conversely, it is often impossible to correctly distinguish between similar targets with many more than the generally accepted 6 or 8 line pairs across them (e.g., the difference between a Camaro and a Firebird, or two similar Kanji symbols). In this rule, the mentioned line pairs are across the target's "critical" dimension, which can be assumed to be the minimal dimension for a worst-case scenario. However, the critical dimension is generally calculated as the square root of the target's projected area for two-dimensional sampling.

The Johnson criteria are inherent to the classic FLIR performance codes of ACQUIRE and NVTHERM and have become part of the basic lexicon in the electronic imaging community. Therefore, it must be noted that the current Night Vision and Electronic Sensors Directorate [a part of U.S. Army Communication–Electronics Command (CECOM)] has a rich history in this field, and its staff were pioneers of the basic concept of using the number of resolution elements across a target to detect, recognize, or identify it. Historically, the subject appeared in 1940s literature, authored by Albert Rose, J. Coltman, and Otto Schade, involving research into the human eye and perception. In the late 1950s, John Johnson (of the U.S. Army) experimented with the effect of resolution on one's ability to perform target detection, orientation, recognition, and identification functions using image intensifiers. This was followed by much additional research by Johnson, Ratches, Lawson, and others from the 1950s through 1970s. The effects of signal-to-noise ratio were added in the 1970s by Rosell, Wilson, and Gerhart, and Vollmerhausen added to the models in the 1990s. Driggers, Vollmerhausen, and others are continuing to refine this concept in the early twenty-first century. The U.S. Army is developing a new metric based on the Johnson criteria, but able to accommodate digital imagery. The Johnson criteria use the highest spatial frequency seen through the sensor and display at the apparent target-to-background contrast to quantify image "quality." The Johnson criteria relate to average contrast at one frequency, so there are problems (e.g., with sampled imagers, image boost, and digital filters)
that make the Johnson criteria conservative for 2-D imaging sensors. The new metric will attempt to accommodate these situations. Called the target task performance metric (TTP), it is equal to2

TTP = \int \left[ \frac{C_{tgt}\, MTF(\xi)}{CTF(\xi)} \right]^{1/2} d\xi

where
Ctgt = contrast of the target
ξ = sampling frequency
MTF(ξ) = modulation transfer function as a function of ξ
CTF(ξ) = contrast transfer function as a function of ξ

Then, the range is calculated from

Range = \frac{TTP \sqrt{A_t}}{N_R}

where
At = area of the target
NR = value for the "N" required

Lastly, there has been much discussion and controversy about the actual number of line pairs needed to do the functions, depending on clutter, spectral region, quality of images, test control, display brightness, and a priori knowledge. However, rarely have there been suggestions that the above numbers are incorrect by more than a factor of 3.
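As an illustration of how the TTP metric might be evaluated numerically, the sketch below (our own construction; the Gaussian MTF, the exponential contrast threshold function, and the contrast, area, and cycle values are all assumed placeholders, not values from the references) integrates the metric over the frequencies where the displayed target contrast exceeds the threshold and then converts it to a range estimate:

```python
import numpy as np

c_tgt = 0.2                      # apparent target contrast (assumed)
a_t = 9.0                        # target area, m^2 (assumed)
n_req = 6.0                      # cycles required for the task (assumed, e.g., identification)

xi = np.linspace(0.01, 20.0, 4000)        # spatial frequency, cycles/mrad
mtf = np.exp(-(xi / 8.0) ** 2)            # system MTF, assumed Gaussian roll-off
ctf = 0.002 * np.exp(xi / 4.0)            # observer contrast threshold function, assumed shape

ratio = c_tgt * mtf / ctf
usable = ratio > 1.0                      # frequencies where target contrast exceeds threshold
ttp = np.trapz(np.sqrt(ratio[usable]), xi[usable])   # TTP = integral of sqrt(Ctgt*MTF/CTF)

range_km = ttp * np.sqrt(a_t) / n_req     # (cycles/mrad)*(m)/(cycles) -> km
print(f"TTP = {ttp:.1f} cycles/mrad, predicted range = {range_km:.1f} km")
```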
References 1. J. Howe, “Electro-Optical Imaging System Performance Prediction,” in Vol. 4, ElectroOptical Systems Design, Analysis and Testing, M. Dudzik, Ed., The Infrared and ElectroOptical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 92, 99, 1993. 2. R. Vollmerhausen and E. L. Jacobs, “New Metric for Predicting Target Acquisition Performance,” Proc. SPIE, Vol. 5076, Infrared Imaging Systems: Design, Analysis, Modelling and Testing, XIV, April 2003. 3. J. Miller and J. Wiltse, “Resolution Requirements for Reading Alphanumerics,” Optical Engineering, 42(3), pp. 846–852, March 2003. 4. J. Wiltse, J. Miller, and C. Archer, “Experiments and Analysis on the Resolution Requirements for Alphanumeric Readability,” Proc. SPIE, Vol. 5076, Infrared Imaging Systems: Design, Analysis, Modelling and Testing, XIV, April 2003. 5. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 121, 1974, found at http://www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003. 6. NVTherm Users Manual, U.S. Army or ONTAR, 2000. 7. L. Biberman, “Introduction: A Brief History of Imaging Devices for Night Vision,” in Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 1-11 through 1-16, 2000. 8. G. Holst, CCD Arrays Cameras and Displays, JCD Publishers, Winter Park, FL, pp. 362–364, 1998. 9. J. Ratches et al., “Night Vision Laboratory Static Performance Model for Thermal Viewing Systems,” ECOM Report, ECOM-7043, 1975. 10. G. Gerhart et al., “The Evaluation of Delta T Using Statistical Characteristics of the Target and Background,” Proc. SPIE, Vol. 1969, Infrared Imaging Systems: Design, Analysis, Modeling, and Testing, IV, pp. 11–20, 1993.
PROBABILITY OF DETECTION ESTIMATION
Some simple expressions provide very good approximations to the probability of detection. For example,1

P_d \approx \frac{1}{2}\left[ 1 + \mathrm{erf}\!\left( \frac{I_s - I_t}{\sqrt{2}\, I_n} \right) \right]

where Is, It, and In are the detector currents associated with the signal, the threshold, and the noise, respectively. In addition, Kamerman2 has a different approximation, as follows:

P_d = \frac{1}{2}\left\{ 1 + \mathrm{erf}\!\left[ \sqrt{\tfrac{1}{2} + SNR} - \sqrt{\ln\!\left( \frac{1}{P_{fa}} \right)} \right] \right\}

for SNR > 2 and Pfa (the probability of false alarm) between 10⁻¹² and 10⁻³.
Discussion
The detection of targets is described in mathematical terms in a number of contexts. However, there is commonality between the formulations used for radar, optical tracking, and other applications. The above equations are based on theoretical analysis of detection of point sources in white noise and backed up by empirical observations. The calculation of the probability of detection, Pd, of a signal in Gaussian noise is unfortunately quite complex. The exact expression of this important parameter is2

P_d = \frac{1}{\pi} \int_{V_T}^{\infty} x \exp\!\left( -\frac{x^2 + A^2}{2} \right) \left[ \int_{0}^{\pi} \exp( x A \cos y )\, dy \right] dx

where
x, y = integration variables
A = \sqrt{2\, SNR}
V_T = \sqrt{ -2 \ln( P_{fa} ) }
Pfa = probability of false alarm
SNR = signal-to-noise ratio

As in any other approximation, the limits of application of the simplified version must be considered before using the rule. However, the rules shown above are quite broad in their range of application. These rules assume "white noise," which means that the noise that is present in the system has equal amplitude at all frequencies. This approximation is required to develop any general results, because the spectral characteristics of noise in real systems are more complex and can, of course, take on an infinite number of characteristics. This assumes point source detection, without consideration of resolution or "lines across a target." In all cases, erf refers to the error function, defined as

\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-u^2}\, du
The error function is widely available in reference tables and is included in most modern mathematical software. For example, it can be found in virtually any book on statistics or mathematical physics.
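For readers who want to check the approximation against the exact expression, the following Python sketch (our own illustration) evaluates the double integral numerically, using the fact that the inner integral is π times the modified Bessel function I0, and compares it with Kamerman's closed form:

```python
import numpy as np
from scipy.special import erf, i0e

def pd_exact(snr, pfa):
    """Numerical evaluation of the exact expression given above."""
    a = np.sqrt(2.0 * snr)
    vt = np.sqrt(-2.0 * np.log(pfa))
    x = np.linspace(vt, vt + a + 12.0, 20000)
    # x*exp(-(x^2 + A^2)/2)*I0(x*A), rewritten with the scaled Bessel function i0e for stability
    integrand = x * i0e(x * a) * np.exp(-0.5 * (x - a) ** 2)
    return np.trapz(integrand, x)

def pd_kamerman(snr, pfa):
    """Kamerman's approximation, stated valid for SNR > 2 and 1e-12 < Pfa < 1e-3."""
    return 0.5 * (1.0 + erf(np.sqrt(snr + 0.5) - np.sqrt(np.log(1.0 / pfa))))

for snr in (2.0, 5.0, 10.0, 20.0):
    print(f"SNR = {snr:5.1f}: exact = {pd_exact(snr, 1e-6):.3f}, "
          f"approx = {pd_kamerman(snr, 1e-6):.3f}")
```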
References 1. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 111, 1974, http:// www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003. 2. G. Kamerman, “Laser Radar,” in Vol. 6, Active Electro-Optical Systems, C. Fox, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham WA, p. 45, 1993. 3. K. Seyrafi and S. Hovanessian, Introduction to Electro-Optical Imaging and Tracking Systems, Artech House, Norwood, MA, pp. 147–157, 1993.
CORRECTING FOR PROBABILITY OF CHANCE
Vollmerhausen1 states that the probability of chance must be accounted for before using experimental data to calibrate DRI models:

P_{model} = \frac{ P_{measured} - P_{chance} }{ 1 - P_{chance} }

where
Pmodel = the probability to use in the model
Pmeasured = the measured probability
Pchance = the probability of correctly identifying (or recognizing) the target or target class just by chance
Discussion Even a blind squirrel will occasionally get a nut. That is, by chance, a number of correct answers will result. Models for detection, recognition, and identification (DRI) must have this “chance” probability of guessing correctly removed using the above equation. Guessing doesn’t inherently reduce the chance of doing things right. This can be determined only if you know the truth with certainty. The point is that developing models requires that the amount of guessing be known and appropriately considered. If 4 targets (target classes) are used in the experiment, then Pchance is 0.25, and if 12 targets (target classes) are used, then Pchance is 0.08. To compare model predictions with field data, the above formula is inverted as shown below: Pmeasured = Pmodel ( 1 – Pchance ) + Pchance
Example
Assume that the Johnson criteria are used to find the probability of classifying tracked versus wheeled vehicles. The critical dimension of the vehicles (square root of area) is 4 m. From the Johnson criteria rule, explained elsewhere in this chapter, the N50 (the number of cycles across the critical dimension for 50 percent of respondents to correctly perform the observation task) for this task is 2. In general, if there are only two categories, the observer will be correct half the time with his eyes closed! The model probabilities must be corrected for chance as follows:

Predicted measured probability = Model probability (1 – Pchance) + Pchance
Predicted measured probability = 0.5 (1 – 0.5) + 0.5
Predicted measured probability = 0.75

Now consider an example. A scanning forward-looking infrared (FLIR) system with an instantaneous field of view of 0.1 milliradians (mrad) (and hence a cutoff frequency, Fcutoff, of 1/0.1 cycles/mrad) is used for this classification task against a target with a critical dimension (w) of 4 m. Therefore, from the resolution requirement rule,

R \approx \frac{ w F_{cutoff} }{ 1.5\, N_{50} } = \frac{ (4)\left( \frac{1}{0.1} \right) }{ (1.5)(2) } \approx 13.3 \text{ kilometers}

Thus, at a range of 13.3 km, the task is performed correctly 75 percent of the time.
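The arithmetic in the example is easy to script. The sketch below (a minimal illustration; the helper names are ours) applies the chance correction in both directions and reproduces the 75 percent figure and the 13.3-km range from the example above:

```python
def correct_for_chance(p_measured, p_chance):
    """Remove guessing from measured data before calibrating a DRI model."""
    return (p_measured - p_chance) / (1.0 - p_chance)

def predict_measured(p_model, p_chance):
    """Fold guessing back in when comparing model predictions with field data."""
    return p_model * (1.0 - p_chance) + p_chance

p_chance = 0.5                                  # two classes: tracked vs. wheeled
print(predict_measured(0.5, p_chance))          # -> 0.75

w, n50, f_cutoff = 4.0, 2.0, 1.0 / 0.1          # m, cycles, cycles/mrad
range_km = w * f_cutoff / (1.5 * n50)           # resolution requirement rule
print(f"range = {range_km:.1f} km")             # -> about 13.3 km
```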
References 1. Private communications with Rich Vollmerhausen, 2003. 2. R. Duda, P. Hart, and D. Stork, Pattern Classification, John Wiley & Sons, New York, 2001, pp. 20–83.
DETECTION CRITERIA Targets can be easily detected if half (or more) of the photons in the pixel are from that target and half are from noise sources.
Discussion
The signal-to-noise ratio of a detector is

\frac{ e_s^- }{ \sqrt{ e_s^- + e_n^- } }

In this equation, e_s^- represents the number of electrons generated in the detector by signal photons [and for point sources, the background photons within the instantaneous field of view (IFOV)]. It is equal to the rate of arrival at the focal plane of signal photons times the quantum efficiency of the detector and the integration time. e_n^- is the number generated by all of the noise sources. It is made up of contributions from photon noise in the background, along with the contribution from leaked background clutter and internal detector noise. If e_s^- = e_n^-, the signal-to-noise ratio is

\frac{ e_s^- }{ \sqrt{ 2 e_s^- } } \quad \text{or} \quad 0.71 \sqrt{ e_s^- }

Because the generation of electrons from the signal is proportional to the arrival of photons from the target, and this number is generally quite large, the SNR can be quite high. This is rarely the result expected by the uninitiated.
Signal-to-noise ratios must still be greater than about 5 for good target detection prospects; therefore, more than 70 photons per integration time should fall on the detector (assuming a 70 percent quantum efficiency). If no noise sources are present, then this equation reduces to

\sqrt{ e_s^- }
and only about 25 signal electrons are needed for detection. Photons from noise sources in the scene generate electrons in direct proportion to the quantum efficiency at their wavelength. This is because both the signal and background photon fluxes follow Poisson statistics. For laser detection, avalanche photodiode, UV, and visible systems, the noise is frequently dominated by photon noise from the target and background. Therefore, this rule holds true if the total of the target and background photogenerated electrons is high (on the order of 50 electrons or more), because this assures a current signal-to-noise ratio in excess of 6 or 7. This also holds true for IR systems, as both their targets and backgrounds generate many more electrons. This rule does not apply for detection in high clutter or background conditions or where noise sources other than that of the detector dominate. For example, in a Sony charge-coupled device (CCD), the noise per pixel per field from the video amplifier is typically 50 or 60 electrons, corresponding to a 2500 to 3600 electron well charge. Thus, this rule would not apply to low light level applications of a commercial CCD.
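A minimal photon-budget sketch (our own illustration, assuming the photon-noise-limited, white-noise conditions stated above) shows how the electron counts translate into SNR:

```python
import math

def detector_snr(signal_photons, noise_electrons, quantum_efficiency=0.7):
    """SNR = e_s / sqrt(e_s + e_n), with e_s from the photon count and quantum efficiency."""
    e_s = signal_photons * quantum_efficiency
    return e_s / math.sqrt(e_s + noise_electrons)

# ~70 photons per integration time at 70 percent QE, with equal signal and noise electrons
e_s = 70 * 0.7
print(f"SNR with e_n = e_s: {detector_snr(70, e_s):.1f}")        # about 5

# With no other noise sources, SNR = sqrt(e_s), so about 25 signal electrons give SNR = 5
print(f"SNR with e_n = 0:   {detector_snr(25 / 0.7, 0.0):.1f}")
```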
ESTIMATING PROBABILITY CRITERIA FROM N50
The Johnson criteria, mentioned elsewhere in this chapter, provide for a 50 percent probability for target acquisition tasks (N50). To estimate higher or lower probabilities, one can use this empirical curve fit:

P(N) = \frac{ \left( \frac{N}{N_{50}} \right)^{E} }{ 1 + \left( \frac{N}{N_{50}} \right)^{E} }

where
N50 = the number of cycles needed to be resolved across the target dimension for 50 percent of the observers to get the target choice correct (with the probability of chance subtracted); target dimension is typically taken as the square root of target area
N = the number of cycles actually resolved across the target
E = an empirical scaling factor, equal to

E = 1.7 + 0.5 \left( \frac{N}{N_{50}} \right)
Discussion The Johnson criteria (see the “Johnson Criteria” rule, p. 4) were originally developed to yield the probability of detection, recognition, and identification (DRI) of military targets with a 50 percent correct probability. Much work has been done to extend the 50 percent
to higher numbers (e.g., for weapon targeting systems) and lower probabilities (e.g., for general surveillance systems). Frequently, the engineer will be asked for various levels of probability. The above equations are based on curve fits to empirical data. A similar equation is provided by Vollmerhausen in Ref. 1. The difference is that N/N_{50} is replaced by MTF_{cutoff}/K, where K is fitted by experimental data and MTF stands for the modulation transfer function, a measure of the optical performance of the system. Table 1.3 can be used for some specific probabilities and represents modernized data from Ref. 2.

TABLE 1.3 Suggested Multiplier from N50 to Nx*

Probability (Nx)    Multiplier to go from N50 to the probability on the left
0.98                >2 and ≤3
0.95                2
0.8                 1.5
0.7                 1.2
0.3                 0.75–0.8
0.1                 0.5

*For example, to go from N50 to N95, multiply N50 by 2.
The equation assumes that the observer has plenty of time for the observation task. Also, the term probability has a unique meaning in this case. Given that a group of observers try to detect, recognize, or identify each vehicle in a group of targets, then the probability is the fraction of correct answers out of the total number of tries. Readers should realize that the empirical equation for "E" has varied over the years, and they may find related but different versions in the references and older models, such as E = 3.8 + 0.7(N/N50). The version used in this rule was the most recent and acceptable as of the publication of this book and should be used. However, authorities are refining this, so the equation might experience slight changes in the future.

Figure 1.1 shows the performance described by the above equation as a function of N for each of the tasks. On the horizontal axis is the number of resolution elements across the critical dimension, and the vertical axis shows the probability of successfully completing the task of detection, classification, recognition, or identification. This is based on the number of resolution elements (not line pairs) for N50 of 1.5 for detection, 3.0 for classification, 6.0 for recognition, and 12.0 for identification. Thus, when there are 3 cycles across the critical dimension of a target, there are 6 pixels, and the plot shows a 50 percent probability of classification. When there are 10 pixels, the probability of correct recognition jumps to almost 90 percent.

Much of the work in this field has been done by the Night Vision and Electronic Sensors Directorate, part of U.S. Army Communication–Electronics Command (CECOM).
FIGURE 1.1 Probability of detection, classification, recognition, and identification as a function of the number of pixels across the critical dimension.
Interested readers should refer to the several other rules in this chapter relating to similar topics, particularly the one on the Johnson criteria.
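A short function makes the curve fit easy to exercise. The sketch below (our own illustration; the chosen N50 is an arbitrary example value) implements the empirical expression with its exponent E and evaluates it for a few resolved-cycle counts:

```python
def p_task(n_resolved, n50):
    """Empirical probability of completing the observation task, chance already removed."""
    e = 1.7 + 0.5 * (n_resolved / n50)
    ratio = (n_resolved / n50) ** e
    return ratio / (1.0 + ratio)

n50 = 4.0                      # example N50 for the task at hand (assumed)
for n in (2.0, 4.0, 6.0, 8.0):
    print(f"N = {n:3.0f} cycles: P = {p_task(n, n50):.2f}")   # P = 0.5 when N = N50
```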
References 1. R. Vollmerhausen et al., “Influence of Sampling on Target Recognition and Identification,” Optical Engineering 38(5): p. 763, 1999. 2. G. Holst, CCD Arrays Cameras and Displays, JCD Publishing, Winter Park, FL, 1996, pp. 364–365. 3. J. Howe, “Electro-Optical Imaging System Performance Prediction,” in Vol. 4, ElectroOptical Systems Design, Analysis and Testing, M. Dudzik, Ed., The Infrared and ElectroOptical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 92, 1993. 4. NVTHERM Users Manual, ONTAR, Andover, MD, 1999. 5. FLIR92 Users Manual, U.S. Army. 6. Software Users Manual for TV Performance Modeling, p. A36, Sept. 1991. 7. R. Harney, “Information-Based Approach to Performance Estimation and Requirements Allocation in Multisensor Fusion for Target Recognition,” Optical Engineering, 36(3): March 1997. 8. J. Howe, “Thermal Imaging Systems Modeling—Present Status and Future Challenges,” in Infrared Technology XX, B. F. Andresen, Ed., Proc. SPIE 2269, pp. 538–550, 1994. 9. R. Driggers et al., “Targeting and Intelligence Electro-Optical Recognition Modeling: A Juxtaposition of the Probabilities of Discrimination and the General Image Quality Equation,” Optical Engineering, 37(3): pp. 789–797, March 1998. 10. J. Ratches et al., “Night Vision Laboratory Static Performance Model for Thermal Viewing Systems,” ECOM Report ECOM-7043, 1975. 11. Private communications with R. Vollmerhausen, 2003.
GIMBAL TO SLEWED WEIGHT
The mass of a gimbal assembly is directly proportional (or nearly so, scaling with a slight power such as 1.2) to the mass of the slewed payload.
Discussion
This is based on typical industry experience for state-of-the-art systems. To use this scaling, accelerations, velocities, base rejection, and stability must be identical; slewed masses should be within a factor of 3 of each other; and the gimbals must be of the same type, material, and number of axes. Lastly, environmental, stability, stiffness, and slewing specifications must be similar. A given gimbal's mass depends highly on its design, material composition, size, required accelerations, base motion rejection, stability, the lightweighting techniques applied, and the mass and momentum of the object to be pointed. The mass of an unknown gimbal assembly can be estimated by scaling from the known mass of a similar gimbal. Past designs and hardware indicate that the gimbal mass scales approximately linearly with the mass to be slewed. See Fig. 1.2.
FIGURE 1.2 FLIR Systems’ SAFIRE, an example of a gimbaled electro-optical system. (Courtesy of FLIR Systems Inc.)
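Because the rule is a simple scaling law, it can be applied directly. The sketch below (illustrative only; the masses and the choice of exponent are assumptions consistent with the rule's stated range) estimates a new gimbal mass from a similar known design:

```python
def scaled_gimbal_mass(known_gimbal_kg, known_payload_kg, new_payload_kg, exponent=1.0):
    """Scale a known gimbal mass by the payload-mass ratio (exponent 1.0 to ~1.2)."""
    return known_gimbal_kg * (new_payload_kg / known_payload_kg) ** exponent

# Known design: a 30 kg gimbal slews a 20 kg payload; estimate for a 40 kg payload
print(scaled_gimbal_mass(30.0, 20.0, 40.0))            # linear scaling
print(scaled_gimbal_mass(30.0, 20.0, 40.0, 1.2))       # slight-power scaling
```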
IDENTIFICATION AND RECOGNITION IMPROVEMENT FOR INTERPOLATION
Performance improvement depends on the conditions and the interpolation technique, and results vary widely. Generally, identification range improves by 20 to 65 percent with simple interpolation.
Discussion
The authors of this book have long heard anecdotal evidence of increased detection, recognition, and identification (DRI) ranges as a result of simple interpolation between pixels to pseudo-enhance the display resolution. The above rule is based on one of the few actual tests of this concept.1

Pixel interpolation is very useful when combined with electronic zoom (e-zoom). E-zoom makes the picture bigger and overcomes the modulation transfer function (MTF) of the display and eye. However, e-zoom is often the trivial duplication of pixels. When the duplicated group of pixels becomes easily visible to the eye, the pixel structure hides (disrupts) the underlying image. When pixel structure is visible, the eye cannot spatially integrate the underlying image. To avoid this situation, interpolation is used. This can make a substantial difference in the operation of real-world sensors. For example, imagine the electronic zoom being so large that the pixelation is readily apparent. In this case, the loss of resolution (MTF loss) from the display or video recorders has little impact on the final system resolution, ignoring spurious response. This becomes especially important when dealing with surveillance and security legal criteria and post-surveillance processing issues, as these situations involve factors other than detection and are frequently recorded or digitally transmitted.

Many algorithms exist for obtaining superresolution. These superresolution algorithms take pixel interpolation a step further. Rather than a simple interpolation, they actually increase the signal frame spatial content. This is typically done by sampling a scene faster than the displayed rate and imposing a slight pixel offset (sometimes using the natural optical flow, a deliberately imposed "dither," target motion, or jitter to accomplish the offset). For point sources (or sharp edges), the Gaussian blur across several pixels can be used to calculate a position much more accurately than the IFOV and to provide superresolution accuracy.

The above discussion assumes that resolution is the limiter, not noise. Superresolution and pixel interpolation do not help to reduce noise.
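To make the distinction between trivial pixel duplication and simple interpolation concrete, the following sketch (our own illustration; the function name and zoom factor are arbitrary) performs e-zoom either by replication or by bilinear interpolation of a 2-D image array:

```python
import numpy as np

def e_zoom(img, factor, interpolate=True):
    """Electronic zoom by pixel replication (interpolate=False) or bilinear interpolation."""
    rows = np.linspace(0, img.shape[0] - 1, img.shape[0] * factor)
    cols = np.linspace(0, img.shape[1] - 1, img.shape[1] * factor)
    if not interpolate:
        # trivial duplication of pixels: blocky structure becomes visible at large zooms
        return img[rows.round().astype(int)][:, cols.round().astype(int)]
    r0, c0 = np.floor(rows).astype(int), np.floor(cols).astype(int)
    r1, c1 = np.minimum(r0 + 1, img.shape[0] - 1), np.minimum(c0 + 1, img.shape[1] - 1)
    fr, fc = (rows - r0)[:, None], (cols - c0)[None, :]
    top = img[r0][:, c0] * (1 - fc) + img[r0][:, c1] * fc
    bottom = img[r1][:, c0] * (1 - fc) + img[r1][:, c1] * fc
    return top * (1 - fr) + bottom * fr

frame = np.random.rand(8, 8)        # stand-in for a small sensor frame
zoomed = e_zoom(frame, 4)           # 4x e-zoom with interpolation
```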
References 1. R. Vollmerhausen and R. Driggers, Analysis of Sampled Imaging Systems, SPIE Press, Bellingham, WA, pp. 105–108, 2000. 2. J. Schuler and D. Scribner, “Dynamic Sampling, Resolution Enhancement, and Super Resolution,” in Analysis of Sampled Imaging Systems, R. Vollmerhausen and R. Driggers, Bellingham, SPIE Press, Bellingham, WA, pp. 125–138, 2000. 3. http://www.geocities.com/CapeCanaveral/5409/planet_index.html, 2003. 4. J. Miller, Principles of Infrared Technology, Kluwer, New York, pp. 60, 61, 292, 1994. 5. N. Nguyn, P. Milanfar, “A Computationally Efficient Superresolution Image Reconstruction Algorithm,” IEEE Transactions on Image Processing, 10(4), April 2001. 6. Private communications with Rich Vollmerhausen, 2003.
RESOLUTION REQUIREMENT 1. For scanning sensors, the required system cutoff spatial frequency in cycles per milliradian (generally determined by the detector cutoff spatial frequency) can be estimated from 1.5NR F cutoff ≈ --------------w 2. For staring sensors, the required half-sample frequency in cycles per milliradian can be estimated from
16
Chapter One
0.85NR F half -sample ≈ -----------------w Fcutoff = = Fhalf-sample = = N=
required system resolution in units of cycles per milliradian 1/detector active area subtense half-sample rate of sensor 0.5/detector pitch required number of cycles or line pairs across the target (typically 1.5 for detection, 3 for recognition, and 6 for identification). w = square root of the target projected area in meters R = slant range in kilometers
Discussion
This rule assumes that the resolvable spatial frequency is 65 percent of the system's cutoff frequency for scanning systems. In some cases, the resolvable frequency can be nearer the cutoff frequency; the factor of 1.5 can vary from 1.5 down to 1.2. For staring sensors, the factor can vary from 0.7 to 1. That is, the resolved frequency normally is beyond the half-sample rate (sometimes called the Nyquist frequency).

The required resolution of an electro-optical system supporting a human operator depends on the target recognition (or detection) range and the number of pixel pairs or line pairs required for a human to perform the function at a given level. In general, humans can resolve frequencies in the range of 60 to 80 percent of the system's resolution. Also, see the rule in this chapter that describes the number of line pairs needed to resolve various types of targets.

As mentioned in a related rule in this chapter, a system that can resolve about five line pairs across a target can provide very effective target recognition. Using this rule, we obtain

F_r \approx \frac{1.5\, N R}{w} = \frac{(1.5)(5)(1)}{3} = 2.5

Therefore, for a 3-m target at a range of 1 km, the required number of cycles per milliradian for recognition is about 2.5.
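The rule inverts easily between required frequency and range. A small helper (our own illustration) captures both forms and reproduces the 2.5 cycles/mrad example above:

```python
def required_frequency(n_cycles, range_km, target_dim_m, scanning=True):
    """Required cutoff (scanning) or half-sample (staring) frequency in cycles/mrad."""
    k = 1.5 if scanning else 0.85
    return k * n_cycles * range_km / target_dim_m

def estimated_range(freq_cy_per_mrad, n_cycles, target_dim_m, scanning=True):
    """Invert the same relation to estimate the range (km) a given sensor supports."""
    k = 1.5 if scanning else 0.85
    return target_dim_m * freq_cy_per_mrad / (k * n_cycles)

print(required_frequency(5, 1.0, 3.0))     # ~2.5 cycles/mrad for the example in the text
print(estimated_range(10.0, 2, 4.0))       # the 13.3-km classification example given earlier
```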
References 1. B. Tsou, “System Design Considerations for a Visually Coupled System,” in Vol. 8, Emerging Systems and Technologies, S. Robinson, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham WA, pp. 520–521, 1993. 2. R. Vollmerhausen and R. Driggers, Analysis of Sampled Imaging Systems, SPIE Press, Bellingham WA, pp. 92–95, 2000. 3. NVTHERM Users Manual, ONTAR (www.ontar.com), 2000.
MTF SQUEEZE 1. The modulation transfer function (MTF) squeeze factor (SQ) for a target identification or recognition task is SQ = 1 – 0.5 SRR so that the MTF of a sampled imager is “squeezed” to become MTFSQ.
2. Equivalently, SQ can be used to adjust N50 or even the range itself. However, SQ should be applied only once (to MTF, N50, or the range predicted for a nonsampled imager):

N_{50\text{-}corrected} = \frac{ N_{50} }{ SQ }

or

3. Range_{SA} = Range_{NS} \cdot SQ

where RangeSA is the prediction for the sampled imager, and RangeNS is the range predicted for a nonsampled imager. SRR is the total spurious response ratio, SR(f) is the aliased content at spatial frequency f, and TR(f) is the system transfer response:

SRR = \frac{ \int SR(f) \sin(\theta)\, df }{ \int TR(f)\, df }, \qquad \theta = \tan^{-1}\!\left[ \frac{ SR(f) }{ TR(f) } \right]
Discussion The range performance of sampled imagers can be affected by sampling artifacts. For nonsampled imagers, the Johnson criteria are generally used to predict range. For sampled imagers, the Johnson criteria are again used, but the range is “squeezed” to account for the degrading effect of sampling. The “squeeze” can be applied in three ways, each essentially equivalent to the others. 1. The MTF can be squeezed as described below. 2. The N50 used for range prediction can be divided by the squeeze factor. 3. The range prediction itself can be squeezed by multiplying by the squeeze factor. Although staring arrays have significant signal-to-noise advantages over scanning systems, in the 1990s researchers in several organizations noted that some nonscanning (staring) systems were not performing as well as might be expected, based on scanning performance models. They traced the cause to sampling artifacts associated with staring focal plane arrays. A small presample blur results in aliased frequency content in the displayed image. A small presample blur is associated with either fast optics (so that the diffraction blur is small) or with a small detector fill factor. A small display blur, which results in raster or visible pixel edges, also causes sampling artifacts. This occurs with displayed images, regardless of wavelength (visible, infrared, MMW, and so on). Vollmerhausen and Driggers quantified this effect and included this in the NVTHERM performance model.1 As Vollmerhausen2 astutely explains, “MTF squeeze factor adjusts the predicted detection, recognition, or identification range for sampled imagers to account for sampling artifacts.” Figure 1.3 illustrates the transfer and spurious response for a sampled imager. The presample blur MTF is sampled, resulting in replicas of the presample MTF at all multiples of the sample frequency. The display MTF multiplies the presample MTF, resulting in the transfer response of the system. The display MTF multiplies the presample MTF replicas, resulting in the aliased content—also called the spurious response. The transfer response and spurious response are used to calculate the SRR and the resulting squeeze factor SQ.
FIGURE 1.3 Sampling results in replicas of presample MTF at multiples of sample frequency. The display MTF multiplies the presample MTF to form the transfer response. The display MTF multiplies the replicas to form the spurious response (aliasing). (From Ref. 3.)
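The squeeze factor can be computed directly from sampled-imager responses. The sketch below (our own illustration; the Gaussian presample MTF, the sinc display MTF, and the single first-order replica are assumed stand-ins for a real system's curves) evaluates the SRR integral and the resulting SQ:

```python
import numpy as np

f = np.linspace(0.0, 2.0, 2001)            # spatial frequency in units of the sample frequency
fs = 1.0                                    # sample frequency (normalized)

pre = np.exp(-(f / 0.9) ** 2)               # presample blur MTF (optics + detector), assumed
disp = np.abs(np.sinc(f / 1.4))             # display/reconstruction MTF, assumed

tr = pre * disp                             # transfer response TR(f)
replica = np.exp(-((f - fs) / 0.9) ** 2)    # first-order replica of the presample MTF
sr = replica * disp                         # spurious (aliased) response SR(f)

theta = np.arctan2(sr, tr)
srr = np.trapz(sr * np.sin(theta), f) / np.trapz(tr, f)
sq = 1.0 - 0.5 * srr
print(f"SRR = {srr:.3f}, squeeze factor SQ = {sq:.3f}")
# Apply SQ once: e.g., N50_corrected = N50 / SQ, or Range_SA = Range_NS * SQ
```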
References 1. NVTHERM Users Manual, ONTAR, 2000. 2. Private communications with Richard Vollmerhausen, 2003. 3. R. Vollmerhausen and R. Driggers, Analysis of Sampled Imaging Systems, SPIE Press, Bellingham, WA, p. 92–95, 2000.
PSYCHOMETRIC FUNCTION
The psychometric function is well matched by a Weibull function, as follows:1

P(x) = 1 - (1 - \gamma) \cdot 2^{ -\left( \frac{x}{\alpha} \right)^{\beta} }

where
P = the fraction of correct responses
x = the stimulus strength
β = a parameter that determines the steepness of the curve
γ = guess rate (0.50)
α = stimulus strength at which 75 percent of the responses are correct
Discussion Reference 1 points out that “the probability of a correct response (in a detection task) increases with stimulus strength (e.g., contrast). If the task is impossible because the contrast is too low, the probability of a correct response is 50 percent (guess rate), and if the
task is easy, the observer score will be 100 percent correct. The relationship between stimulus strength and probability of a correct response is called the psychometric function." An example is plotted in Fig. 1.4. Threshold is defined as the stimulus strength at which the observer scores a predefined correct level (e.g., 75 percent). This figure plots the function for various values of β. These plots assume a γ value of 0.5 and an α value of 3 (hence, they all converge at that stimulus point). Similarly, Refs. 2 and 3 give a related value for the probability of detection (Pd) as a function of contrast:

P_d \cong \frac{1}{2} \pm \frac{1}{2}\left\{ 1 - \exp\!\left[ -4.2\left( \frac{C}{C_t} - 1 \right)^{2} \right] \right\}

where
C = the contrast of interest, other than Ct
Ct = the threshold contrast for 50 percent correct detection

The minus sign is used when C < Ct.

The probability (P1) that a human will search a field that is known to contain one target and lock onto the target with his foveal vision (see Chap. 8, "The Human Eye," for a discussion of the fovea) for a sufficient time (say, 1/4 of a second) is difficult to estimate, but Refs. 2 and 3 suggest a relationship of

P_1 = 1 - \exp\left[ -( 700/G )( a_t / a_s )\, t \right]

where
at = area of the target
as = area to be searched
FIGURE 1.4 Example of the psychometric function. The fraction of correct responses gradually increases with stimulus strength (e.g., contrast) from 50 percent (representing chance) to 100 percent. The threshold is defined as the stimulus strength at which the observer scores 75 percent correct. The threshold is independent of the decision criterion of the observer. The various plotted lines in the figure are for different betas from 2 to 8.
t = time
G = a congestion factor, usually between 1 and 10

The total probability of detection is P_1 P_d \eta, where η is an overall degradation factor arising from noise.
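The Weibull form is simple to evaluate. The sketch below (our own illustration; α and β are arbitrary example values) confirms that, with γ = 0.5, the function passes through 0.75 at x = α:

```python
def psychometric(x, alpha, beta, gamma=0.5):
    """Weibull psychometric function: fraction of correct responses vs. stimulus strength."""
    return 1.0 - (1.0 - gamma) * 2.0 ** (-(x / alpha) ** beta)

alpha, beta = 3.0, 3.5          # assumed threshold and steepness
for x in (1.0, 2.0, 3.0, 4.0, 6.0):
    print(f"stimulus {x:3.1f}: P = {psychometric(x, alpha, beta):.3f}")
# psychometric(alpha, alpha, beta) = 0.75 by construction when gamma = 0.5
```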
References 1. P. Bijl and J. Valeton, “Bias-Free Procedure for the Measurement of the Minimum Resolvable Temperature Difference and Minimum Resolvable Contrast,” Optical Engineering 38(10): 1735–1742, October 1999. 2. Electro-Optics Handbook, Burle Inc., Lancaster, PA, pp. 120–124, 1974, http:// www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003. 3. H. Bailey, “Target Detection Through Visual Recognition: A Quantitative Model,” Rand Corporation, Santa Monica, CA, February 1970. 4. P. Bijl, A. Toet, and J. Valeton, “Psychophysics and Psychophysical Measurement Procedures—Introduction,” in The Encyclopedia of Optical Engineering, R. Driggers, Ed., Marcel Dekker, New York, 2003. 5. K. Brunnstrom, B. Schenkman, and B. Jacobson, “Object Detection in Cluttered Infrared Images,” Optical Engineering 42(2): 2003, pp. 388–399.
RAYLEIGH CRITERION The Rayleigh criterion states that the minimum angular separation at which two point sources can be resolved occurs when the peak of one of the Airy disks falls at the position of first minima of the other.
Discussion
Lord Rayleigh, in an early attempt at quantifying resolution, developed this criterion for prism and grating spectroscopes. It says that one can determine two distinct, equal intensity point sources if the peak of one of the Airy disks (as produced by an optical instrument) falls at the first dark ring of the other. The distribution from the second point source peaks at this dark band of the distribution from the first point source (Fig. 1.5). When the energy is added, as is the case for most sensors, a distinct "saddle" results with a dip between the two peaks (Fig. 1.6). The energy distribution of a diffraction-limited Airy disk follows a Bessel function (see the associated rule about the Airy disk in Chap. 13, "Optics"), the general function being

\frac{ 4 J_1^2(x) }{ x^2 }

The addition of the intensity peaks from intersecting Airy disks results in a simple saddle with one minimum. For grating systems of unobscured, diffraction-limited, monochromatic light with each point source having the same intensity, the minimum (in the middle of the saddle) has an intensity of 8/π², or 0.811 of the peak intensity. For circular apertures (which are more common), it turns out that the saddle is somewhat less. As shown in Fig. 1.6, for circular apertures, it is much closer to 0.73.
FIGURE 1.5 Intersection of two monochromatic Airy disks.
FIGURE 1.6 When the Rayleigh criterion is met, the addition of the intensity of the two Airy disks produces two peaks with a minimum between them. This figure illustrates the Rayleigh criterion for two point sources viewed through a circular aperture.
For unobscured, circular, diffraction-limited optics, this works out to some simple equations that can be used to provide quick "on-your-feet" estimations. For example, the angular diffraction blur spot radius is equal to

1.22 ( \lambda / d )

where
λ = wavelength
d = aperture diameter
In terms of distance on the focal plane, this is 1.22 λ (f/#), and in terms of numerical aperture (NA), it reduces to 0.61 λ/NA.

Another related criterion is the Sparrow limit. This limit indicates that resolution can be achieved with a closer spacing of the Airy disks. The Sparrow limit is defined as 0.47 λ/NA.
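For quick estimates, these expressions reduce to a few lines. The values below (the wavelength, aperture, and f-number are arbitrary example choices) compare the Rayleigh and Sparrow separations:

```python
wavelength = 0.55e-6        # m, visible light (assumed)
aperture = 0.05             # m (assumed)
f_number = 4.0              # assumed
na = 1.0 / (2.0 * f_number) # paraxial approximation

rayleigh_angle = 1.22 * wavelength / aperture       # radians
rayleigh_focal = 1.22 * wavelength * f_number       # meters at the focal plane
rayleigh_na = 0.61 * wavelength / na                # same quantity via NA
sparrow = 0.47 * wavelength / na                    # Sparrow limit

print(f"Rayleigh angle:      {rayleigh_angle * 1e6:.1f} microradians")
print(f"Rayleigh separation: {rayleigh_focal * 1e6:.2f} um (via NA: {rayleigh_na * 1e6:.2f} um)")
print(f"Sparrow separation:  {sparrow * 1e6:.2f} um")
```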
RESOLUTION REQUIRED TO READ A LETTER
To reliably read an English alphanumeric character (letter), the system should clearly display at least 2.8 cycles along the letter's height.
Discussion By definition, one’s ability to read an alphanumeric depends on resolution. Generally, letters have a high probability of being correctly read when between 2.5 and 3 cycles across the height of the letter can be resolved by the human viewing the letter (whether printed or on a display). The curve of identifying the letter is very steep between 2 and 3 cycles; with 2 or less, it is practically indistinguishable; and it is almost always readable when 3 or more cycles are displayed, as illustrated in Figs. 1.7 and 1.8. One of the authors (Miller) has conducted several field experiments leading to the results presented in the figures. You can prove this for yourself by observing Fig. 1.7. It is almost impossible to read the “letters” on the low-resolution image, yet it is easy to read those of the high-resolution image. Of course, the ability to resolve two or three cycles across the letter depends on its distance from the observer and the quality of the imaging system (eyes). This rule is based on English block letters only; this does not apply to highly detailed language characters such as Kanji, Hindi, and Arabic symbols. A cycle defines the resolution required to separate high-contrast white and black lines, generally 2 pixels (ignoring phase). Assuming a Kell factor of approximately 0.7, digital imaging requires about 4 cycles or 8 pixels across the height of the letter (see the associated “Kell Factor Rule” in Chap. 7, “Displays”).
FIGURE 1.7 Images of letters at various resolutions. When these were originally displayed, they were of 2, 2.5, 3, and 4 cycles, respectively; however, they have suffered an additional resolution loss through the frame grabbing and book printing process.
FIGURE 1.8 The numbers of correct readings of an alphanumeric as a function of cycles. The plot represents data from 50 individuals viewing three different letters (C = diamonds, F = triangles, and B = squares).
In addition to the empirical evidence of Figs. 1.7 and 1.8, theoretical analysis supports these assertions. Figure 1.9 clearly indicates that the power spectral content of English alphanumerics is concentrated below four cycles per letter height. The requirement for a human to successfully read these alphanumerics is somewhat lower and is demonstrated by empirical results to be about 2.5 cycles for about a 50 percent success rate and 3 cycles for a much higher success rate. Figure 1.10 is a cumulative plot of the standard deviation of the power in Fourier space for the entire alphabet and numerals in the Helvetica font.
FIGURE 1.9 Contour plot of the Fourier transform of the entire “Helvetica” alphanumeric set. This is based on a 32 × 32 pixel “frame” where the letter height occupied all of the vertical 32 pixels.
FIGURE 1.10 Cumulative distribution of the standard deviation of the 36 alphanumeric characters in the “Helvetica” font.
This represents the power spectral content of the differences between the characters. About half of this distribution falls below 2.5 cycles, indicating that most letters can be distinguished with such resolution.
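The cycle requirement converts directly into pixel and range requirements. The sketch below (our own illustration; the letter height and sensor IFOV are arbitrary assumed values) applies the 2.8-cycle criterion with a 0.7 Kell factor:

```python
cycles_needed = 2.8            # cycles across the letter height (this rule)
kell_factor = 0.7              # see the Kell factor rule in Chap. 7
pixels_needed = 2 * cycles_needed / kell_factor
print(f"pixels across letter height: {pixels_needed:.0f}")     # about 8

letter_height_m = 0.10         # assumed 10-cm letter
ifov_mrad = 0.05               # assumed pixel IFOV
max_range_m = letter_height_m / (pixels_needed * ifov_mrad * 1e-3)
print(f"maximum reading range: {max_range_m:.0f} m")           # about 250 m for these values
```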
References 1. J. Miller and J. Wiltse, “Resolution Requirements for Alphanumeric Readability,” Optical Engineering, 42(3), pp. 846–852, March 2003. 2. W. A. Smith, Modern Optical Engineering, McGraw-Hill, New York, p. 355, 1990. 3. J. Wiltse, J. Miller, and C. Archer, “Experiments and Analysis on the Resolution Requirements for Alphanumeric Readability,” Proc. SPIE, Vol. 5076, Infrared Imaging Systems: Design, Analysis, Modelling and Testing, XIV, April 2003. 4. Private communication with Dr. John Wiltse and Dr. Cynthia Archer, 2003.
SUBPIXEL ACCURACY
One can determine the location of a blur spot on the focal plane to an accuracy that equals the resolution divided by approximately the signal-to-noise ratio, or1

A_{LOS} \approx \text{Constant} \times \frac{ \text{Angular Limit} }{ SNR }

where
ALOS = line-of-sight noise (or tracking accuracy), sometimes called the noise equivalent angle
Angular Limit = the larger of the diffraction limit, blur spot, and pixel footprint
SNR = signal-to-noise ratio
Constant = a constant, generally 0.5 to 1
Discussion A system using centroiding or other higher processing techniques can locate a target to an angular position of roughly the optical resolution of the system divided by the signal-tonoise ratio. This is valid for SNRs up to about 100. Beyond 100 or 200, a variety of other effects limit the noise equivalent angle. The limit in performance of such systems is about 1/100 of the pixel IFOV, although higher performance has been demonstrated in some specialized scientific systems with much calibration. Of course, the minimum pixel SNR must exceed about 5, or the target will not be reliably tracked. Often, this subpixel accuracy does not work at SNR of less than 5 or so. This rule is based on empirical performance of existing systems. In a staring system, the target rarely falls centered on a given pixel. Usually, it is split between two or four pixels. The signal on the adjacent pixels can be used to better define its location. A number of people have calculated the theoretical limit of subpixel tracking using the method of centroids. In this analysis, it is assumed that the light from the target is projected onto a number of pixels in the focal plane, with the minimum being 4 for a staring system (scanning systems can use fewer pixels, because the detector can move over the image). This means that the focal plane array (FPA) tends to performs like the quad cells used in star trackers and other nonimaging tracking systems. By measuring the light falling in each of the four cells, the centroid of the blur can be computed. This superresolution requires that the signal from each pixel in the focal plane be made available to a processor that can compute the centroid of the blur. In advanced systems, either electronic or optical line-of-sight stabilization is employed to ensure that each measurement is made with the blur on the same part of the focal plane. This eliminates the effect of noise that results from nonuniformity in pixel responsivity and noise. Results do not include transfer errors that build up between coordinates at the focal plane and the final desired frame of reference. We find the following variants of the above rule, first from Ref. 2: π1.22λ Subpixel resolution = -----------------8DSNR where λ = wavelength D = aperture diameter This means that the constant referred to at the beginning of this rule is equal to about 0.39. Also, Ref. 3 gives 3πλ --------------------16DSNR and Ref. 4 has included position tracking error and defined it as Pixel IFOV θLSB = ----------------------------SNR where
θLSB = least significant bit of the finest resolution
Pixel IFOV = instantaneous field of view of a pixel
In a scanning system, the blur circle produced by the target at the focal plane is scanned over the FPA element. The FPA element can be sampled faster than the time it takes the
blur to move over the FPA. This provides a rise-and-fall profile with which the location can be calculated to an accuracy greater than the pixel footprint or blur extent. The higher the SNR, the faster the samples can be and the more accurate the amplitude level will be, both increasing the accuracy of the rise-and-fall profile. Lloyd5 points out that the accuracy for a cross scan, or when the only knowledge is that the target location falls within a detector angular subtense (DAS), is

DAS/√12

Additionally, the angular resolution is

0.31λ/(D SNR)

Also, McComas6 provides this form:

σo = (0.61λ/d)(1/SNRave)

Lastly, the measurement error in the angular separation of two objects is just like the rule above except that, if the SNRs are approximately the same, then

SNR = SNR1 or 2/√2

and the angular resolution is

√2 × (pixel field of view)/SNR
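As a rough illustration, the Python sketch below evaluates the basic rule and the Ref. 2 variant for an assumed aperture, wavelength, and SNR; the specific values and function names are ours and are meant only to exercise the equations.

```python
import math

def noise_equivalent_angle(angular_limit_rad, snr, constant=0.5):
    """Basic rule: NEA ~ constant * (angular limit) / SNR, useful for 5 < SNR < ~100."""
    return constant * angular_limit_rad / snr

def subpixel_resolution_ref2(wavelength_m, aperture_m, snr):
    """Ref. 2 variant: 1.22*pi*lambda / (8*D*SNR), i.e., about 0.39 of the diffraction limit / SNR."""
    return 1.22 * math.pi * wavelength_m / (8.0 * aperture_m * snr)

# Example (assumed values): 10-cm aperture, 4-um MWIR wavelength, SNR of 20
aperture = 0.10          # m
wavelength = 4.0e-6      # m
snr = 20.0
diff_limit = 1.22 * wavelength / aperture    # diffraction-limited blur, rad
print(noise_equivalent_angle(diff_limit, snr))      # ~1.2 microradians
print(subpixel_resolution_ref2(wavelength, aperture, snr))   # ~1.0 microradian
```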
References
1. J. Miller, Principles of Infrared Technology, Kluwer, New York, pp. 60–61, 1994.
2. K. J. Held and J. D. Barry, “Precision Optical Pointing and Tracking from Spacecraft with Vibrational Noise,” Proc. SPIE, Vol. 616, Optical Technologies for Communication Satellite Applications, 1986.
3. M. Shao and M. Colavita, “Long-Baseline Optical and Infrared Stellar Interferometry,” Annual Review of Astronomy and Astrophysics, Vol. 30, pp. 457–498, 1992.
4. C. Stanton et al., “Optical Tracking Using Charge Coupled Devices,” Optical Engineering, Vol. 26, pp. 930–938, September 1987.
5. J. M. Lloyd, “Fundamentals of Electro-Optical Imaging Systems Analysis,” in Vol. 4, Electro-Optical Systems Design, Analysis, and Testing, M. Dudzik, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 42–44, 1993.
6. B. K. McComas and E. Friedman, “Wavefront Sensing for Deformable Space-Based Optics Exploiting Natural and Synthetic Guide Stars,” Optical Engineering, 41(8), August 2002.
NATIONAL IMAGE INTERPRETABILITY RATING SCALE CRITERIA

TABLE 1.4 National Image Interpretability Rating Scale Criteria (examples of exploitation tasks by NIIRS rating level)

NIIRS 0:
Visible: Interpretability of the imagery is precluded by obscuration, degradation, or very poor resolution.
Multispectral: Interpretability of the imagery is precluded by obscuration, noise, poor registration, degradation, or very poor resolution.
Infrared: Interpretability of the imagery is precluded by obscuration, noise, degradation, or very poor resolution.

NIIRS 1 (Over 9 m GSD*):
Visible: Distinguish between major land use classes (urban, forest, water, etc.); detect a medium-size port facility; distinguish between taxiways and runways.
Multispectral: Distinguish between urban and rural areas; identify a large wetland (greater than 100 acres); delineate coastal shoreline.
Infrared: Distinguish between runways and traffic ways; detect large areas (greater than 1 km²) of marsh or swamp.

NIIRS 2 (4.5 to 9 m GSD):
Visible: Detect large buildings (hospitals, factories); detect military training areas.
Multispectral: Detect multilane highways; detect strip mining; delineate extent of cultivated land.
Infrared: Detect large aircraft and large buildings; distinguish between naval and commercial port facilities.

NIIRS 3 (2.5 to 4.5 m GSD):
Visible: Detect individual houses in residential neighborhoods; detect trains on tracks, but not individual cars; detect a helipad; identify a large surface ship in port by type.
Multispectral: Detect vegetation/soil moisture differences along a linear feature; identify golf courses; detect reservoir depletion.
Infrared: Distinguish between large (707, A300) and small (A-4, L-39) aircraft; distinguish between freighters and tankers of 200 m or more in length; identify individual thermally active flues running between the boiler hall and smoke stacks at a thermal power plant.

NIIRS 4 (1.2 to 2.5 m GSD):
Visible: Identify farm buildings as barns, silos, or residences; identify, by general type, tracked vehicles and field artillery; identify large fighters by type.
Multispectral: Distinguish between two-lane improved and unimproved roads; detect small boats (3 to 5 m) in open water.
Infrared: Identify the wing configurations of small fighter aircraft; detect a 50-m² electrical transformer yard in an urban area.

NIIRS 5 (0.75 to 1.2 m GSD):
Visible: Detect large animals in grasslands; identify a radar as vehicle mounted or trailer mounted.
Multispectral: Detect an automobile in a parking lot; detect disruptive or deceptive use of paints or coatings on buildings at a ground force installation.
Infrared: Distinguish between single-tail and twin-tail fighters; identify outdoor tennis courts.

NIIRS 6 (0.4 to 0.75 m GSD):
Visible: Identify individual telephone/electric poles in residential neighborhoods; identify the spare tire on a medium-size truck.
Multispectral: Detect a foot trail through tall grass; detect recently installed minefields in ground forces deployment areas; detect navigational channel markers and mooring buoys in water.
Infrared: Distinguish between thermally active tanks and APCs; distinguish between a two-rail and a four-rail launcher; identify thermally active engine vents atop diesel locomotives.

NIIRS 7 (0.2 to 0.4 m GSD):
Visible: Identify individual railroad ties; identify fitments and fairings on a fighter-size aircraft.
Multispectral: Detect small marine mammals on sand or gravel beaches; distinguish crops in large trucks; detect underwater pier footings.
Infrared: Identify automobiles as sedans or station wagons; identify antenna dishes on a radio relay tower.

NIIRS 8 (0.1 to 0.2 m GSD):
Visible: Identify windshield wipers on a vehicle; identify rivet lines on a bomber aircraft.
Multispectral: Recognize the class of chemical species on small surfaces such as human limbs or small containers.
Infrared: Identify limbs on a person; detect closed hatches on a tank turret.

NIIRS 9 (Less than 0.1 m GSD):
Visible: Identify individual barbs on a barbed wire fence; detect individual spikes in railroad ties; identify vehicle registration numbers.
Multispectral: Identify chemical species on limbs of a person or on the surface of small containers.
Infrared: Identify individual rungs on bulkhead-mounted ladders; identify turret hatch hinges on armored vehicles.

*GSD refers to ground sample distance, a measure of resolution.
Discussion
The National Image Interpretability Rating Scale (NIIRS, see Table 1.4) was developed by the reconnaissance and remote sensing community for evaluating and grading “perception-based” image quality from airborne and space platforms, usually viewing at a near-nadir angle. Introduced in 1974, its application is to quickly convey the “quality and usefulness” of an image to analysts, without going through detailed MTF analysis and without ambiguities tied to “resolution” and photographic scale. It provides a standardized method for a number of photointerpreters (PIs) to agree on the information content in an image. Presumably, a number of them, all shown the same image, would give the picture
about the same interpretability score. In the late 1990s and early 2000s, NIIRS began appearing more and more in discussions of the performance of tactical airborne imagers as well as other nontraditional, nonintelligence systems. NIIRS is defined and developed under the auspices of the U.S. government’s Imagery Resolution Assessments and Reporting Standards (IRARS) committee. It is largely resolution based, assuming there is sufficient contrast. The NIIRS image rating system is a scale from 0 to 9, with 9 having the most image detail and content. Generally, it is defined only in whole integers, but sometimes one will see a fractional value, referred to as ∆NIIRS. Fiete1 states, “A ∆NIIRS that is less than 0.1 is usually not perceptible and does not impact the interpretability of the image, whereas a ∆NIIRS above 0.2 NIIRS is easily perceptible.” The scale is more qualitative than the Johnson criteria or detailed MTF analysis, and it is less easy to model, quantify, or argue in a meeting, although we include a rule elsewhere in this chapter that explains how to estimate NIIRS from the optical properties of the sensor. The qualitative nature does allow for flexibility, and sometimes a custom NIIRS scale will be established for a given mission or objective. However, NIIRS easily and quickly conveys the level of image content to nontechnical people.
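For quick estimates, the GSD bands of Table 1.4 can be folded into a simple lookup, as in the Python sketch below. This is only a resolution-based approximation of the table; an actual NIIRS assessment also depends on contrast, noise, and the judgment of trained analysts, and the band edges are treated here as hard thresholds only for convenience.

```python
def niirs_from_gsd(gsd_m):
    """Approximate NIIRS level implied by ground sample distance (meters), per Table 1.4 bands."""
    # Upper GSD limit (m) for each level 2..9; anything coarser than 9 m is level 1.
    upper_bounds = [(9.0, 2), (4.5, 3), (2.5, 4), (1.2, 5),
                    (0.75, 6), (0.4, 7), (0.2, 8), (0.1, 9)]
    level = 1
    for bound, lvl in upper_bounds:
        if gsd_m <= bound:
            level = lvl
    return level

print(niirs_from_gsd(3.0))   # -> 3 (falls in the 2.5 to 4.5 m band)
print(niirs_from_gsd(0.3))   # -> 7 (falls in the 0.2 to 0.4 m band)
```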
References
1. R. Fiete, “Image Quality and λFN/p for Remote Sensing,” Optical Engineering, Vol. 38, pp. 1229–1240, July 1999.
2. http://www.physics.nps.navy.mil/appendices.pdf, 2003.
3. http://www.fas.org/irp/imint/niirs.htm, 2003.
4. J. Lubin et al., Vision Model-Based Assessment of Distortion Magnitudes in Digital Video, 2002, http://www.mpeg.org/MPEG/JND, 2003.
Chapter 2
Astronomy
This chapter contains a selection of rules specifically involving the intersection of the disciplines of astronomy and electro-optics (EO). Sensors frequently look upward, so astronomical objects often define the background for many systems. Moreover, many sensors are specifically designed to detect heavenly bodies, so astronomical relationships define the targets for many sensors. Over the past few hundred years, astronomy has driven photonics and optics. Likewise, photonics and optics have enabled modern astronomy. The disciplines have been as interwoven as DNA strands. Frequently, key discoveries in astronomy are impossible until photonic technology develops to a level that permits them. Conversely, photonic development often has been funded and refined by the astronomical sciences as well as the military. Military interests have been an important source of new technology that has furthered the application of electro-optics in astronomy. The authors contend that one of the most important contributions of the Strategic Defense Initiative (SDI) was the advancement of certain photonic technologies that are currently benefiting astronomers. Some of these include adaptive optics, synthetic guide stars, large and sensitive focal planes, advanced materials for astronomical telescopes, new methods of image stabilization, and advanced computers and algorithms for interpreting images distorted by atmospheric effects. The new millennium will include a host of new-technology telescopes that may surpass space-based observation capabilities (except in the spectral regions where the atmosphere strongly absorbs or scatters). The two Keck 10-m telescopes represent an amazing electrooptical engineering achievement. By employing segmented lightweight mirrors and lightweight structure, and by adjusting the mirrors in real time, many of the past notions and operating paradigms of both ground-based and space-based telescopes have been discarded. Soon, the Kecks will be eclipsed by larger and more powerful phased arrays of optical telescopes, all using new photonic technology that was not available 20 years ago. This new emphasis on novel technology applied to Earth-based telescopes represents a major addition to the astronomical community’s toolbox and a shift in the electro-optical and astronomical communities’ perceptions. In the near future, these high-technology telescopes, coupled with advanced precision instruments, will provide astronomers with new tools to make new and wondrous discoveries. Of course, there is no inherent reason why the technologies used in ground telescopes cannot be used in space. In fact, the next generation of science telescopes will
feature these advances. For example, at this writing, the James Webb Space Telescope (formerly the Next Generation Space Telescope) will exploit a segmented, actuated primary mirror. In honor of the important role that adaptive optics now plays in ground-based astronomy and may soon play in space astronomy, we have included a number of rules on that topic.

For the reader interested in more details, there are myriad observational astronomy books, but few deal directly with observational astronomy using electro-optics. For specific EO discussions, an uncoordinated smattering can be found throughout SPIE’s Infrared and Electro-Optical Systems Handbook, Janesick’s Scientific Charge Coupled Devices, and Miller’s Principles of Infrared Technology. Additionally, Schroeder’s Astronomical Optics addresses many principles in detail. A recent addition to the library is Bely’s The Design and Construction of Large Optical Telescopes. Do not forget to check the journals, as they seem to have more material relating to EO astronomy than do books. SPIE regularly has conference sessions on astronomical optics, instruments, and large telescopes, and the American Astronomical Society has regular conferences that feature many EO-related papers. Additionally, there are many good articles and papers appearing in Sky and Telescope, Infrared Physics, and the venerable Astrophysical Journal. Finally, do not overlook World Wide Web sites on the Internet; much good information is made available by several observatories.
ATMOSPHERIC “SEEING” Good “seeing” from the ground is seldom much better than about 1 arc second (arcsec) (or about 5 µrad).
Discussion The inhomogeneous and time-varying refractive index of the atmosphere degrades images of distant objects. The varying atmosphere induces wavefront tilt (apparent displacement of the target), scintillation (fluctuating apparent brightness of the target), and wavefront aberrations (blurring). The combination of these effects is called “seeing.” Typical seeing obtainable on good nights at high-altitude observatories is approximately 1 arcsec (about 5 µrad). This empirical limit is imposed by the atmosphere; it is not strongly related to the aperture of the optics. Common amateur telescopes’ apertures of 10 cm or less are well matched to the atmosphere in the sense that larger apertures do not permit finer resolution. A small-aperture telescope is sensitive to wavefront tilts, which are manifest as images that seem to move around over time intervals of one-tenth of a second or so. Large-aperture telescopes such as used by professional astronomers are sensitive to additional aberrations caused by the atmosphere, which are manifest as fuzzy images that appear to boil. Over the long exposures typically employed, the boiling is averaged out, and the fuzzy images have typical angular extents of an arcsec. Large apertures do, of course, collect more light, some of which can be used to control active optical elements that can undo much of the effect of bad atmospheric seeing. Large telescopes also tend to be deliberately sited where the seeing has been measured to be good—mountaintops, for example. Seeing is better at high altitudes and at longer wavelengths. Bad sites and bad weather (lots of convection) make seeing worse. Seeing tends to improve with wavelength (to something like the 6/5th power); that is, the seeing angle gets smaller (better) as the wavelength increases. Also see the various rules in the Chap. 3, “Atmospherics,” particularly the Fried parameter rule of resolution. Although very rare, seeing may approach 0.1 to 0.2 arcsec (or around 0.5 to 1 µrad) with ideal conditions.
BLACKBODY TEMPERATURE OF THE SUN
Consider the Sun to be a 6000 kelvin (K) blackbody.
Discussion
The Sun is a complex system. It is a main-sequence star (G2) of middle age. Almost all of the light we see is emitted from a thin layer at the surface. Temperatures can reach over 10 million degrees in the center. However, at the surface, the temperature is usually something under 6,000 K, with the best fit to a blackbody curve usually at 5770 K. The Sun appears as a blackbody with a temperature anywhere between about 5750 K and 6100 K, depending on the wavelength of observation chosen for matching the various candidate blackbody curves. At wavelengths shorter than the visible, the Sun appears somewhat brighter than the temperatures shown above would suggest. The Sun (like all other stars) does have some absorption and emission lines, so using a blackbody approximation is valid only for broad bandpasses. The absorption lines are well documented and can be found in collections of Fraunhofer line shapes and depths. The lines result from various metals in the outer atmosphere of the Sun, with iron, calcium, and
sodium causing some of the most prominent lines. Earth’s atmosphere strongly absorbs some wavelengths, so solar radiation reaching the surface may not resemble a blackbody spectrum for those wavelength bands. The above is from a general curve fit for wide bandpasses, disregarding atmospheric absorption. A star’s blackbody temperature based on spectral type is (to first order) approximately:
B: 27,000 K
A: 9900 K
F: 7000 K
G: 5900 K
K: 5200 K
M: 3800 K
It is likely an accident that the peak of the Sun’s radiation is well matched to a major transmission window of the atmosphere. On the other hand, it is no accident that the peak performance of the human vision system is well matched to the solar radiation that reaches the ground. Evolution of visual systems has assured that performance is best around 555 nm. Of course, due to the absorption properties of the atmosphere, the Sun deviates significantly from blackbody properties when seen from the ground.
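Because the rule amounts to replacing the solar spectrum with a Planck curve, a short calculation is often all that is needed. The Python sketch below evaluates the blackbody spectral radiance at an assumed 5770 K; the constants are rounded, and the function name is ours.

```python
import math

H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
K_B = 1.381e-23  # Boltzmann constant, J/K

def planck_spectral_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance, W / (m^2 * sr * m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * K_B * temp_k)) - 1.0
    return a / b

# Compare a 5770 K blackbody (Sun) at 0.55 um and 10 um
for wl in (0.55e-6, 10e-6):
    print(wl, planck_spectral_radiance(wl, 5770.0))
```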
DIRECT LUNAR RADIANCE
In the visible wavelength range, the dominant signal from the Moon is the reflectance of sunlight. This is expressed in the form of its radiance, L,

Lreflected(λ) = Lbb(λ, 5900 K) Ω Rm(λ)/π

where the equation specifically notes the approximate blackbody temperature of the Sun (5900 K). Rm is the reflectivity of the Moon, which has typical values of 0.1 in the visible wavelengths, 0.3 for 3 to 6 µm, and 0.02 for 7 to 15 µm. Ω is the solid angle formed by the Moon when viewed from Earth. For the infrared, the Moon’s thermal emission must also be considered, as it can be the dominant source of photons in some bands.
Discussion
The Moon is an important (and sometimes dominant) source of radiation in the night sky. Its signature includes radiation ranging from the visible to the infrared, so all types of sensors must be designed to tolerate its presence. Many sensors (such as image intensifiers and low-light-level cameras) exploit this light. The total radiance seen when viewing the Moon is the superposition of emitted radiation, reflection of solar radiation, and emission from the atmosphere.

Lmoon(λ) = τatm(λ)[Lreflected(λ) + Lemitted(λ)] + Latm(λ)

where
τatm = transmission of the atmosphere
Lemitted = emitted (thermal) radiance of the Moon
Latm = radiance of the atmosphere
The infrared signature from the full Moon is defined by its apparent blackbody temperature of 390 K. Anyone using the following equation should take note that the actual temperature of the Moon depends on the time elapsed since the location being imaged was last
illuminated by the Sun. This can result in a substantial difference from the following equation, but it is good enough if you don’t know the details of the lunar ephemeris.
Lemitted(λ) = ε(λ) Lbb(λ, 390 K)

The spectral emissivity, ε(λ), in the equation above can be estimated by using the reflectivity numbers quoted previously, remembering that 1 − R = ε. As a result of changes in the distance from Earth to the Moon, the solid angle of the Moon seen from Earth is

Ω = 6.8 × 10⁻⁵ sr (with variation from 5.7 × 10⁻⁵ to 7.5 × 10⁻⁵)
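A minimal numerical sketch of the reflected-plus-emitted model is given below in Python. The 5900 K solar temperature, 390 K full-Moon temperature, reflectivities, and solid angle are the values quoted above; the function and variable names are ours, and the atmosphere is ignored.

```python
import math

H, C, K_B = 6.626e-34, 2.998e8, 1.381e-23

def planck_radiance(wl_m, temp_k):
    """Blackbody spectral radiance, W / (m^2 * sr * m)."""
    return (2*H*C**2/wl_m**5) / (math.exp(H*C/(wl_m*K_B*temp_k)) - 1.0)

OMEGA_MOON = 6.8e-5   # solid angle of the Moon from Earth, sr

def lunar_radiance(wl_m, reflectivity, moon_temp_k=390.0):
    """Reflected-sunlight plus thermal-emission radiance of the full Moon (no atmosphere)."""
    emissivity = 1.0 - reflectivity                     # 1 - R = emissivity
    reflected = planck_radiance(wl_m, 5900.0) * OMEGA_MOON * reflectivity / math.pi
    emitted = emissivity * planck_radiance(wl_m, moon_temp_k)
    return reflected + emitted

# Visible (R ~ 0.1) versus LWIR (R ~ 0.02); thermal emission dominates in the LWIR
print(lunar_radiance(0.55e-6, 0.1))
print(lunar_radiance(10e-6, 0.02))
```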
Reference
1. J. Shaw, “Modeling Infrared Lunar Radiance,” Optical Engineering, 38(10), pp. 1763–1764, October 1999.
NUMBER OF ACTUATORS IN AN ADAPTIVE OPTIC
To correct for atmospheric turbulence effects, an adaptive optic system needs a minimum number of actuators. The number of actuators required, if evenly spaced over the adaptive optic, is approximately

(Telescope Aperture/r0)²

where r0 is the form of Fried’s parameter used for spherical waves,

r0 = 3.024 (k² Cn² L)^(–3/5)

For plane waves, such as are received from starlight,

r0 = 1.68 (k² Cn² L)^(–3/5)

In addition,
L = distance the light propagates through the disturbing atmosphere
k = 2π/λ
Cn² = atmospheric structure coefficient, which, in its simplest form, is equal to about 10⁻¹⁴ m⁻²/³
λ = wavelength
Discussion These results derive directly from the turbulence theory of the atmosphere, which has been described elsewhere in this book. Considerable effort has gone into confirming the accuracy of the theory, as described in the introduction to this chapter and several of the rules. The results shown here are for the simplifying situation in which the properties of the atmosphere are assumed to be constant over the path through which the light propagates. It also assumes that r0 is smaller than the aperture that is equipped with the adaptive optics technology. Otherwise, adaptive optics are neither necessary nor helpful. For example, adaptive optics provide no improvement for the relatively small apertures used by most
amateur astronomers, because the aperture is about the size of Fried’s parameter, meaning that only tilt occurs in such systems. Tilt can be removed with a steering mirror. We also note that the typical astronomical case involves correcting for the turbulence in a nearly vertical path through the atmosphere. The descriptions above for Fried’s parameter apply only for constant atmospheric conditions. The complexity of computing r0 for the nearly vertical case can be avoided by assuming that r0 is about 15 cm. This also shows the wavelength dependence of the performance of an adaptive optics system. Some algebra reveals that the wavelength dependence of the number of actuators goes as λ^(–12/5), so longer wavelengths require fewer adaptive elements, as expected.

The number of actuators depends on the properties of the atmosphere, the length of the path the light travels, the wavelength of the light, and the application of the adaptive optics. The latter point derives from whether the light is a plane wave, such as pertains to starlight, or spherical waves, such as characterize light beams in the atmosphere. To properly compensate for atmospheric turbulence, the number of actuators depends on the above form of the Fried parameter and the size of the optic. The optical surface must be divided into more movable pieces than the maximum number of turbulence cells that can fit in the same area. If fewer actuators are used, then the atmosphere will cause a wavefront error that cannot be compensated. Tyson2 shows that a more accurate representation of the number of actuators is

≈ [0.05 k² L Cn² D^(5/3)/ln(1/S)]^(6/5)

where S = the desired Strehl ratio

The Strehl ratio is a commonly used performance measure for telescope optics and essentially defines how closely an optical system comes to performing in a diffraction-limited way. A little algebra shows that the two results are equal if one desires a Strehl ratio of 0.88. Diffraction-limited imaging is usually assumed to require a Strehl ratio of 0.8. Another way to look at this issue is to investigate the fitting error for a continuous facesheet. The following equation shows the variance in the fitting error, in radians squared, as a function of Fried’s parameter (r0) and d, the actuator spacing:3

wavefront variance = 0.28 (d/r0)^(5/3) (rad²)

Thus, we see that, using the simple form of the rule, a 1-m aperture operating at a location that typically experiences a Fried parameter of 5 cm will need 400 actuators. This is typical of nearly vertical viewing associated with astronomical applications. The reader should keep in mind that there are at least two ways to implement the corrections in wavefront. The first approach is to actually change the shape of the primary mirror, which is rarely done at high bandwidth. The more common approach is to correct a smaller optic located at a pupil. Because of the magnification of the telescope, the pupil will necessarily be smaller than the primary mirror, meaning that the number of actuators computed above must fit into this smaller area.
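The actuator-count estimates above are easy to script. The following Python sketch implements the simple (D/r0)² rule and Tyson’s Strehl-dependent form; the path length, Cn², wavelength, and Strehl ratio in the example are assumed values chosen only to exercise the equations.

```python
import math

def fried_parameter(wavelength_m, cn2, path_m, spherical=True):
    """r0 from the simple constant-Cn2 forms quoted above (spherical vs. plane wave)."""
    k = 2.0 * math.pi / wavelength_m
    coeff = 3.024 if spherical else 1.68
    return coeff * (k**2 * cn2 * path_m) ** (-3.0 / 5.0)

def actuator_count(aperture_m, r0_m):
    """Simple rule: one actuator per r0-sized cell across the aperture."""
    return (aperture_m / r0_m) ** 2

def actuator_count_tyson(aperture_m, wavelength_m, cn2, path_m, strehl):
    """Tyson's form, which folds in the desired Strehl ratio S."""
    k = 2.0 * math.pi / wavelength_m
    return (0.05 * k**2 * path_m * cn2 * aperture_m**(5.0/3.0)
            / math.log(1.0 / strehl)) ** (6.0 / 5.0)

# Example: 1-m aperture with a 5-cm Fried parameter -> (1/0.05)^2 = 400 actuators
print(actuator_count(1.0, 0.05))
# Tyson form for an assumed 10-km path at 0.5 um with Cn2 = 1e-14 and S = 0.8
print(actuator_count_tyson(1.0, 0.5e-6, 1e-14, 10e3, 0.8))
```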
References
1. H. Weichel, Laser System Design, SPIE Course Notes, SPIE Press, Bellingham, WA, p. 144, 1988.
2. R. Tyson, Principles of Adaptive Optics, Academic Press, Orlando, FL, p. 259, 1991.
3. R. Dekany et al., “1600 Actuator Tweeter Mirror Upgrade for the Palomar Adaptive Optics System (PALAO),” Proc. SPIE 4007, Astronomical Telescopes and Instrumentation 2000, March 29–31, 2000.
NUMBER OF INFRARED SOURCES PER SQUARE DEGREE
The number of infrared sources (Ns) brighter than the irradiance at wavelength λ per square degree is

log Ns[s(b)] ≈ log[A(b,l)] + B(b,l) log[E12{λ, s(b)}]

where
E12{λ, s(b)} = equivalent spectral irradiance at 12 µm producing Ns sources per square degree, in janskys
s(b) = spectral index, defined as the ratio of the 12-µm spectral irradiance to the 25-µm spectral irradiance that produces the same source count N; as a function of galactic latitude, the spectral index is

s(b) = –0.22 – 1.38[1.0 – exp(–b/15)]

b = galactic latitude in degrees, 0° ≤ b ≤ 90°
l = galactic longitude in degrees, 0° ≤ l ≤ 180°

log[A(b,l)] = 0.000488 l – 0.78 + [0.000061 l² – 0.02082 l + 3.0214]/[1 + (b/12)^1.4]

B(b,l) = (–0.00978 l + 0.88)[1.0 – exp(–b/(8.0 – 0.05 l))] + (0.00978 l – 1.8) for 0° ≤ l ≤ 90°

For l > 90°, B = –0.92.
Discussion
The IR sky is rich with astronomical sources. This complicated rule provides an excellent match to the distribution of sources found in the archive developed by the IRAS spacecraft. Note that it is the function of the spectral index portion of the equation to extend the model to other wavelengths. To do so, the spectral energy distribution of the mean ensemble of sources must be known. This rule works well from wavelengths of about 2 to 40 µm. The largest uncertainty exists in the approximate longitude range of 0 to 90° and 270 to 360° for galactic latitudes within ±3° of the galactic equator.

The jansky unit deserves some attention. This term has its genesis in radio astronomy but is finding wide use in infrared astronomy. A jansky is defined as 10⁻²⁶ watts per square meter of receiving area per hertz of frequency band (W/m²/Hz) and is named for the pioneer radio astronomer Karl Jansky. The following discussion shows how the conversion from typical radiant intensity to janskys is performed. We start by noting that there is an equivalence between the energy E expressed in either frequency or wavelength as follows:

Eλ dλ = Eν dν, so that Eλ = Eν (dν/dλ)

We also note that ν = c/λ, so that dν/dλ = –c/λ². This leads to
Eλ = Eν (c/λ²) × 10⁻⁶

where the numerical factor converts from W/m²/Hz on the right side of the equation to W/m²/µm on the left side. Both c and λ are expressed in meters. From this equation, we find that a jansky at 20 µm is equal to about 7.5 × 10⁻¹⁵ W/m²/µm. At 0.56 µm, a jansky is equal to 9.6 × 10⁻¹² W/m²/µm. Visible-wavelength stars can also be a source of confusion and a source of wavefront control signals. Figures 2.1 and 2.2 illustrate the variation in the density of stars of a particular magnitude as a function of galactic latitude. The data can be matched to about a factor of 2 to 5 with the following expression:
N(mv, latitude) = 6.55 × 10⁻⁴ e^(–latitude/30) e^(mv)

where N = number of stars per square degree brighter than magnitude mv, and the latitude is expressed in degrees.
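The latitude-dependent star-density expression can be evaluated directly, as in the short Python sketch below; remember that the fit is only good to roughly a factor of 2 to 5, and the example magnitudes are arbitrary.

```python
import math

def stars_per_sq_degree(mv, galactic_latitude_deg):
    """Approximate density of stars brighter than visual magnitude mv (factor-of-a-few accuracy)."""
    return 6.55e-4 * math.exp(-galactic_latitude_deg / 30.0) * math.exp(mv)

# Star density at magnitude 12, near the galactic plane versus near the pole
print(stars_per_sq_degree(12.0, 0.0))    # ~107 stars per square degree
print(stars_per_sq_degree(12.0, 90.0))   # ~5 stars per square degree
```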
References
1. D. D. Kryskowski and G. Suits, “Natural Sources,” in Vol. 1, Sources of Radiation, G. Zissis, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 179–180, 1993.
2. Data derived from W. Wolfe and G. Zissis, The Infrared Handbook, ERIM, Ann Arbor, MI, pp. 3–22, 1978.
FIGURE 2.1 The number of stars per square degree is approximately exponential up to magnitudes of about 18. The legend shows galactic latitude.
FIGURE 2.2 The presence of the high density of stars near the galactic plane is easy to see in this graph. The legend is visual magnitude from the Earth.
NUMBER OF STARS AS A FUNCTION OF WAVELENGTH
At visible wavelengths and beyond, for a given sensitivity, the longer the wavelength, the fewer stars you can sense. The falloff in the number of stars approximates the following:

#Sλ2 ≈ #Sλ1 × 10^(–0.4R)

where
#Sλ2 = number of stars at wavelength λ2 (λ2 larger than λ1) at a given irradiance
R = ratio of one wavelength to the other (λ2/λ1)
#Sλ1 = number of stars at wavelength λ1
Discussion
This rule is based on curve-fitting empirical data. Curves supporting this can be found in Seyrafi’s Electro-Optical Systems Analysis and Hudson’s Infrared Systems Engineering. This is useful for separate narrow bands, from about 0.7 to 15 µm, and irradiance levels on the order of 10⁻¹³ W/cm²/µm. Generally, this provides accuracy to within a factor of 2. The authors curve-fitted data to derive the above relationship. Most stars are like our Sun and radiate most of their energy in what we call the visible part of the spectrum. As wavelength increases, there are fewer stars, because the Planck function is dropping for the stars that peak in the visible, and fewer stars have peak radiant output at longer wavelengths.
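A one-line implementation of the falloff rule is shown below in Python; the example star count and wavelengths are illustrative only.

```python
def star_count_at_longer_wavelength(n_stars_short, wl_short_um, wl_long_um):
    """#S(lambda2) ~ #S(lambda1) * 10**(-0.4 * lambda2/lambda1), per the rule above."""
    return n_stars_short * 10.0 ** (-0.4 * (wl_long_um / wl_short_um))

# If ~1000 stars are detectable at 2 um at some irradiance, estimate the count at 10 um
print(star_count_at_longer_wavelength(1000.0, 2.0, 10.0))   # ~10 stars
```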
NUMBER OF STARS ABOVE A GIVEN IRRADIANCE
1. The total number of stars at or above an irradiance of 10⁻¹³ W/cm²/µm is approximately 1400 × 10^(–0.626λ), where λ is the wavelength in micrometers.
2. The number of stars at or above 10⁻¹⁴ W/cm²/µm is approximately 4300 × 10^(–0.415λ), where λ is the wavelength in micrometers.
3. The number of stars at or above an irradiance of 10⁻¹⁵ W/cm²/µm is approximately 21,000 × 10^(–0.266λ), where λ is the wavelength in micrometers.
Discussion
As one observes at longer and longer wavelengths, there are fewer stars to observe at a given brightness. This phenomenon stems from stellar evolution and populations as well as Planck’s theory. Most stars fall into what astronomers call “the main sequence” and have their peak output between 0.4 and 0.8 µm. From Planck’s equations, we also note that the longer the wavelength, the less output (brightness) a star has for typical stellar temperatures, because the infrared emission is on the tail of the Planck function. These simple equations seem to track the real data within a factor of two (or three) within the wavelength bounds. The curve for 10⁻¹⁵ W/cm²/µm tends to underpredict below 4 µm. The 10⁻¹³ W/cm²/µm curve was based on data from 1 to 4 µm, the 10⁻¹⁴ equation on wavelengths of 2 to 8 µm, and the 10⁻¹⁵ on wavelengths from 2 to 10 µm. The authors curve-fitted some reasonably crude data to derive the above relationships. The above rule highlights two phenomena. First, the longer the wavelength (beyond visual) you observe, the fewer stars you will detect at a given magnitude or irradiance. Second, increased instrument sensitivity provides an increased number of stars detected.
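The three curve fits can be wrapped into a single helper, as in the Python sketch below. The threshold tests simply select which published fit to apply; keep in mind that each fit is valid only over the wavelength ranges noted above, and the example wavelength is arbitrary.

```python
def stars_above_irradiance(wavelength_um, irradiance_w_cm2_um):
    """Approximate whole-sky star counts above a threshold irradiance (factor-of-2 to -3 rule)."""
    if irradiance_w_cm2_um >= 1e-13:
        return 1400.0 * 10.0 ** (-0.626 * wavelength_um)
    if irradiance_w_cm2_um >= 1e-14:
        return 4300.0 * 10.0 ** (-0.415 * wavelength_um)
    return 21000.0 * 10.0 ** (-0.266 * wavelength_um)

# Counts at 3 um for the three threshold levels quoted in the rule
for level in (1e-13, 1e-14, 1e-15):
    print(level, stars_above_irradiance(3.0, level))
```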
PHOTON RATE AT A FOCAL PLANE
The photon rate at a focal plane from a star of magnitude m is1

S = N T (π/4)(1 – ε²) D² ∆λ 10^(–0.4m)

where
S = photon flux in photons/second
N = irradiance of a magnitude-zero star (≈10⁷ photons/cm²/sec/µm for a star of magnitude 0 in the band centered on 0.55 µm; see other rules in this chapter for details)
D = diameter of the telescope (cm)
T = unitless transmittance of the atmosphere and optics
∆λ = bandpass of interest (µm)
m = visual magnitude of the star
ε = obscuration ratio (This number represents the ratio of the size of the secondary mirror to the size of the primary mirror. The additional obscuration of the struts that hold the secondary mirror is included as well. The latter effect will not occur if the telescope is an off-axis design.)
Discussion The above rule allows the approximate calculation of the number of photons per second at the instrument focal plane. Additionally, Ref. 2 gives us the following handy approximations:
■ A difference of 1 magnitude results in a difference of about 2.5 in spectral irradiance.
■ A difference of 5 magnitudes is a factor of 100 difference in spectral irradiance.
■ A small magnitude difference is equivalent to an equal percentage difference in brightness (10.01 magnitudes is ≈1 percent dimmer than 10.00 magnitudes).

This rule was developed for A class stars (the hottest subclass of white stars, with surface temperature about 9000 K and prominent hydrogen lines). It is valid for narrow visible bandpasses. (Use with caution elsewhere.) Most on-axis reflecting telescopes have circular obscurations in the center of the aperture. Therefore, Schroeder suggests that
π(1 – ε²)/4 = 0.7

for a Cassegrain telescope, so the equation simplifies to S = 0.7 N T D² ∆λ 10^(–0.4m). Finally, we note that a star of magnitude m and temperature T will produce the following number of watts/m²/µm:3
3.12 × 10 1 --------------------------- ------------------------------------------------------------------m 0.0144 5 2.5 0.2444 λ ⎛ exp ---------------- – 1⎞ ⎝ ⎠ λT
References 1. D. Schroeder, Astronomical Optics, Academic Press, Orlando, FL, p. 319, 1987. 2. Private communications with Dr. Walt Kailey, 1995. 3. D. Dayton, M. Duncan, and J. Gonglewski, “Performance Simulations of a Daylight LowOrder Adaptive Optics System with Speckle Postprocessing for Observation of Low-Earth Orbit Satellites,” Optical Engineering, 36(7), pp. 1910–1917, July 1997.
REDUCTION OF MAGNITUDE BY AIRMASS The atmosphere absorbs about 0.2 magnitudes per airmass.
Discussion Ground-based astronomers frequently represent atmospheric path length as airmass normalized to the zenith. Looking straight up, one has an “airmass of 1.” As the telescope’s line of sight is reduced in elevation, the amount of air through which it must view is increased and reaches a maximum at the horizon. The total airmass for viewing an object at the horizon from sea level is about 10 times the vertical view. This is because the densest part of the atmosphere is near the ground. In this rule, we will show the general calculation of the path length as a function of the elevation angle of the telescope. To start the calculation, let us first make the “flat-Earth” assumption. That is, let us take the case where the zenith angle is small (less than about 45°). This allows a simple computation of the total concentration of airmass between the ground telescope and the vacuum of space. In performing this calculation, we assume that the density of the atmosphere decreases as an exponential of the altitude, ρ(h) = ρoe–Lh where L is the reciprocal of the scale height of the atmosphere, h is the altitude, and ρ is the density of air molecules. A typical value for the scale height is 7 km, meaning that, at an altitude of 7 km, the pressure and density are 1/e (37 percent) of their surface values. This ideal-model atmosphere is
42
Chapter Two
easy to derive from the common model of an exponential pressure profile from the ground to space. The total column of air along a path from the ground to space is found by the following integration, where ρ is the density of air as a function of position along the integration path: ∞
∫ ρ(S)ds 0
Written in terms of the path over which we are viewing (s), the integral is ∞
∫ ρo e
–Ls cos Z
ρo ds = ---------------L cos Z
0
where Z = the zenith angle For this simple model, the elevation angle at which the airmass is 10× is 5.7°. Now consider the more complex case of a curved Earth of radius Re. Here, we find that h and s are related by 2
2
h( s ) = – Re + Re + s + 2sRe cos Z where Z = the zenith angle The path integral is now ∞
∫ ρo e
–Lh ( s )
ds
0
Although this looks impossibly complex, a little numerical analysis shows that (except for angles greater than about 69°) h and s are still related by the cosine of the zenith angle. This means that, for a wide range of angles, the flat-Earth result works just fine. A detailed analysis shows that, when the elevation angle is about 2.8°, the total molecular path is about 10 times the case for the shortest (vertical) path through the atmosphere. In any case, the horizontal view assures a very long atmospheric path. We are all familiar with the effects of such a path, as we have seen the intense red of the setting Sun. In this view, the path is so long that the blue component of the Sun’s light is scattered away, leaving only the long-wavelength component. Although the term airmass to represent the pressure-weighted atmospheric path length was developed by astronomers and previously almost exclusively used by observatories, it is finding its way into the lexicon of the general electro-optical and security sensor practitioner.
A SIMPLE MODEL OF STELLAR POPULATIONS The number of stars above a given visual magnitude mv can be estimated from #S = 11.84 × 10 where #S = approximate number of stars
0.4204mV
Astronomy
43
Discussion This simple rule is accurate to within a factor of 3 between magnitudes 0 and 20. The simple equation provides a good match—no worse than a factor of 5 for most magnitudes. It tends to underpredict the number of stars between magnitudes 13 and 15 and overpredict the number of stars between magnitudes 16 and 20. The issue of magnitudes is widely discussed. A reminder, however, is appropriate about the difference between visual and absolute magnitudes. The definition of the relationship between the two is quite simple. The absolute magnitude is the magnitude that the star would exhibit if at a distance of 10 parsecs (about 33 light years). We already know that two stars can be compared in apparent magnitude by the rule, 2
d m1 – m2 = 2.5 log ----12d2 where d1 and d2 are the distances of stars 1 and 2. Therefore, using 10 parsecs for d2, the formula becomes 2
d m1 – m2 = 2.5 log ----12- = m – M = 5log d 1 – 5 d2 where M indicates a measure of absolute magnitude. Of course, both d1 and d2 are measured in parsecs. The one value to remember is that, in the V band, a magnitude 0 star has a photon flux of very close to 1 × 107 photons/cm2/sec/micron. This can be easily derived from the number in Fig. 2.3 for the V band using the fact that the energy in a photon is hc/λ. The properties of other bands can be found in the appendix.
FIGURE 2.3 The number of stars brighter than a particular visual magnitude is an exponential function.
Chapter 3
Atmospherics
It is hard to imagine a subject more complex, and yet more useful, than the study of the propagation of light in the atmosphere. Because of its importance in a wide variety of human enterprises, considerable attention has been paid to this topic for several centuries. Initially, the effort was dedicated to learning how the apparent size and shape of distant objects depend on the properties of the atmosphere. Maturation of the field of spectroscopy led to a formal understanding of the absorption spectra of significant atmospheric species and their variation with altitude. Computer models that include virtually all that is known about the absorption and scattering properties of atmospheric constituents have been assembled and can provide very complete descriptions of transmission as a function of wavelength with a spectral resolution of about 1 cm–1. This is equivalent to a wavelength resolution of 0.1 nm at a wavelength of 1 µm. In addition to gradually refining our understanding of atmospheric absorption by considering the combined effect of the constituents, we also have developed a rather complete and elaborate theory of scattering in the atmosphere. The modern model of the scattering of the atmosphere owes its roots to the efforts of Mie and Rayleigh. Their results have been extended greatly by the use of computer modeling, particularly in the field of multiple scattering and Monte Carlo methods. For suspended particulates of known optical properties, reliable estimates of scattering properties for both plane and spherical waves can be obtained for conditions in which the optical thickness is not too large. Gustav Mie (1868–1957) was particularly influential, as he was the first to use Maxwell’s equations to compute the scattering properties of small spheres suspended in a medium of another index of refraction. A number of references1 suggest that Mie was not the first to solve the problem but was the first to publish the results. His work, along with work by Debye, is now generally called “Mie theory.” Rayleigh had already shown that scattering should vary as the fourth power of the wavelength using dimensional analysis arguments. Mie theory is often compared with the earlier approach of Airy. The interested reader will find technical and historical details in Ref. 2. Two technologies have been in the background in all of these theoretical developments: spectroscopy and electro-optical technology. Spectroscopes, which are essential instruments in the measurement of the spectral transmission of the atmosphere, are EO systems relying on improvements in detectors, optics, control mechanisms, and many of the other topics addressed in this book. As these technologies have matured, continuous improve-
ments have been seen in the ability to measure properties of the atmosphere and to turn those results into useful applications. The applications range from measuring the expected optical properties being considered for astronomical telescope location to determining the amount of sunlight that enters the atmosphere and is subsequently scattered back into space. Laser technology has also been a key factor in new measurements of atmospheric behavior, in both scattering and absorption phenomena. Tunable lasers with very narrow line widths have been employed to verify atmospheric models and have provided the ability to characterize not only the “clean” atmosphere but also the properties of atmospheres containing aerosols and industrial pollution. Laser radar is regularly used to characterize the vertical distribution of scattering material, cloud density, and other features of the environment, weather, and climate. New advances in EO technologies have also allowed new insight into radiation transfer into, out of, and within the atmosphere. Satellite-based remote sensors have been measuring the radiation budget of the Earth in an attempt to define its impact on climatic trends. There are many other examples of space-based EO sensors that concentrate on measuring properties of the atmosphere, including the concentration of trace constituents in the stratosphere, ozone concentrations over the poles, and so on. Recently, at the urging of the military, measurements and improved theory have led to the development of methods for estimating and removing clear-air turbulence effects with important improvements for astronomers, imaging, and optical communications. New advancements in measuring the wavefront errors resulting from turbulence are included in adaptive optics. This technology is able to remove, up to a point, adverse atmospheric effects, which leads to telescope images that parallel those that would occur in transmission through a vacuum. We see, then, that atmospherics and astronomy have an intertwined history. The most recent advances in astronomical technology have come in two areas: space telescopes and overcoming the adverse impacts of the atmosphere. As an example of this connection, consider that even the casual observer of the night sky has noticed that planets do not twinkle but may not know why. This is because the angular size of planets is sometimes larger than the isoplanatic angle of the atmosphere. Similarly, for the same reason, a passenger in a high-flying jet, viewing a city at night, will see no twinkling. We can expect continual improvement in our understanding of the atmosphere and the way that it interacts with light propagating within it. All of these improvements in theory, supported by advancements in instrumentation quality, will result in even more capable EO systems and allow them to reduce the perturbing effects of their operating environments. The interested reader can find technical articles in Applied Optics and similar technical journals. At the same time, magazines such as Sky and Telescope occasionally include information on the way astronomers are using new technologies to cope with the effects of the atmosphere. A few new books have come out that deal specifically with imaging through the atmosphere. The International Society for Optical Engineering (SPIE) is a good source for these texts.
References
1. Scienceworld.wolfram.com/physics/MieScattering.html, 2003.
2. R. L. Lee, Jr., “Mie Theory, Airy Theory, and the Natural Rainbow,” Applied Optics, 37(9), p. 1506, March 20, 1998. This paper is also available at http://www.usna.edu/Users/oceano/raylee/papers/RLee_MieAiry_paper.pdf, 2003.
ATMOSPHERIC ATTENUATION OR BEER’S LAW
The attenuation of light traversing an attenuating medium can often be estimated by the simple form

Transmission = e^(–αz)

where
α = attenuation coefficient in units of distance⁻¹
z = path length in the same units as the attenuation coefficient
Discussion
This common form is called Beer’s law and is useful in describing the attenuation of light in atmospheric and water environments and in optical materials. Because both absorption and single scattering will remove energy from the beam, α is usually expressed as

α = a + γ

where
a = absorption per unit length
γ = scattering per the same unit length

The rule is derived from the fact that the fractional amount of radiation removed from the beam is independent of the intensity but is dependent on path length. The idea is that scattering and absorption remove light at a rate proportional to the length of the path and the amount of scattering or absorbing material that is present. This leads to a differential equation of the form dI/I = constant × dz, whose solution is of exponential form. The numerical values in the equations are derived from field measurements. For example, downwelling light in the atmosphere or ocean from the Sun is described by a different attenuation coefficient that must take into account the fact that the scattered light is not removed from the system but can still contribute to the overall radiation.

Deviation from exact adherence to Beer’s law can result if the medium is high in multiple scattering and if the sensor does not have a small field of view. In those conditions, multiply scattered light can be detected, dramatically altering the equation in the rule. This is widely observed in fog conditions when images cannot be formed but the total light level is not necessarily low. In a beam case, scattering removes light from the beam in an explicit way. Consider this example. When observing a person at a distance, light from the target is emitted into a hemisphere, but you see only the very narrow-angle beam that happens to encounter your eye. If you are using a telescope or other instrument that restricts your field of view (FOV) to a small angle, Beer’s law applies. If you use a wide-FOV lens, multiply scattered light may be detected, affecting the intensity and clarity of the target. In this case, Beer’s law does not apply. Multiple scattering in turbid media results in a violation of the simple equation at the start of this rule. Beer’s law works only for conditions that do not allow multiple scattering, either because there is little scattering present or because the instruments involved do not allow multiply scattered light to be detected. In most applications that involve imaging, Beer’s law is the one to use. In situations where only the intensity of the light field is needed, an adequate estimation is to include only the absorption term.
The presence of an exponential attenuation term in transmission of the atmosphere is no surprise, as this mathematical form appears for a wide variety of media (including the bulk absorption of optical materials such as glass).
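A minimal Python sketch of Beer’s law is given below; the absorption and scattering coefficients in the example are arbitrary, and the function applies only when multiply scattered light is not collected.

```python
import math

def beer_transmission(absorption_per_km, scattering_per_km, path_km):
    """Beer's law: T = exp(-(a + gamma) * z), for narrow-FOV (single-scattering) conditions."""
    return math.exp(-(absorption_per_km + scattering_per_km) * path_km)

# Example: 0.1/km absorption and 0.2/km scattering over a 5-km path
print(beer_transmission(0.1, 0.2, 5.0))   # ~0.22
```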
References
1. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, pp. 161–165, 1969.
2. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 87, 1974, http://www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003.
IMPACT OF WEATHER ON VISIBILITY
In Beer’s law, the scattering coefficient, γ, associated with rain can be estimated in the following way:

γ = 3.92/V  (1)

where V = visual range (usually defined as the range at which the contrast between target and background drops to 2 percent)
Discussion
A similar rule is Koschmieder’s,2 in which

visibility = 3/α  (2)

for black targets against a bright sky. In this case, α is the total attenuation, including both scattering and absorption. See the rule on Beer’s law (p. 47) for more details on this parameter. Allard’s law2 applies to visibility of lights at night and is

ET = I e^(–αV)/V²  (3)

where
ET = illumination detection threshold of the human eye
I = power of the light source in watts
V = visibility in kilometers

Choose units that result in a meaningful value of ET. For example, we expect α to be in km⁻¹, so V must be in kilometers. Other units can be used as long as the product of the distance and attenuation coefficient is unitless. For ET to be in watts/m², the units of V in the denominator must be meters. It is obvious that the equation above derives from a 1/R² type beam-spread model, coupled with an attenuation term. Reference 2 also describes a measure for pilots viewing down a runway as

log(ET) = 0.64 log(B) – 5.7  (4)

where B = background luminance level

Some authors suggest that Eqs. (3) and (4) apply under different conditions; pilots should use the larger of the two during the day but use only Eq. (4) at night.
49
Reference 3 also shows that the effect of rain on visual range and scattering coefficient can be estimated from γ = 1.25 × 10
–6 R ----3
r
where R = rainfall rate in centimeters per second r = radius of the drop in centimeters Alternatively, Ref. 1 gives the scattering coefficient of rainfall as γ = 0.248 f
0.67
where f = rainfall rate in millimeters per hour Reference 4 provides some insight into the effect of aerosols into scattering in the atmosphere. The authors of Ref. 4 point out that, for a uniform distribution of particles of concentration D and radius a, the scattering coefficient is 2
β sc = Dπa Q sc where Qsc = Mie scattering coefficient, which is a strong function of the ratio 2πa α = ---------λ As a increases, either by considering scattering at shorter wavelengths or by increasing the aerosol size, Qsc becomes 2. The result is that, for a large particle size or short wavelength, the particles have a scattering cross section twice their geometric size. The combined atmospheric extinction for the MWIR tends to be between 0.2 and 0.3, as illustrated in the Fig. 3.1. This is based on the U.S. Navy’s R384 database,5 which includes 384 detailed observations of atmospheric conditions in a multitude of maritime locations. All of the path lengths were horizontal. The graph included here stops at the 95th percentile, as the final 5 percent had very high extinction coefficients (over 1; the highest recorded in this database was 7.66 per km).
References 1. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 87, 1974, http:// www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003. 2. U.S. Department of Transportation, Federal Aviation Administration, United States Experience Using Forward Scatterometers for Runway Visual Range, March 1997, DOT-VNTSCFAA-97-1. 3. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, pp. 161–165, 1969. 4. J. Oakley and B. Satherley, “Improving Image Quality in Poor Visibility Conditions Using a Physical Model for Contrast Degradation,” IEEE Transactions on Image Processing 7(2), p. 167, February 1998. 5. L. Biberman, “Weather, Season, Geography, and Imaging System Performance,” Chap. 29 in Electro-Optical Imaging System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 29-33 to 29-37, 2001.
50
Chapter Three
FIGURE 3.1 MWIR extinction. The figure comes from R384. It shows the cumulative extinction coefficient for MWIR wavelengths.
ATMOSPHERIC TRANSMISSION AS A FUNCTION OF VISIBILITY Atmospheric transmission can be estimated via the range, visibility, and wavelength by – 3.91 λ – q τ = exp ⎛ -------------- ⎛ ----------⎞ R⎞ ⎝ V ⎝ 0.55⎠ ⎠ where V = visibility in the visual band in kilometers λ = wavelength in micrometers q = a size distribution for scattering particles; typical values are 1.6 for high visibility, 1.3 for average, and 0.585 V1/3 for low visibility R = range in kilometers Transmission is the ratio of the intensity of the light received to the light transmitted.
Discussion All sorts of human enterprise involves looking through the atmosphere. Simple rules for establishing how far one can see have always been of interest. This little rule mixes a little physics, in the form of the size distribution effects, into the empirical transmission. The longer the wavelength, the less the scatter, so wavelength is in the numerator. As in another rule, the absorption can be estimated from the visibility by 4/V. In another rule, we also note that the total attenuation is approximated by 3/V for black targets against a bright sky.
Atmospherics
51
As can be seen, the choice of q depends on the visibility for each particular situation. Of course, this requires that the visibility be known. Furthermore, the rule assumes that the visibility is constant over the viewing path. This never happens, of course. Nonetheless, this is a useful and easy-to-compute rule that can be used with considerable confidence. Visibility is easily obtained from the FAA or the National Weather Service and is readily available at most electro-optical test sites. Modern visibility measurements are usually made with scatterometers using near IR lasers and a 1-m path length. The measurement is then extrapolated for range, conditions, and the responsivity curve of the eye. An extensive data base has been acquired and algorithms refined over the past few decades so that this technique works well for the visible bandpass. However, because this data is not typically from transmissometers, scaling it is extremely questionable for other wavelengths. This rule provides a quick and easy approach for estimating the transmission of the atmosphere as a function of wavelength if some simple characteristics are known. Field work related to lasers, observability of distant objects, or the applicability of telescopes can make use of this rule if visibility is known. Of course, one can attempt to estimate the visibility if transmission can be measured. In general, aerosols have particle radii of 0.01 to 1 µm with a concentration of 10 to 1000 particles per cc, fog particles have a radius of 10 to 50 µm with a concentration of 10 to 100 per cc, clouds have particle radii of 1 to 10 µm with a concentration of 10 to 300 per cc, and rain has particle radii of 100 to 10,000 µm with a concentration of 0.01 to 10–5 per cc.1 This rules is an expansion of Beer’s law, and the interested reader should review that rule as well.
References 1. M. Thomas and D. Duncan, “Atmospheric Transmission,” in Vol. 2, Atmospheric Propagation of Radiation, F. Smith, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 12, 1993. 2. P. Kruse, L. McGlauchlin, and R. McQuistan, Elements of Infrared Technology, John Wiley & Sons, New York, pp. 189–192, 1962. 3. D. Wilmot et al., “Warning Systems,” in Vol. 7, Countermeasure Systems, D. Pollock, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 31, 1993.
BANDWIDTH REQUIREMENT FOR ADAPTIVE OPTICS
To correct phase fluctuations induced by the atmosphere, an adaptive optics servo system should have a bandwidth of

0.4 v_w / (λL)^(1/2)

where
v_w = wind velocity
λ = wavelength
L = path length inducing the phase fluctuations
Discussion
This handy relationship indicates that the shorter the wavelength and the higher the wind velocity, the faster the servo system must respond. The bandwidth is lowered as
the path length increases, because of the effect of averaging over the path. The bandwidth defined by this formula is often referred to as the Greenwood frequency. A more complete expression for the Greenwood frequency is

f_G = [ 0.102 k^2 sec θ ∫0^∞ C_n^2(z) V(z)^(5/3) dz ]^(3/5)

where
θ = angle from the line of sight to the zenith
V(z) = vertical profile of the wind
C_n^2 = atmospheric structure function
k = 2π/λ

With a little work, it can be shown that the Greenwood frequency goes as the –6/5 power of wavelength. An even simpler form of the rule is2

f_G ≈ 0.43 v_w / r_0

where r_0 = Fried parameter, defined elsewhere in this chapter.

Finally, it can be shown that there is a relationship between Greenwood frequency and Strehl ratio (S).3 A number of rules about Strehl ratio, an important metric for the performance of laser and optical systems, are found in Chap. 9, "Lasers." It is shown that

S = exp[ –0.95 (f_G/f_B)^(5/3) ]

where f_G, f_B = the Greenwood and system bandwidths, respectively
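For numerical work, the expressions above are easy to script. The Python sketch below assumes the reconstructed forms given above; the function names and the example inputs are illustrative, not from the references.

import numpy as np

def greenwood_frequency(z_m, cn2_profile, wind_profile, wavelength_m, zenith_angle_rad=0.0):
    """Greenwood frequency from Cn^2(z) and wind V(z) sampled at heights z_m (meters)."""
    k = 2.0 * np.pi / wavelength_m
    integrand = cn2_profile * wind_profile ** (5.0 / 3.0)
    integral = np.trapz(integrand, z_m)
    return (0.102 * k ** 2 * (1.0 / np.cos(zenith_angle_rad)) * integral) ** (3.0 / 5.0)

def greenwood_simple(wind_speed_mps, r0_m):
    """Simpler form of the rule: f_G ~ 0.43 v_w / r0."""
    return 0.43 * wind_speed_mps / r0_m

# Example: a 10 m/s wind and r0 = 0.15 m imply a servo bandwidth near 29 Hz
# print(greenwood_simple(10.0, 0.15))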
References
1. R. Tyson, Principles of Adaptive Optics, Academic Press, Orlando, FL, p. 36, 1991.
2. J. Mansell, Micromachined Deformable Mirrors for Laser Wavefront Control, Ph.D. dissertation, Stanford University, Chap. 2, p. 3, 2002. Available at www.intellite.com, 2003.
3. P. Berger et al., "AEOS Adaptive-Optics System and Visible Imager," Proc. 1999 AEOS Technical Conference, U.S. Air Force, Maui, HI.
Cn2 ESTIMATES
The index of refraction structure constant C_n^2 is a measure of the index of refraction variation induced by small-scale variations in the temperature of the atmosphere. The effect results from the fact that the index of refraction of air changes with temperature. There are several quick ways to estimate C_n^2. The easiest1 is that it varies with altitude in the following way:

C_n^2 = 1.5 × 10^-13 / h (h in meters) for h < 20 km

and

C_n^2 = 0 for altitudes above 20 km

Note that C_n^2(h) has the rather odd units of m^-2/3.
Discussion
No other parameter is so common as C_n^2 in defining the impact of the atmosphere on the propagation of light. C_n^2(h), in which we have explicitly shown that the function depends on altitude, is critical in determining a number of key propagation issues such as laser beam expansion, visibility, and adaptive optics system performance, as well as in defining the performance impact that the atmosphere has on ground astronomical telescopes, surveillance sensors, and high-resolution FLIRs. Each of the estimates shown below is just that: an estimate. However, for the modeling of systems or the sizing of the optical components to be used in a communications or illumination instrument, these approximations are quite adequate. This field of study is particularly rich, having benefited from work by both the scientific and military communities. Any attempt to provide really accurate approximations to the real behavior of the atmosphere is beyond the scope of this type of book but is covered frequently in the astronomical literature, particularly conferences and papers that deal with the design of ground-based telescopes.
Propagation estimates rely on knowledge of C_n^2. Many of those estimation methods appear in this chapter. Although a number of estimates of C_n^2 are widely used, most will provide adequate results for system engineers trying to determine the impact of turbulence on the intensity of propagating light as well as other features of beam propagation. The most widely used analytic expression for C_n^2(h) is the so-called Hufnagel-Valley (HV) 5/7 model. It is "so-called" because the profile of C_n^2 results in a Fried parameter (see the rule, "Fried Parameter") of 5 cm and an isoplanatic angle of 7 µrad for a wavelength of 0.5 µm. Beland2 expresses the Hufnagel-Valley (HV) 5/7 model as

C_n^2(h) = 8.2 × 10^-26 W^2 (h/1000)^10 e^(-h/1000) + 2.7 × 10^-16 e^(-h/1500) + 1.7 × 10^-14 e^(-h/100)

where
h = height in meters
W = wind correlating factor, which is selected as 21 for the HV 5/7 model

Note that the second reference has an error in the multiplier in the last term. That error has been corrected in what is presented above. In many cases, the C_n^2 value can be crudely approximated as simply 1 × 10^-14 during the night and 2 × 10^-14 during the day. The R384 database3 has a minimum value of 7.11 × 10^-19 and a maximum value of 1.7 × 10^-13 for ground-based, horizontal measurements. The R384 average is almost exactly 1 × 10^-14. C_n^2 is strictly a property of the turbulence induced in the atmosphere by tiny fluctuations in temperature. These temperature fluctuations induce very small variations in the index of refraction of the air. The temperature fluctuations are usually described in terms of the temperature structure parameter, C_T^2. While the index of refraction of air varies slightly with wavelength, the effect is minor and, as will be seen in other rules, has a minor impact on imaging performance.
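A short Python sketch of the HV 5/7 profile, using the coefficients as reconstructed above, is shown below; the function name and defaults are illustrative only.

import numpy as np

def hufnagel_valley_cn2(h_m, wind_factor=21.0, surface_term=1.7e-14):
    """Hufnagel-Valley Cn^2(h); h_m in meters, result in m^(-2/3).
    wind_factor = 21 and surface_term = 1.7e-14 reproduce the HV 5/7 model."""
    h = np.asarray(h_m, dtype=float)
    return (8.2e-26 * wind_factor ** 2 * (h / 1000.0) ** 10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)
            + surface_term * np.exp(-h / 100.0))

# Example: profile values at 10 m, 1 km, and 10 km altitude
# print(hufnagel_valley_cn2([10.0, 1000.0, 10000.0]))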
References 1. J. Accetta, “Infrared Search and Track Systems,” in Vol. 5, Passive Electro-Optical Systems, S. Campana, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 287, 1993. 2. R. Beland, “Propagation through Atmospheric Optical Turbulence,” in Vol. 2, Atmospheric Propagation of Radiation, F. Smith, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 221, 1993. 3. L. Biberman, “Weather, Season, Geography, and Imaging System Performance,” Ch. 29, in Electro-Optical Imaging System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 29-33 to 29-37, 2001. 4. M. Friedman, “A Collection of C 2n Models,” Night Vision and Electronic Sensors Directorate, Fort Belvoir, VA, May 1, 2002. Available in NVTHERM at www.ontar.com, 2003. 5. M. Friedman, “A Turbulence MTF Model,” Night Vision and Electronic Sensors Directorate, Fort Belvoir, VA, May 24, 2002. Available in NVTHERM at www.ontar.com, 2003.
Cn2 AS A FUNCTION OF WEATHER
For altitudes above 15 m and up to an altitude of a few hundred meters, the following approximation can be used to estimate C_n^2 for various wind speeds and humidities:

C_n^2 = 3.8 × 10^-14 W + 2 × 10^-15 T – 2.8 × 10^-15 RH + 2.9 × 10^-17 RH^2 – 1.1 × 10^-19 RH^3 – 2.5 × 10^-15 WS + 1.2 × 10^-15 WS^2 – 8.5 × 10^-17 WS^3 – 5.3 × 10^-13

where
W = temporal hour weight (described below)
T = air temperature in kelvins
RH = relative humidity (percent)
WS = wind speed (m/s)
C_n^2 = defined elsewhere in this chapter
Discussion
A variety of C_n^2 models abound in the literature, but few attempt to capture its relationship to environmental conditions. Once the value of C_n^2 is found, it can be scaled with altitude using an appropriate model, as defined elsewhere in this chapter. This rule provides an algorithm for the weather-related effects in the important altitudes near the ground. The authors of Ref. 1 point out that an even more complete model is created when one includes the effects of aerosols. They do this by estimating the total cross-sectional area (TCSA), as below, and modifying the estimate of C_n^2. Note that the units of TCSA are cm^2/m^3.

TCSA = 9.69 × 10^-4 RH – 2.75 × 10^-5 RH^2 + 4.86 × 10^-7 RH^3 – 4.48 × 10^-9 RH^4 + 1.66 × 10^-11 RH^5 – 6.26 × 10^-3 ln RH – 1.34 × 10^-5 SF^4 + 7.30 × 10^-3

and

C_n^2 = 5.9 × 10^-15 W + 1.6 × 10^-15 T – 3.7 × 10^-15 RH + 6.7 × 10^-17 RH^2 – 3.9 × 10^-19 RH^3 – 3.7 × 10^-15 WS + 1.3 × 10^-15 WS^2 – 8.2 × 10^-17 WS^3 + 2.8 × 10^-14 SF – 1.8 × 10^-14 TCSA + 1.4 × 10^-14 TCSA^2 – 3.9 × 10^-13
where SF = solar flux in units of kW/m^2

We also note the introduction of the concept of the temporal-hour in the equation and a weighting function (W) associated with it. The temporal-hour is defined as one-twelfth of the time between sunrise and sunset.2 In winter, a temporal-hour is less than 60 min, for example. Table 3.1 shows the values of W that should be used to change the estimate of C_n^2 during the day. Figure 3.2 is the cumulative probability of C_n^2 from the U.S. Navy's R384 database, which comprises detailed atmospheric measurements made over horizontal paths at multiple maritime locations across the globe.
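The macroscale-weather estimate lends itself to a small script. The Python sketch below uses the coefficients of the simpler (aerosol-free) model as reconstructed above; the function name and the example inputs are illustrative only, and W must be taken from Table 3.1.

def cn2_from_weather(w, t_kelvin, rh_percent, ws_mps):
    """Near-ground Cn^2 (m^-2/3) from temporal-hour weight, temperature, RH, and wind."""
    return (3.8e-14 * w + 2.0e-15 * t_kelvin
            - 2.8e-15 * rh_percent + 2.9e-17 * rh_percent ** 2 - 1.1e-19 * rh_percent ** 3
            - 2.5e-15 * ws_mps + 1.2e-15 * ws_mps ** 2 - 8.5e-17 * ws_mps ** 3
            - 5.3e-13)

# Example: early afternoon (W = 1), 300 K air, 40 percent RH, 3 m/s wind
# print(cn2_from_weather(1.0, 300.0, 40.0, 3.0))   # on the order of 4e-14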
References 1. Y. Yitzhaky, I. Dror, and N. Kopeika, “Restoration of Atmospherically Blurred Images According to Weather-Predicted Atmospheric Modulation Transfer Functions,” Optical Engineering, 36(11), November 1997, pp. 3064–3072. 2. D. Sadot and N. S. Kopeika, “Forecasting Optical Turbulence Strength on the Basis of Macroscale Meteorology and Aerosols: Models and Validation,” Optical Engineering, 31(2), February 1992, pp. 200–212. 3. M. Friedman, “A Collection of C2n Models,” Night Vision and Electronic Sensors Directorate, Fort Belvoir, Virginia, May 1, 2002. Available in NVTHERM at www.ontar.com, 2003. 4. M. Friedman, “A Turbulence MTF Model,” Night Vision and Electronic Sensors Directorate, Fort Belvoir, Virginia, May 24, 2002. Available in NVTHERM at www.ontar.com, 2003.
TABLE 3.1 Values of W

Temporal hour interval    Relative weight (W)
Until –4                  0.11
–4 to –3                  0.11
–3 to –2                  0.07
–2 to –1                  0.08
–1 to 0                   0.06
Sunrise
0 to 1                    0.05
1 to 2                    0.1
2 to 3                    0.51
3 to 4                    0.75
4 to 5                    0.95
5 to 6                    1
6 to 7                    0.9
7 to 8                    0.8
8 to 9                    0.59
9 to 10                   0.32
10 to 11                  0.22
Sunset
11 to 12                  0.10
12 to 13                  0.08
Over 13                   0.13
FIGURE 3.2 Cumulative probability of maritime C_n^2. Data represent the results of experiments with a horizontal view near the ground.
FREE-SPACE LINK MARGINS
The atmosphere has a distinct impact on the performance of terrestrial laser communications. The following data indicate the relative impact of different conditions.
Discussion
Atmospheric absorption, scatter, and scintillation all decrease the SNR and, if bad enough, eliminate the ability of an electro-optical system to detect a target or send information to another location. Table 3.2 gives some guidelines for the link margins suitable for various weather types. These figures are highly subjective, as all of these weather conditions, in the real world, can be "clumpy," both spatially and temporally, and such definitions are often in the eye of the beholder, which doesn't see in the 1550-nm bandpass. The user must take these with a large grain of salt, but they provide a good starting point.

TABLE 3.2 Suitable Link Margins

Weather condition                              Required link margin, dB/km
Urban haze                                     0.5
Typical rainfall                               3
Heavy rainfall                                 6
Typical snow, heavy downpour, or light fog     10
White out snowfall or fog                      20
Heavy to severe fog                            30–120
The practitioner is encouraged to get the local weather statistics for his link to determine the link margin needed for a given locale. Obviously, Adelaide and Tucson will need a lower margin for a given reliability than Seattle or Halifax. The above link margins are for wavelengths of 1550 nm. Visible wavelengths perform slightly worse, and the long-wave infrared (LWIR) slightly better. Reference 1 gives the first five entries. The last entry is derived from the author's experience.
Reference 1. R. Carlson, “Reliability and Availability in Free Space Optical Systems,” Optics in Information Systems, SPIE Press 12(2), October 2001.
FRIED PARAMETER
The Fried parameter is computed as follows:

Fried parameter = r_0 = [ 0.423 k^2 sec β ∫0^L C_n^2(z) dz ]^(–3/5)

where
k = propagation constant of the light being collected, k = 2π/λ
β = zenith angle of the descending light waves
L = path length through which the light is collected
C_n^2 = atmospheric refractive structure function, discussed elsewhere in this chapter
λ = wavelength
sec = secant function (the reciprocal of the cosine function)
z = dummy variable that represents the path over which the light propagates
Discussion
Fried has developed a useful characterization of the atmosphere. He has computed a characteristic scale of an atmospheric path, commonly referred to as r0 (and now known as Fried's parameter). It is pronounced "r zero." It has the property that a diffraction-limited aperture of diameter r0 will have the same angular resolution as that imposed by atmospheric turbulence. Clearly, for any path over which C_n^2 is constant we get

r_0 = ( 0.423 k^2 sec β C_n^2 L )^(–3/5)
For a vertical path that includes the typical profile of C 2n , the value of r0 is about 15 cm. Fried derived this expression using the Kolmogorov spectrum for turbulence. Continued development of the theory has led to new approximations and more accurate characterizations of the impact of the atmosphere on light propagation. This is particularly true for performance evaluation of space and aircraft remote sensors. Astronomical telescopes and laser applications such as optical communication have also benefitted. Proper characterization of C 2n is necessary to get a good estimate of Fried’s parameter. Note also that there is a wavelength dependence in the results, hidden in the parameter k, which is equal to 2π/λ. Unfortunately, characterization of C 2n is an imprecise empirical exercise. Astronomical sites measure it in a limited way, but it varies with location, season, and weather conditions, so it is usually only approximated. Of course, attention must be paid to using the correct units. Because C 2n is always expressed as m–2/3, meters are the preferred units. This rule provides a convenient characterization of the atmosphere that is widely used in atmospheric physics, including communications, astronomy, and so on. Properly applied, it provides a characterization of the maximum telescope size for which the atmosphere does not provide an impediment that blurs the spot beyond the diffraction limit. That is, a small enough telescope will perform at its design limit even if the presence of the atmosphere is taken into account. The Fried parameter is often used in adaptive optics to determine the required number of active cells and the number of laser-generated guide stars necessary for some level of performance. Fried’s parameter continues to find other uses. For example, the resolved angle of a telescope can be expressed as approximately λ/r0, or about 3.3 µrad for an r0 of 15 cm and a wavelength of 0.5 µm. Note that this result is consistent with those in the rule, “Atmospheric Seeing,” in Chap. 2, “Astronomy.” Convection, turbulence, and varying index of refraction of the atmosphere distort and blur an image, limiting its resolution. The best “seeing” astronomers can obtain (on good nights, at premier high-altitude observatories such as Mauna Kea) is on the order of 0.5 to 1.5 µrad. This limit is regardless of aperture size. It is not a question of diffraction limit but one of being “atmospheric seeing limited” by atmospheric effects. The seeing tends to improve with increasing altitude and wavelength. The Fried parameter is the radius in which the incoming wavefront is approximately planar. In the visible, it ranges from about 3 to 30 cm. The Fried parameter is strongly spatially and temporally dependent on the very localized weather at a given location, and it varies with the airmass (or the telescope slant angle). It also can be affected by such localized effects as the telescope dome and air flow within the telescope. Moreover, the Fried parameter can vary across the aperture of a large telescope. Moderate-size telescopes (say less than 5 m in aperture), operating at 10 µm or longer, tend to be diffraction limited. Stated another way, the Fried parameter in those cases exceeds, or at least equals, the telescope aperture. The amateur astronomer’s 5- to 10-inch aperture telescope is about as big as a telescope can be before atmospheric effects come into play, if the local environment (city lights and
so forth) is not a factor. The really large telescopes have the same angular resolution, because they too are affected by the atmosphere. Of course, the big telescopes are not in your backyard but are sited where atmospheric effects are as insignificant as possible. In addition, large telescopes collect light faster than smaller ones and thus allow dimmer objects to be seen in a reasonable length of time. One method to correct for this atmospheric distortion is to employ a wavefront sensor to measure the spatial and temporal phase change on the incoming light, and to use a flexible mirror to remove the distortions that are detected, in essence removing the atmospheric effects in real time. The wavefront sensor can be a Shack-Hartmann sensor, which is a series of lenslets (or subapertures) that "sample" the incoming wavefront at the size of (or smaller than) the Fried parameter. The size of the wavefront sensor subaperture and the spacing of the actuators that are used to deform an adaptive mirror can be expected to be less than the Fried coherence cell size. The diameter of the telescope divided by the Fried parameter indicates the minimal number of subapertures needed. The optimal size of the wavefront spacing and correction actuators seems to be between 0.6 and 1.0 times the Fried cell size. More details on this topic are covered in other rules in this chapter. Reference 2 notes that r0 may be estimated by a number of practical methods, the simplest of which relies on measurement of image motion.

r_0 = 0.346 [ λ^2 / (σ^2 D^(1/3)) ]^(3/5)

where
D = telescope aperture diameter
σ = rms angular image motion
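A small Python sketch of the constant-Cn^2 form of the rule is given below; it is an illustration under the stated assumption (constant Cn^2 along the path), and the function name is not from the references.

import math

def fried_parameter(cn2, path_length_m, wavelength_m, zenith_angle_rad=0.0):
    """r0 (meters) for a path with constant Cn^2, per the constant-Cn^2 form above."""
    k = 2.0 * math.pi / wavelength_m
    sec_beta = 1.0 / math.cos(zenith_angle_rad)
    return (0.423 * k ** 2 * sec_beta * cn2 * path_length_m) ** (-3.0 / 5.0)

# Example: a 5-km horizontal path with Cn^2 = 1e-14 m^(-2/3) at 0.5 um gives r0 near 8 mm
# print(fried_parameter(1e-14, 5000.0, 0.5e-6))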
References
1. C. Aleksoff et al., "Unconventional Imaging Systems," in Vol. 8, Emerging Systems and Technologies, S. Robinson, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 132, 1993.
2. D. S. Acton, "Simultaneous Daytime Measurements of the Atmospheric Coherence Diameter r0 with Three Different Methods," Applied Optics, 34(21), pp. 4526–4529, July 20, 1995.
INDEX OF REFRACTION OF AIR
The index of refraction of air can be approximated by

n = 1 + 77.6 (1 + 7.52 × 10^-3 λ^-2) (P/T) × 10^-6

where
P = pressure in millibars
λ = wavelength in microns
T = temperature in kelvins1
Discussion Algorithms for expressing the index of refraction of air can be very important for computing ray traces related to imaging through the atmosphere, particularly descriptions of color properties of the atmosphere (rainbows, glory, and so on). In addition, because the optical
effect of turbulence in the atmosphere derives from variations in the index of refraction, the above expression can be useful in scaling C_n^2, as described in Ref. 1. The reference points out that fluctuations in the index depend on wavelength and temperature according to

∆n = 77.6 (1 + 7.52 × 10^-3 λ^-2) (P/T^2) × 10^-6 ∆T

This result is obtained by simply taking the derivative of the equation in the rule. This type of result can be useful in dealing with observed scintillation and pointing jitter induced by local temperature fluctuations as described in other rules in this chapter. In addition, one can estimate the change in index as pressure or temperature changes occur. An experimenter who is dealing with changing weather conditions while trying to perform long-path experiments will find application of this simple rule. To make these estimates complete, we include a simple method of estimating the density (ρ) of air from Ref. 2.

ρ = 1.286 – 0.00405 T

Here, density is in kg/m^3, and T is in degrees Celsius. Sarazin3 suggests a small correction to the first equation, resulting in

n = 1 + 77.6 (1 + 7.52 × 10^-3 λ^-2) (P/T) [1 + 4810 e/(PT)] × 10^-6

where e = water vapor pressure in millibars

Another approach4 is to define the index as a function of frequency rather than wavelength.

N = (n – 1) × 10^6 = [ 237.2 + 526.3 ν1^2/(ν1^2 – ν^2) + 11.69 ν2^2/(ν2^2 – ν^2) ] (P_dry/T)

where
N = the refractivity
ν = the wave number in cm^-1
ν1 = 114,000 cm^-1
ν2 = 62,400 cm^-1
P_dry = dry air pressure in kilopascals
T = temperature in kelvins

Finally, Ref. 5 compares a number of different strategies for estimating the index of refraction of dry air and water vapor. They use the concept of reduced refraction, A(λ), to simplify the equations. Reduced refraction uses the fact (seen above) that the index usually is of the form

n_λ – 1 = (P/T) A(λ)

In what follows, we will provide the formulae for A(λ). For dry air (using the subscript D), we can use either of the following formulae:

A_D1 = 2.84382 × 10^-9 [ 8342.13 + 2406030/(130 – 1/λ^2) + 15997/(38.9 – 1/λ^2) ]

A_D2 = 2.69578 × 10^-9 ( 28760.4 + 1.36/λ^4 + 162.88/λ^2 )

These two algorithms match each other and the one provided in the rule almost exactly. Because the version in the rule is the simplest of the three, it is the one that most people will want to use. The reduced refractivity for water (subscript W) is provided by either of the two following formulae:

A_W1 = 2.84382 × 10^-9 ( 24580.4 + 1.36/λ^4 + 162.88/λ^2 )

A_W2 = 2.24756 × 10^-7 ( 295.235 + 0.004028/λ^6 – 0.03238/λ^4 + 2.6422/λ^2 )

The reader should be alert to the fact that the two forms above differ by about 3 percent. This difference seems not to have been resolved by the community, as both versions are in use. Using the formulas presented using the A(λ) formulation, we can compute the index of refraction as

n_λ = 1 + (1/T) [ A_D(λ) P_D + A_W(λ) P_W ]

where
D and W = subscripts in the equations above
P_D and P_W = the partial pressure of dry air and water vapor, respectively
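A minimal Python sketch of the simplest formula in this rule, with the optional humidity correction noted above, follows; the function name and example inputs are illustrative.

def air_index(wavelength_um, pressure_mb, temp_k, water_vapor_mb=0.0):
    """Index of refraction of air from the rule; optional water vapor correction."""
    dispersion = 1.0 + 7.52e-3 * wavelength_um ** -2
    n_minus_1 = 77.6 * dispersion * (pressure_mb / temp_k) * 1e-6
    if water_vapor_mb:
        n_minus_1 *= 1.0 + 4810.0 * water_vapor_mb / (pressure_mb * temp_k)
    return 1.0 + n_minus_1

# Example: dry air at sea level (1013.25 mb, 288.15 K) at 0.5 um
# print(air_index(0.5, 1013.25, 288.15))   # about 1.000281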
References 1. W. Brown et al., “Measurement and Data-Processing Approach for Estimating the Spatial Statistics of Turbulence-Induced Index of Refraction Fluctuations in the Upper Atmosphere,” Applied Optics, 40(12), p. 1863, April 20, 2001. 2. D. L. Hutt, “Modeling and Measurements of Atmospheric Optical Turbulence over Land,” Optical Engineering, 38(8), pp. 1288–1295, August 1999. 3. M. Sarazin, Atmospheric Turbulence in Astronomy, 2001, available at www.eso.org/astclim/espas/iran/zanjan/sanjan01.ppt, 2003. 4. G. Kamerman, “Laser Radar,” in Vol. 2, Atmospheric Transmission, M. Thomas and D. Duncan, Eds., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 88, 1993. 5. S. van der Werf, “Ray Tracing and Refraction in the Modified U.S. 1976 Atmosphere,” Applied Optics, 42(3), p. 354–366, January 20, 2003.
THE PARTIAL PRESSURE OF WATER VAPOR
The following equation shows the partial pressure of water vapor as a function of air temperature and relative humidity:

P = 1.333 RH {[(C3 Tc + C2) Tc + C1] Tc + C0}

where
P = partial pressure of water vapor in millibars
RH = relative humidity
C0 = 4.5678
C1 = 0.35545
C2 = 0.00705
C3 = 3.7911 × 10^-4
Tc = air temperature in degrees Celsius
Discussion
The amount of water in the path length affects nearly all wavebands by reducing the transmission. Additionally, a number of rules related to upwelling and downwelling radiation in the atmosphere depend on knowing the partial pressure of water vapor. Upwelling and downwelling describe the flow direction of radiation. Scattered sunlight (as might be produced by a thick cloud cover) is downwelling, whereas reflections from the surface are upwelling. The rule matches the observed distribution by fitting data with a least-squares curve. The partial pressure is useful in a number of applications. For example, the same reference shows that the downwelling radiance onto a horizontal plane is proportional to the square root of the partial pressure of water vapor. This rule gives immediate results for estimating the partial pressure of water vapor. It clearly shows that as the temperature rises, the partial pressure increases rapidly, and vice versa. In fact, review of the equation shows that the increase goes as Tc^3 for one term and Tc^2 for another.
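The polynomial is trivial to evaluate in code. The Python sketch below treats RH as a fraction between 0 and 1, an interpretation consistent with the observation that RH = 1 then reproduces the saturation vapor pressure (about 23 mb at 20°C); the reference simply says "relative humidity," so this scaling is an assumption.

def water_vapor_pressure_mb(rh_fraction, temp_c):
    """Partial pressure of water vapor (mb) from the polynomial above.
    rh_fraction is assumed to run from 0 to 1 (see note in the lead-in)."""
    c0, c1, c2, c3 = 4.5678, 0.35545, 0.00705, 3.7911e-4
    poly = ((c3 * temp_c + c2) * temp_c + c1) * temp_c + c0
    return 1.333 * rh_fraction * poly

# Example: 50 percent relative humidity at 25 deg C
# print(water_vapor_pressure_mb(0.5, 25.0))   # roughly 16 mb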
Reference 1. D. Kryskowski and G. Suits, “Natural Sources,” in Vol. 1, Sources of Radiation, G. Zissis, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 147, 1993.
PHASE ERROR ESTIMATION
The maximum phase error induced by the atmosphere can be written as

maximum phase error = 0.57 k^2 z C_n^2 D^(5/3)

where
z = distance through which the aberrated wave propagates
C_n^2 = atmospheric structure function
D = aperture diameter
k = wave propagation constant, 2π/λ
Discussion
The stroke of actuators in a deformable mirror system must be able to accommodate the phase errors induced by the atmosphere and aberrations in the telescope. The rule shown here compares with a discourse by Tyson2 in which he shows the phase effects of various optical aberration terms induced by the atmosphere. Some algebra shows that what is shown above (as well as the terms described by Tyson) can be put in a form that includes the ratio of the aperture diameter and Fried's parameter, r0. Fried's parameter is described in another rule. We can compare the various results using the following equation:

σ^2 = n (D/r_0)^(5/3)

We will assume that r0 is expressed as [0.423 k^2 C_n^2 z]^(–3/5), the value it takes on for constant C_n^2. Tyson shows the following values for n:

Variance in phase (σ^2)                                                        Value of n
Piston                                                                         1.0299
One-dimensional tilt                                                           0.582
Two-dimensional tilt                                                           0.134
Focus                                                                          0.111
The rule above, taking into account that it is described as the maximum
(which we might assume is 3σ)                                                  0.256

Thus, we see that there is not complete agreement between the rule and Ref. 2, but all values fall in the same range. Consider the case in which D = 1 m, C_n^2 is 2 × 10^-14 m^-2/3, L = 5000 m (approximately the distance in the atmosphere over which turbulence is a significant factor), and λ = 1 µm. From these numbers, we get about 35 radians (about 6 waves or about 3 to 6 µm) for the maximum piston stroke needed to accommodate the effects of the atmosphere.
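A hedged numerical check of the example above is sketched in Python below. Because the comparison table lists variances (σ^2), the sketch treats the rule's expression as a variance in rad^2 and reports its square root as the rms phase; that square root, a few tens of radians, is what corresponds to the several-wave (few-micrometer) stroke quoted in the text.

import math

def atmospheric_phase_term(aperture_m, cn2, path_m, wavelength_m, coeff=0.57):
    """Evaluate coeff * k^2 * z * Cn^2 * D^(5/3) from the rule above."""
    k = 2.0 * math.pi / wavelength_m
    return coeff * k ** 2 * path_m * cn2 * aperture_m ** (5.0 / 3.0)

# Example from the text: D = 1 m, Cn^2 = 2e-14 m^(-2/3), z = 5000 m, lambda = 1 um
variance = atmospheric_phase_term(1.0, 2e-14, 5000.0, 1.0e-6)
rms_phase_rad = math.sqrt(variance)          # interpreted as a variance (see lead-in)
rms_phase_waves = rms_phase_rad / (2.0 * math.pi)
# print(rms_phase_rad, rms_phase_waves)      # tens of radians, i.e., several waves of stroke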
References 1. R. Mali et al., “Development of Microelectromechanical Deformable Mirrors for Phase Modulation of Light,” Optical Engineering, 36(2), pp. 542–548, February 1997. 2. R. Tyson, Principles of Adaptive Optics, Academic Press, New York, p. 79, 1991.
SHACK-HARTMANN NOISE
The variance in the wavefront resulting from nonperfect sensing in a Shack-Hartmann sensor can be expressed as1

var = (0.86π)^2 (L/r_0)^2 [ K_b + K_s + (K_N N_d)^2 ] / K_s^2  rad^2

where
K_s = average number of detected signal photons in each Hartmann spot
K_b = number of background photons detected in each subarray
K_N = read noise in each pixel
N_d = number of detectors in each subarray
L = outer scale of the atmosphere
r_0 = Fried's parameter (defined elsewhere in this book)
Discussion
The Shack-Hartmann sensor is widely used to determine the wavefront error to be corrected by an active optical system. The sensor divides the telescope pupil into a number of small areas (or subapertures) and determines the wavefront tilt at each area. The ensemble of tilts measured in each subaperture is used to fit the overall wavefront shape and to command the correction system. Noise or other error corrupts the tilt measurements and leads to incomplete correction of wavefront error. To implement such a system, groups of pixels in a typical focal plane are assigned to a particular location in the pupil. Each group of pixels (a subarray) represents a point in the array of samples of the wavefront sensor. Each point in the array collects light over an area called the subaperture. A wavefront entering such a subaperture is imaged in the part of the focal plane assigned to it, falling in the center of the subarray only if there is no tilt. A tilted wavefront will be brought to focus in a noncentered location. The center of light of the spot that is formed is determined by measuring the intensity of the light in each pixel and performing a center of mass calculation. The location of the center of the spot indicates the two-dimensional tilt for the particular subaperture. The outer scale describes a feature of the turbulence of the atmosphere that indicates the source of the kinetic energy of the atmosphere. In most cases, the outer scale at low altitude is about one-half the altitude, becoming 100 m or more away from the ground where free flow occurs. Fried's parameter indicates the lateral spatial scale over which the phase of rays arriving at the sensor are about the same. The formula below is a simpler version that explicitly shows the role of signal-to-noise ratio in determining the variance in the measurements. The noise equivalent angle of the individual tilt measurements is2

σ_tilt^2 = 0.35 λ^2 / [ d_s^2 (SNR_v)^2 ]

where
σ_tilt = rms tilt error
λ = wavelength
d_s = subaperture diameter
SNR_v = voltage signal-to-noise ratio
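The simpler tilt-noise expression is easy to evaluate; the short Python sketch below is illustrative only, and the parameter values in the example are assumptions.

import math

def tilt_noise_variance(wavelength_m, subaperture_m, snr_voltage):
    """Noise-equivalent tilt variance (rad^2) from the simpler expression above."""
    return 0.35 * wavelength_m ** 2 / (subaperture_m ** 2 * snr_voltage ** 2)

# Example: 0.6-um light, 10-cm subapertures, voltage SNR of 20
# print(math.sqrt(tilt_noise_variance(0.6e-6, 0.10, 20.0)))   # rms tilt near 0.2 urad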
The first figure, Fig. 3.3, is an image of the subaperture spots taken at a zenith angle of 33° on the 3.5-m Apache Point Observatory. Note that the spots vary in position over the pupil. That is, they do not form a uniform grid. The locations of the spots provide the information that indicates what type of tilt imperfections are present in the subaperture of the wavefront. The second figure, Fig. 3.4, shows the resulting wavefront error mapped across the pupil. In this figure, the wavefront error is shown as an optical path error (in nanometers) for each position on the pupil. This is called a zonal representation of the error. In some cases, it is useful to convert the zonal representation into its decomposition in terms of Zernike polynomial coefficients, which is called a modal representation. This type of data can be input to a computer to command positioners to deform a mirror (for phase conjugation) to reduce these wavefront errors. The result is a much cleaner image.
FIGURE 3.3 Image of subaperture spots.3 A close look reveals that the spots do not form a uniform grid. This results from the wavefront tilt that is being measured.
FIGURE 3.4 The resulting wavefront error mapped across the pupil as derived from the wavefront error present in the SH results shown above.3
References 1. D. Dayton, M. Duncan, and J. Gonglewski, “Performance Simulations of a Daylight LowOrder Adaptive Optics System with Speckle Postprocessing for Observation of Low-Earth Orbit Satellites,” Optical Engineering, 36(7), pp. 1910–1917, July 1997. 2. R. Tyson and P. Ulrich, Adaptive Optics, Ch. 2 of Vol. 8 of the Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 215, 1993. 3. www.Astro.Washington.edu/morgan/APO/SH-measurements/initial_Trials/report.html, 2003.
VERTICAL PROFILES OF ATMOSPHERIC PARAMETERS
Using the following functional form, one can estimate vertical profiles of temperature, pressure, and humidity.

f(h) = f(0) exp(–ah – bh^2)

where f(0) represents the surface value of each parameter and h is the height in kilometers. Note that the pressure is in millibars.

Atmospheric parameter    a (km^-1)    b (km^-2)
Humidity (g/m^3)         0.308        0.05
Temperature (K)          0.01464      0.00081
Pressure (mB)            0.11762      0.00109
Discussion The fact that there is a strong exponential component to these models will not be a surprise to anyone who has studied the thermodynamics of the atmosphere. Both pressure and temperature can be evaluated for an equilibrium atmosphere by investigating the impact that gravity has on the pressure profile. Further analysis of the entropy and molecular properties of the atmospheric constituents leads to the conclusion that both pressure and temperature exhibit an idealized exponential change with altitude. Modeling the vertical profile of water vapor is not so easily done, but it should be clear that the thermodynamics of water, particularly the exchange of energy that generates changes in phase from liquid to vapor to solid (ice), are entwined with the temperature profile. The typical exponential property of the atmosphere can be seen in the first term of the equation. The quadratic term assists in fitting data used in MODTRAN calculations. The approximation works well up to 4 km altitude in a marine environment.
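A minimal Python sketch of the profile model, with the table's coefficients, is given below; the dictionary keys and function name are illustrative only.

import math

# Coefficients from the table above: a in km^-1, b in km^-2
PROFILE_COEFFS = {
    "humidity": (0.308, 0.05),          # surface value in g/m^3
    "temperature": (0.01464, 0.00081),  # surface value in K
    "pressure": (0.11762, 0.00109),     # surface value in mb
}

def vertical_profile(surface_value, h_km, parameter):
    """f(h) = f(0) exp(-a h - b h^2); stated to work well up to about 4 km (marine)."""
    a, b = PROFILE_COEFFS[parameter]
    return surface_value * math.exp(-a * h_km - b * h_km ** 2)

# Example: pressure at 2 km altitude for a 1013-mb surface value
# print(vertical_profile(1013.0, 2.0, "pressure"))   # about 800 mb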
Reference 1. F. Hanson, “Coherent Laser Radar Performance in Littoral Environments—A Statistical Analysis Based on Weather Observations,” Optical Engineering, 39(11), pp. 3044–3052, November 2000.
VISIBILITY DISTANCE FOR RAYLEIGH AND MIE SCATTERING Rayleigh scattering is the only significant type of scattering that occurs when the visibility is greater than 10 to 12 km.
Discussion
When the relative humidity is greater than roughly 75 percent, aerosols (haze) grow into the size range for Mie scattering. As a rule of thumb, Mie scattering is the type of scattering that reduces visibilities below the criterion for unrestricted visibility (<10 km). Some other impacts of various weather conditions appear in Table 3.3. As is usually the case, the impact of weather is less and less important as one goes to longer wavelengths.

TABLE 3.3 Other Impacts of Various Weather Conditions

Weather conditions   Visible and near infrared   Shortwave infrared   Mediumwave infrared   Longwave infrared   Submillimeter wave (T waves)
Low visibility       Severe                      Moderate             Low                   Low                 None
Rain/snow            Moderate                    Moderate             Moderate              Moderate            Moderate/low
High humidity        Low                         Low                  Moderate              Moderate            Low/none
Dust                 Severe                      Moderate/severe      Moderate              Moderate            Low/none
Fog                  Severe                      Severe               Moderate              Low                 None
Wind increases the amount of heat lost from all objects and reduces contrast for all thermal-infrared target scenes, but it has little impact on the visible contrast other than the higher C_n^2, as noted in another rule in this chapter. In other words, the greater the wind speed, the smaller the thermal differences between targets and backgrounds. The degree to which wind cools a target varies the most for low wind speeds, generally less than 5.1 m/sec (10 kt). A 4.1 m/sec (8-kt) wind cools a target much more than a 2.1 m/sec (4-kt) wind, but a 25.5 m/sec (50-kt) wind speed does not have much more effect than a 10.2 m/sec (20-kt) wind. Although the effect is not as pronounced at generally higher wind speeds, getting the wind speed forecast correct is essential.
Chapter 4
Backgrounds
A typical problem with most electro-optical sensors is that backgrounds can be complex and can include influences that reduce the chances of discerning targets that are superimposed on such backgrounds. The backgrounds against which targets are viewed are often as bright as, and sometimes brighter than, the targets themselves. This complicates and sometimes prevents the task of detection. To detect targets, the sensed background noise (or clutter) levels must be different from the sensed target levels and able to be processed so as to provide acceptable false detection rates. In addition, the background may include spatial variation that includes a size distribution that matches that of the target. However, spatial variation in the background at all spatial frequencies will have a negative impact on the ability of a sensor/human to find the target, as we shall illustrate below. In general, it is desirable for any sensor to possess the capability of detecting targets in a variety of environments and backgrounds. Borrowing from the radar community, the spatial and temporal amplitude variation in the background is usually referred to as clutter. Clutter is a structured background phenomenon that is not spatially or temporally constant. It is not a process that allows the signals to be combined in a root-sum-squared (RSS) fashion; rather, the signals must be simply added to other noise sources. From elementary statistics, we know that adding noise sources creates a total that is larger than would be the case if the terms were added in an RSS manner. Clutter cannot be simply filtered out via elementary signal processing used to eliminate the DC background; it requires more sophisticated techniques of image processing as described below. The physical sources of clutter for the chosen bandpass may include large weather fronts, Sun glints, clouds and cloud edges, variations in water content, lakes, certain bright ground sources, and variances in emissivity, reflectivity, and temperature. For viewing from space, the altitude of the clutter sources will depend on the wavelength of operation. Only at wavelengths that penetrate the atmosphere will ground sources be detected by a space-based sensor, but they are present for most terrestrial or down-looking airborne sensors. Examples of how clutter prevents detection are easily observed. A poppy seed dropped on a white piece of paper is easy to find. The same seed dropped on a Jackson Pollock painting is almost impossible to detect. In addition to the effect induced by clutter, detection systems must also deal with a background effect that is analogous to trying to detect the seed on a gray piece of paper.
Spatial, spectral, morphological, and temporal bandpass filtering are commonly used techniques in contemporary sensor systems to reduce the effects of unwanted backgrounds. Spectral bandpass filters can reduce the resulting DC background level and restrict the light entering the sensor to a target signature that includes reduced amounts of clutter. Spatial filters can be used to reduce or attenuate the spatial frequency components of clutter that do not match the approximate target size. Figure 4.1 illustrates the target, background, and clutter issue. Figure 4.1a is a high-background, cluttered scene with multiple targets. Figure 4.1b represents the scene after
FIGURE 4.1 Illustrated effects of background and clutter. (a) Original notional image with targets, constant (DC) background, and clutter. (b) Notional scene with the DC background subtracted. (c) Scene after clutter rejection algorithms are applied. (d) Scene with automatic target identification and IFF algorithms applied.
simple background subtraction. Figure 4.1c is after spatial filtering to reduce clutter. Any clutter that is also present in the bandpass will not be highly attenuated. Targets of the size of the clutter are still difficult to identify in this picture. Clutter that passes through the bandpass is called clutter leakage. Clutter leakage is the artifact that results in false detections that must be handled by the human-machine decision system. Figure 4.1d shows the scene after the automatic target recognition system applies its display symbology. Although not all of the rules are related to clutter and its effects, clutter remains one of the most difficult problems encountered in the design of EO tracking systems. This chapter includes several rules that deal with various sources of background radiation. Some rules relate to the levels of typical backgrounds such as the sky. The environment of the target and the characteristics of the sensor system determine the ability of the sensor system to detect targets of interest. An ideal sensor would be able to measure the signature of a target without any noise contributions from the sensor itself. However, ideal sensors do not exist. In addition to the sensor noise, there may be clutter in the scene being viewed by the sensor. This scenario occurs in space-based sensors viewing targets with Earth as the background. In fact, sensor technology is sufficiently advanced that typical sensors viewing a scene with Earth background will be background clutter limited rather than sensor noise limited. In the case of nonboosting targets, the clutter level in an Earth background scene is likely to be so large that the target may not be detectable until well resolved. Sources for additional information on backgrounds include The Infrared and Electro-Optical Systems Handbook, Spiro and Schlesinger's Infrared Technology Fundamentals, various IRIA databases (check their web site at http://www.iriacenter.org) and publications, SPIE, and the excellent (although restricted) Military Sensing Symposia (MSS, formerly IRIS) proceedings.
CLUTTER AND SIGNAL-TO-CLUTTER RATIO
1. Clutter is typically defined as the variance in a scene. Schmieder and Weathersby1 provide the following metric:

rms clutter variance = [ (1/N) Σ (i = 1 to N) σ_i^2 ]^(1/2)

where
N = number of blocks (or search areas) in the image
σ_i = standard deviation of gray shades in the ith block of pixels

2. The signal-to-clutter ratio (SCR) can be written as

SCR = (I_t – I_b) / (rms clutter variance)

where
I_t = maximum target value (in the same units as I_b and the clutter variance, e.g., all in digitized counts, or photons)
I_b = mean of the background pixels in a region near the target and of an area equal to the size of the spatial filter, or twice the area of the target if the spatial filter is unknown or not used
Discussion
Clutter is hard to define but easy to recognize; it means different things to different sensors and users and is illustrated in the introduction to this chapter. Originally a radar term, it has taken its place as a critical background characteristic for electro-optical sensors. Reference 2 suggests that clutter can be partitioned into three distinct regions:
■ High clutter, where the SCR is <1
■ Moderate clutter, where the SCR is between 1 and 10
■ Low clutter, where the SCR is greater than 10
Reference 3 suggests calculating σ_i for an area that is twice the area of the target. When divided by N, this results in higher weights for any clutter that happens to be the size of the target (often the most pesky spatial component of the clutter). Clutter can be the bane of high-sensitivity imaging systems and automatic target recognition algorithms. The more sensor sensitivity that is needed to detect the target against the noise, the larger the clutter problem. Sophisticated image processing is the only way to deal with clutter in a single-band system. Clutter can filter through the best algorithms and cause a false alarm. If this occurs frequently enough, the user will turn the system off, diminishing the usefulness of the sensor, and future upgrades will be cancelled. Frequently, clutter will not be correlated from one spectral bandpass to another. Thus, if one employs a multispectral system, and the target is somewhat correlated across the bands but the background is not, clutter can easily be negated. Therein lies much of the power of multispectral and hyperspectral systems.
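The two metrics above are straightforward to compute on digital imagery. The following Python sketch is a minimal illustration; the block size, and the use of twice-target-sized blocks suggested by Ref. 3, are left to the caller.

import numpy as np

def rms_clutter(image, block=16):
    """Schmieder-Weathersby metric: root of the mean of per-block variances."""
    rows = image.shape[0] // block
    cols = image.shape[1] // block
    variances = []
    for r in range(rows):
        for c in range(cols):
            cell = image[r * block:(r + 1) * block, c * block:(c + 1) * block]
            variances.append(np.var(cell))
    return np.sqrt(np.mean(variances))

def signal_to_clutter(target_peak, background_mean, clutter):
    """Signal-to-clutter ratio per the rule above."""
    return (target_peak - background_mean) / clutter

# Example with synthetic data
# img = np.random.normal(100.0, 5.0, (128, 128))
# c = rms_clutter(img)
# print(c, signal_to_clutter(150.0, 100.0, c))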
References 1. D. Schmieder and M. Weathersby, “Detection Performance in Clutter with Variable Resolution,” IEEE Trans. Aerospace and Electronic Systems, 19(4), p. 622–630, 1983.
Backgrounds
73
2. M. Kowalczyk and S. Rotman, “Sensor System Psychophysics,” in Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 28-3 to 28-21, 2000. 3. K. Brunnstrom, B. Schenkman, and B. Jacobson, “Object Detection in Cluttered Infrared Images,” Optical Engineering, 42(2), pp. 388–399, 2003. 4. T. Edwards, R. Vollmerhausen, and R. Driggers, “NVEOSD Time-Limited Search Model,” Proc. SPIE, Infrared Imaging Systems, Analysis, Modeling, and Testing XIV, Vol. 5076, 2003.
CLUTTER PSD FORM
The clutter power spectral densities (PSDs), a.k.a. Wiener spectra, usually follow the following form:

PSD(f) = C (1/f)^n

where
PSD(f) = power spectral density of the clutter as a function of spatial frequency. Usually, it has the units of in-band (W/cm^2/sr/µm) per cycles/km in one dimension, squared for two dimensions.
C = a constant, usually in the range of 10^-7 to 10^-2 for a PSD expressed in watts (as opposed to photons/sec).
f = spatial frequency (magnitude of the items causing the clutter when converted into spatial frequency; that is, the Fourier transform of their size), usually in cycles per kilometer.
n = a "constant," usually between 1 and 3. The choice of the constant depends on the background conditions and the range of spatial frequencies a particular sensor detects on the background.

The interested reader should also review the rule, "Spencer's Signal-to-Clutter Ratio" (p. 83).
Discussion
Use this rule with caution, as it is a great generalization for most cases. One-dimensional PSDs do not reflect the nonisotropic structure of the real three-dimensional world. Procedures for estimating two-dimensional PSDs from one dimension can be found in The Infrared and Electro-Optical Systems Handbook. Usually, they entail incrementing the power (above constant) by +1. This rule is useful for quick estimations of clutter in the absence of real data, drawing curves through one or two points, estimating the clutter rejection capability needed, and estimating the effects of pixel field of view on clutter-induced noise. Atmospheric attenuation of clutter should be considered. Absorption across the bandpass may be included in the PSD, but usually it is not. The user should estimate a range to the clutter and estimate the atmospheric attenuation across the expected path length and include this effect. Clutter is very bandpass sensitive, regardless of the "per micron" in the units. Using a PSD from an SWIR band to estimate LWIR clutter, or from a UV band to estimate visible clutter, is rarely accurate, as the mechanisms causing the clutter are different. The power spectral density function is frequently used to describe background clutter. These can be one or two dimensional. For the one-dimensional models, the slope of the clutter PSD usually falls as 1/f^2 to 1/f^3 at lower frequencies (for clutter-causing phenomena that subtend a large angle) and 1/f^1 to 1/f^1.5 at higher spatial frequencies.
EARTH’S EMISSION AND REFLECTION There is a big difference in the brightness of Earth backgrounds between day and night when viewed from space (or a high altitude) in the UV, visible, and SWIR bands. However, there is almost no difference for wavelengths beyond about 4 or 5 µm.
Discussion
The relative contributions of the day and night backgrounds of Earth are of concern in dealing with background levels and clutter in the scene if dealing with downlooking space sensors or uplooking ground sensors during cloud-lit conditions. Using this rule, we find that the processing system need not be concerned with the time of day for systems operating above about 5 µm or so, but at wavelengths shorter than this, the background will be distinctly different in day versus night cases. In fact, about 99 percent of the solar irradiance is between 0.3 and 3 µm.1 Bjorn Andresen first published a theoretical paper on the tactical implications of this,2 and one of the authors (Miller) followed up with experimental confirmation.3 Beyond the CO2 absorption feature ≈4.3 µm, the component of total radiation and reflection from the Earth that results from reflected sunlight is usually less than that of Earth's blackbody emission. At 4 µm, the emission from the Sun is smaller by orders of magnitude, as compared with its peak emission (near 0.5 µm). In addition, the Earth's atmosphere becomes less transparent in this region, and the surface tends to become less reflective and more absorptive. These phenomenologies combine so that the 280 K blackbody radiation tends to dominate somewhere between 4 and 5 µm. When viewing a hotter Earth surface/atmosphere, such as looking at the equator from space in a transparent band, this crossover tends to move to shorter (say, 3.5 µm) wavelengths. Alternatively, if viewing near the poles or in a band that extends only to the cold upper atmosphere, this crossover will occur at longer wavelengths (say, 5 µm). Note that this crossover occurs right in the midst of the MWIR 3- to 5-µm atmospheric window and within the sensitivity of the popular InSb, Pt:Si, and MWIR HgCdTe detector materials. Thus, this rule has applicability to tactical ground-based systems viewing skyward to sunlit clouds and tactical aircraft systems viewing the ground. In popular MWIR bands, an observed blackbody target can easily exhibit negative contrast or (a much worse case) approach zero contrast when viewed against a reflected solar background2–4 as illustrated in Fig. 4.2. The increasing line is the blackbody emission, and the falling line is the solar contribution. For these assumptions, the crossover occurs at about 3.7 µm. By slightly shifting the assumptions, it is easy to make the crossover occur anywhere from about 3 to 5 µm. The reader is cautioned to consider the CO2 absorption feature at 4.3 µm and the effects that this may have on any diurnal MWIR observations; these are not considered in the graphic. Generally, the solar constant is about 1350 W/m2 in space or the very upper atmosphere. Nearer to the ground, it is more like 900 to 1100 W/m2 at the equator, and it falls off with latitude and weather conditions. Thus, for ground sensors looking at the ground or low clouds, the solar constant is smaller, and the crossover often occurs below 4 µm.
References 1. http://www.eppleylab.com, 2002. 2. B. Andresen and B. Levy, “MWIR Target Contrast Reduction Due to Solar Scattering,” Proc. SPIE, Vol. 2020, Infrared Technology XIX, pp. 120-130, 1993. 3. J. Miller, “Multispectral MWIR Measurements of Aircraft with a Background of Sunlit Clouds,” Proc. National Military Sensing Symposia, 46(1), November 2001.
FIGURE 4.2 Comparison of the solar background to that of the target. (From G. Sarisi, “Third Generation IR Detectors,” Laser Focus World, September 2002, © Pennwell Corp., 2000, used by permission.)
4. G. Sarisi, “Third Generation IR Detectors,” Laser Focus World, September 2002. 5. http://www.pmodwrc.ch/virgo/virgo.html, 2002. 6. J. Mooney and W. Ewing, “Characterization of a Hyperspectral Imager,” IRIS Passive Sensor Meeting, March 1998.
EFFECTIVE SKY TEMPERATURE
For thermal calculations, the temperature of the sky can be estimated using the following two rules:

1. (Refs. 1–3)

T_sky = ε_s^(1/4) × T_ambient (T in K)

where
ε_s = [0.787 + 0.764 × ln(T_dew/273)] × F_cloud, the effective emissivity
F_cloud = 1.0 + 0.024 N – 0.0035 N^2 + 0.00028 N^3
T_dew = the dew point temperature
N = the "tenths cloud cover," taking values between 0.0 and 1.0, so N is 0.1, 0.2, 0.3, and so on

2. (Refs. 4–6)

T_sky = [0.7 + 5.95 × 10^-5 δ e^(1500/T_air)]^(1/4) T_air

where
T_sky = effective blackbody broadband temperature that would give a radiant emission similar to that of the zenith sky
T_air = ambient temperature of the air
δ = water vapor pressure in millibars
Discussion
When pointing up, a sensor views through the atmosphere into space, depending on the bandpass. In the absence of any bright object in the field of view, the radiant emission sensed by the device is equal to the 3 K effective temperature of the universe plus a higher-temperature contribution from the intervening atmosphere. Idso4 treated the atmosphere as a graybody at ground-level air temperature with an emissivity that depends on temperature and humidity. Generally, for infrared bandpasses, the effective temperature of the sky is between 200 and 300 K, depending on elevation and climate. Part 2 of the rule can be further reduced to multiplying the fourth root of the emissivity by the ambient air temperature to estimate the sky temperature in the 8- to 14-µm bandpass, or

T_sky = ε^(1/4) T_air, which equals [A + B δ e^(C/T_air)]^(1/4) T_air

where
ε = sky emissivity
A = bandpass-dependent coefficient (use 0.24 for the 8- to 14-µm band)
B = another bandpass-dependent coefficient (use 2.98 × 10^-8 for the 8- to 14-µm band)
C = another bandpass-dependent coefficient (use 3000 for the 8- to 14-µm band)

For 10.5- to 12.5-µm bands, use A = 0.1, B = 3.53 × 10^-8, and C = 3000. The reader is cautioned that these equations are calculated for zenith only; one must adjust for other angles. Also, the above coefficients apply for 10.5 to 12.5 µm and 8 to 14 µm only. Sky emissivity, like atmospheric emissivity, is complex and highly dependent on bandpass, airmass, and the atmospheric constituents through which one is viewing. The reader who is interested in a discussion of methods for estimating the effective emissivity of the cloudless sky should refer to Idso4 or run an accepted computer model. The actual radiance from the atmosphere may be estimated using the MODTRAN code from Ontar.7 During the day, a clear sky is blue and thus has a much higher effective temperature for visible or short-wave sensors. Jacobs8 gives the temperature of the blue sky (facing away from the Sun) as 7880 K at 70° of elevation and 9660 K at 20°. It is interesting to note that the cloudless night sky is usually colder than the surroundings, which is not surprising, as it is transparent to much of the electromagnetic spectrum. The colder effective temperature enhances radiation from surfaces to the sky. Thus, a surface exposed to clear night sky tends to cool much more than one exposed to clouds, trees, building overhangs, and so on. This is why frost forms on exposed surfaces on clear nights but not on covered surfaces, and why gardeners cover plants to prevent frost.
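For quick estimates, both forms of Part 2 can be scripted. The Python sketch below uses the expressions as reconstructed above; the default coefficients are the 8- to 14-µm values quoted in the text, and the function names are illustrative.

import math

def sky_temperature_broadband(air_temp_k, water_vapor_mb):
    """Broadband zenith sky temperature from the second rule (Idso form)."""
    emissivity = 0.7 + 5.95e-5 * water_vapor_mb * math.exp(1500.0 / air_temp_k)
    return emissivity ** 0.25 * air_temp_k

def sky_temperature_band(air_temp_k, water_vapor_mb, a=0.24, b=2.98e-8, c=3000.0):
    """Band-limited zenith sky temperature; defaults are the 8- to 14-um coefficients."""
    emissivity = a + b * water_vapor_mb * math.exp(c / air_temp_k)
    return emissivity ** 0.25 * air_temp_k

# Example: 288 K air with 10 mb of water vapor
# print(sky_temperature_broadband(288.0, 10.0))   # near 273 K
# print(sky_temperature_band(288.0, 10.0))        # near 204 K for the 8- to 14-um band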
References 1. http://ciks.cbt.nist.gov/bentz/nistir6551/node5.html, 2003. 2. G. N. Walton, Thermal Analysis Research Program-Reference Manual, NBSIR 83-2655, U.S. Department of Commerce, March 1983, updated 1985. 3. R. Zarr, Analytical Study of Residential Buildings with Reflective Roofs, NISTIR 6228, U.S. Department of Commerce, October 1998.
4. S. Idso, “A Set of Equations for Full Spectrum and 8- to 14-µm and 10.5- to 12.5-µm Thermal Radiation from Cloudless Skies,” Water Resources Research, 17(2), pp. 295–304, April 1981. 5. S. Idso and R. Jackson, “Thermal Radiation from the Atmosphere,” Journal of Geophysical Research, Vol. 74, 5397, 1969. 6. D. Wilmot et al., “Warning Systems” in Vol. 7, Countermeasure Systems, D. Pollock, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 49, 1993. 7. PcModWin at http://www.Ontar.com, 2003. 8. P. Jacobs, Thermal Infrared Characterization of Ground Targets and Backgrounds, SPIE Press, Bellingham, WA, pp. 83–87, 1996.
EMISSIVITY APPROXIMATIONS
When viewing a vegetated surface from a remote sensor, the emissivity can be estimated as1

ε = ε_v P_v + ε_s (1 – P_v) + dε

where
ε_v = vegetation emissivity
ε_s = surface emissivity
P_v = fraction of surface covered with vegetation
P_s = fraction of surface covered with soil
dε = error in estimate of effective emissivity of the scene
Discussion
Emissivity (ε) controls the total blackbody radiation of any object according to the following equation:

exitance = ε σ T^4

where
T = temperature
σ = Stefan-Boltzmann constant

Emissivity also controls the spectral exitance in a similar fashion. It is important to know an object's emissivity when doing calculations related to target signatures, cooling time, and a myriad of other possible applications. The emissivity can cause an order of magnitude change in exitance. Optically smooth (polished) metal surfaces have emissivities of 0.01 or less. However, typical objects made of metal have emissivities closer to 0.3. In addition, a cavity, such as a jet or rocket engine, will have an effective emissivity that tends to approach unity regardless of the engine's construction material. This is discussed in detail in another rule in this chapter. Hudson2 points out, "For metals, emissivity is low, but it increases with temperature and may increase tenfold or more with the formation of an oxide layer on the surface. For nonmetals, emissivity is high, usually more than 0.8, and it decreases with increasing temperature." The reader should be aware that, in the remote sensing world, there are a great number of algorithms for estimating the emissivity of the surfaces observed by the sensor. Of course, without some sort of calibration, it is impossible to separate the effects of emissivity from those of surface temperature and the effects of the atmosphere. Some use the normalized
normalized difference vegetation index (NDVI) as a surrogate for emissivity. This parameter is derived from reflectance in the scene (corrected for atmospheric effects) to estimate the photosynthetically absorbed proportion of the radiation. Algorithmically, this is done by comparing reflectance in two channels [for example, 580 to 680 nm (channel 1) and 725 to 1100 nm (channel 2)] as follows:
NDVI = (Ch2 – Ch1)/(Ch1 + Ch2)
The next step is to note that both the emissivity and NDVI are related to the fraction of the scene that is covered by soil and vegetation. By knowing the emissivity and NDVI of the soil and vegetation (through ground truth measurements), the value of the emissivity can be computed. This is by no means the only method of computing emissivity in remote sensing applications. Yet another issue is the relative size of the absorptance of the material in solar radiation bands to the emissivity in the infrared. This ratio, often stated as α/ε, is a critical factor in defining the equilibrium temperature that is reached, particularly for objects that can cool only by radiation, such as spacecraft. For example, highly polished aluminum has a very high value of α/ε, whereas the value for titanium oxide paint is very low. In the latter case, the body temperature is quite a bit lower than that of the polished case. Finally, it is important to realize that emissivity can also be described by its directional properties, given that a surface may have different apparent emissivity as a function of viewing angle. Emissivity is also wavelength dependent for most materials. Appendix A includes a table of emissivities for selected common materials.
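A short numerical sketch (Python) of the NDVI and emissivity-mixture estimates discussed above; the channel reflectances, end-member emissivities, and the NDVI-to-vegetation-fraction scaling are illustrative assumptions, not values from the references.

```python
def ndvi(ch1, ch2):
    """NDVI from atmospherically corrected reflectances in channel 1
    (580 to 680 nm) and channel 2 (725 to 1100 nm)."""
    return (ch2 - ch1) / (ch1 + ch2)

def scene_emissivity(p_v, eps_v=0.985, eps_s=0.960, d_eps=0.0):
    """Rule-of-thumb mixture: eps = eps_v*Pv + eps_s*(1 - Pv) + d_eps."""
    return eps_v * p_v + eps_s * (1.0 - p_v) + d_eps

# Illustrative pixel: modest red reflectance, higher near-IR reflectance
n = ndvi(ch1=0.08, ch2=0.40)                 # about 0.67

# Simple linear scaling of NDVI between assumed bare-soil and full-canopy
# values (one of many published approaches).
ndvi_soil, ndvi_veg = 0.2, 0.8
p_v = min(max((n - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0), 1.0)

print(f"NDVI = {n:.2f}, vegetation fraction ~ {p_v:.2f}, "
      f"emissivity ~ {scene_emissivity(p_v):.3f}")
```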
References 1. C. Watts et al., “Sensible Heat Flux Estimates Using AVHRR and Scintillometer Data over Grass and Mesquite in Northwest Mexico,” available at www.tucson.ars.ag.gov/salsa/ archive/publications/ams_preprints/watts.pdf, 2002. 2. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, p. 42, 1969.
FRAME DIFFERENCING GAIN Frame differencing (or temporal processing) can reduce background and static clutter. The gains that can be realized are approximately a net increase in signal-to-clutter ratio of about 50 percent for single-order differencing, 80 percent for second-order differencing, and 150 percent for third-order (three frames) differencing. At the same time, the noise per pixel is reduced by approximately the square root of the number of frames processed.
Discussion Background subtraction, or differencing one frame from another, can result in almost zero contribution from the static component of background or clutter. However, uncompensated jitter, target trajectory across the background, and changing aspect angles can limit this performance. The above factors give an empirical/calculated expectation based on the order of differencing. While the obvious application is in complex, clutter-filled military scenes, frame differencing has also been used successfully to restore old motion picture images. By comparing a series of frames, the static artifacts can be detected and used to correct future frames. Academic researchers have also found uses for frame differencing. For example, it is used in automated pedestrian counting schemes and is part and parcel to many motion detection algorithms for security systems.
On the other hand, there are substantial limits to the capability of frame differencing in some applications. For example, a sniper who is crawling through a cluttered scene and who is observed by a 30-frame-per-second video surveillance system will create almost no perceptible change in each of the frames. Therefore, a more complex frame differencing concept must be employed. This algorithm would search for scene changes, but not by comparing successive frames. It would have to be trained (or train itself) to watch for frame differences induced by very slow-moving objects.
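A minimal sketch (Python/NumPy) of first-order frame differencing; the synthetic clutter and noise levels below are assumed purely for illustration. It shows the point made above: the static clutter cancels in the difference, leaving only the temporal noise and the change produced by a moving target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Static clutter plus per-frame temporal noise (all values assumed)
clutter = rng.normal(0.0, 10.0, size=(128, 128))
noise_sigma = 1.0

def frame(target_amplitude=0.0, target_pixel=(64, 64)):
    f = clutter + rng.normal(0.0, noise_sigma, clutter.shape)
    f[target_pixel] += target_amplitude          # point target, if present
    return f

f1 = frame(target_amplitude=5.0)   # target on this pixel in frame 1
f2 = frame(target_amplitude=0.0)   # target has moved off the pixel in frame 2

diff = f1 - f2                     # first-order difference: static clutter cancels

print("std of a raw frame:       ", round(float(f1.std()), 2))
print("std of the difference:    ", round(float(diff.std()), 2))
print("target residual at pixel: ", round(float(diff[64, 64]), 2))
```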
Reference 1. I. Spiro and M. Schlessinger, Infrared Technology Fundamentals, Marcel Dekker, New York, pp. 282–283, 1989.
GENERAL INFRARED CLUTTER BEHAVIOR
1. The dominant factors in the generation of clutter are variations in heat capacity, thermal conductivity, and solar radiation absorption of the terrain, along with variations in solar reflectance.
2. In the presence of solar heating, variations in scene emissivity sometimes are not very important, because the emissivity of natural ground materials tends to be greater than 0.8.
3. As solar insolation increases, the standard deviation of the irradiated power increases, and correlation length decreases, leading to clutter with higher spatial frequencies.
4. Terrain features that are unobservable in steady-state conditions may become apparent during warming or cooling transients.
5. The measured clutter statistics of an IR scene depend on the spatial resolution of the sensor acquiring the data.
6. Although scene clutter statistics from a high-resolution sensor may not be Gaussian, averaging by a low-resolution sensor will tend to produce Gaussian statistics.
Discussion Ben-Yosef et al.1–5 have made several key observations about infrared scene clutter and clutter power spectral densities in numerous papers summarized in the above rules. They developed a heat balance equation and experimentally validated it. These rules are based on empirical observations and radiation and atmospheric theory. Most of these observations are based on desert terrain. These rules are useful in infrared bands and so do not necessarily apply to UV, visible, and radar bands. The background must be dominated by natural-terrain clutter, and the rules are not necessarily applicable to maritime or urban environments or to natural backgrounds other than desert. In addition, these rules are useful when trying to estimate the effects of clutter on the system design. An additional consideration for items 2 and 3 above is that cloud cover tends to reduce thermal signatures (at least in the 8- to 12-µm spectral band) and changes the amount of solar contribution to the MWIR. For cases in which objects are at the same temperature, and the thermal contrast signature depends on differences in emissivity, the objects that emit less reflect more. Finally, the U.S. Army completed a series of tests regarding infrared clutter and found that high-resolution imagers are less affected by clutter than are low-resolution imagers.
References 1. N. Ben-Yosef, K. Wilner, and M. Abitbol, “Radiance Statistics vs. Ground Resolution in Infrared Images of Natural Terrain,” Applied Optics, Vol. 26, pp. 2648–2649, 1986. 2. N. Ben-Yosef et al., “Temporal Prediction of Infrared Images of Ground Terrain,” Applied Optics, Vol. 26, pp. 2128–2130, 1987. 3. N. Ben-Yosef, K. Wilner, and M. Abitbol, “Prediction of Temporal Changes of Natural Terrain in Infrared Images,” Proc. SPIE, Vol. 807, Passive Infrared Systems and Technology, pp. 58–50, 1987. 4. N. Ben-Yosef, K. Wilner, and M. Abitbol, “Natural Terrain in The Infrared: Measurements and Modeling,” Proc. SPIE, Vol. 819, Infrared Technology XIII, pp. 66–71, 1987. 5. N. Ben-Yosef, K. Wilner, and M. Abitbol, “Measurement and Modeling of Natural Desert Terrain in the Infrared,” Optical Engineering, Vol. 27, pp. 928–932, 1988. 6. J. Lloyd, “Fundamentals of Electro-Optical Imaging Systems Analysis,” in Vol. 4, ElectroOptical Systems Design, Analysis, and Testing, M. Dudzik, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 34–35, 1993. 7. T. Edwards and R. Vollmerhausen, “Use of Synthetic Imagery in Target Detection Model Improvement,” Proc. SPIE 4372, Infrared Imaging Systems, Analysis, Modeling, and Testing XII, 2001. 8. T. Edwards, R. Vollmerhausen, J. Cohen, and T. Harris, “Recent Improvements in Modeling Time Limited Search,” Proc. SPIE 4719, Infrared Imaging Systems, Analysis, Modeling, and Testing XIII, 2002. 9. T. Edwards, R. Vollmerhausen, and R. Driggers, “NVESD Time-Limited Search Model,” Proc. SPIE 5076, Infrared Imaging Systems, Analysis, Modeling, and Testing XIV, 2003. 10. P. A. Jacobs, Thermal Infrared Characterization of Ground Targets and Backgrounds (Tutorial Texts in Optical Engineering, Vol. Tt26), SPIE Press, Bellingham, WA, pp. 17–21, 1996.
ILLUMINANCE CHANGES DURING TWILIGHT 1. At a 45° latitude (e.g., Portland, OR), the illuminance level decreases by approximately one-half every five minutes during the evening twilight period and doubles every five minutes during the morning twilight period.1 2. A typical difference in the illuminance in the visible bandpass between direct sunlight and dark night is 170 dB.2
Discussion The above rules describe the change in the natural illuminance levels in the visible portion of the spectrum during sunrise and sunset. This is based on empirical observations in clear weather. One might have trouble believing no. 1, if one attempts to gauge this with his eyes. Remember, the eye-brain is an amazing instrument that automatically adjusts the gain by orders of magnitude. Thus, the dimming of the background to our eyes seems much slower during sunset, and the brightening much slower during sunrise. The first rule results in an interesting camera system phenomenon. A much more sensitive camera allows similar quality imagery only for a relatively short period near sunset and sunrise, as the illuminance levels change so fast. One of the authors (Miller) can remember testing a camera that was eight times as sensitive as another camera, more than twice as expensive, and much larger. Comparing the images at sunset showed that the more expensive and sensitive camera provided useful images for only an additional 15 minutes or so. This was exactly in line with this rule. The above can be scaled for latitude; with lower latitudes, the change occurs faster.
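A small worked example (Python, pure arithmetic) of rule 1, showing why a camera eight times more sensitive buys only about 15 extra minutes of useful twilight imagery, consistent with the anecdote above.

```python
import math

halving_time_min = 5.0      # illuminance halves about every 5 min at 45 deg latitude
sensitivity_gain = 8.0      # one camera is 8x more sensitive than the other

# The scene must dim by a factor of 8 before the advantage is used up:
# 8 = 2**(t / 5 min)  ->  t = 5 * log2(8)
extra_minutes = halving_time_min * math.log2(sensitivity_gain)
print(f"Extra useful twilight time: about {extra_minutes:.0f} minutes")
```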
References 1. P. Keller, Electronic Display Measurements, John Wiley & Sons, New York, p. 16, 1997. 2. B. Hostick et al., “CMOS imaging for automotive applications,” IEEE Trans. on Electron Devices, 50(1), January 2003, pp. 173–183.
REFLECTIVITY OF A WET SURFACE In the visible wavelength range, the difference between the reflection of a wet surface and that of a dry one can be approximated using the following equation:
ρw = 0.9ρd/(1.77 – 0.85ρd)
where ρw = reflectance when the surface is wet
ρd = reflectance when the surface is dry
Discussion Generally, when a surface (such as the background environment) becomes wet, the water collects and fills in the pits and voids, and the surface tension of water tends to build up the flat surfaces and form a more gradual surface angle between the depressions and elevations. This tends to make surfaces less Lambertian (more specular) and also increases their total reflection (outside of strong water absorption bands). The above approximation is valid for the visible bandpass and assumes that water is the wetting agent. For a thin layer of water in the visible spectral range, the index of refraction of water can be assumed to be 1.33, the hemispheric reflectance is about 0.08, and the transmittance of the liquid layer is 1.0. Note that water is extremely black in the infrared and non-Lambertian (specular). A more complete equation that applies when the multiple internal reflection is dominant is as follows:
ρwet = [(1 – ρ′)τ(1 – r)ρd/(n² – ρdτ²(n² – 1 + ρ))][1 – ((n – 1)/(n + 1))²]
where ρwet = wet stack bulk reflectance
ρ′ = hemispheric reflectance of the water for a Lambertian (diffuse) source
τ = transmittance through the liquid
ρ = hemispheric reflectance of the liquid
ρd = reflectance of dry surface
r = reflectance of the liquid surface (approximately [(n – 1)/(n + 1)]² at normal incidence)
n = index of refraction of dielectric
One of the authors (Miller) can also testify that wetting a Lambertian surface tends to increase the specular scatter substantially and reduce the infrared emissivity (except for very black, dry surfaces).
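A quick sketch (Python) of the simple visible-band approximation above, evaluated for a few assumed dry reflectances.

```python
def wet_reflectance(rho_dry):
    """Visible-band estimate of wet-surface reflectance from the dry value."""
    return 0.9 * rho_dry / (1.77 - 0.85 * rho_dry)

for rho_d in (0.1, 0.3, 0.5, 0.7):
    print(f"dry reflectance {rho_d:.1f} -> wet reflectance {wet_reflectance(rho_d):.2f}")
```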
References 1. W. Wolfe and G. Zissis, The Infrared Handbook, ERIM, Ann Arbor, MI, pp. 3-11 to 3-12, 1978.
2. D. Kryskowski and G. Suits, “Natural Sources,” in Vol. 1, Sources of Radiation, G. Zissis, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham WA, p. 144, 1993.
SKY IRRADIANCE 1. A clear sky’s long-wavelength irradiance on a horizontal ground surface can be estimated from1
LLWIR = (a + b√e)σTa⁴
where LLWIR = irradiance in a long-wavelength IR bandpass (e.g., 8 to 12 µm)
a = constant ranging from 0.51 to 0.75, with a suggested mean of 0.58
b = constant ranging from 0.017 to 0.065, with a suggested mean of 0.061
e = water vapor pressure in hectopascals (hPa); 1000 hPa = 1 atmosphere
σ = the Stefan–Boltzmann constant
Ta = mean air temperature along the line of sight
2. For a heavily overcast sky, the distribution of irradiance with zenith angle is a cardioid (a figure of revolution), as follows:2
L(θ) = 3Ed(0)(1 + 2 cos θ)/(7π)
where L(θ) = sky’s broadband irradiance
Ed(0) = downwelling irradiance on a horizontal surface in watts per square meter (Irradiance is received energy, expressed in watts per square meter. It can be measured by pointing the instrument at the zenith, which is indicated by the “0.”)
θ = zenith angle of the portion of sky being viewed
Discussion The sky illumination of the sea or land surface of Earth can be a significant factor in the upwelling (reflected) radiation that is observed. The first equation allows one to crudely approximate the LWIR irradiance of a clear sky. The second equation makes it possible to estimate the contribution from the surface under heavy cloud conditions. Sky coverage by clouds is rarely uniform. Therefore, empirically based approximations of the type shown above, based on uniform cloud cover and density, must be used with care. For an infrared imager, an overcast sky has the irradiance of a blackbody close to that of the cloud temperature (because water vapor has a very high emissivity). Cloud temperature is closely correlated to cloud height, so a nice day with high cirrus will tend to have lower sky irradiance than a stormy day with low clouds. The pascal (Pa) is the SI unit of pressure and is equal to one newton per square meter (N/m2). One millibar (mbar) is equal to 100 Pa, a hectopascal (hPa) is equal to 100 Pa, a conventional millimeter of mercury (mmHg) is equal to 133.322 Pa, and a torr is equal to 101,325/760 Pa (Ref. 3).
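A sketch (Python) of the two estimates in this rule, using the forms reconstructed above (a Brunt-type clear-sky expression with the square root of vapor pressure, and the cardioid overcast distribution); the air temperature, vapor pressure, and downwelling irradiance are assumed for illustration.

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def clear_sky_lwir(t_air_K, e_hpa, a=0.58, b=0.061):
    """Clear-sky longwave irradiance on a horizontal surface, W/m^2."""
    return (a + b * math.sqrt(e_hpa)) * SIGMA * t_air_K**4

def overcast_radiance(theta_rad, e_down):
    """Cardioid distribution: L(theta) = 3*Ed(0)*(1 + 2*cos(theta))/(7*pi)."""
    return 3.0 * e_down * (1.0 + 2.0 * math.cos(theta_rad)) / (7.0 * math.pi)

# Assumed inputs: 288 K air temperature, 10 hPa water vapor pressure,
# 300 W/m^2 downwelling irradiance under heavy overcast
print(f"clear-sky LWIR irradiance ~ {clear_sky_lwir(288.0, 10.0):.0f} W/m^2")
print(f"overcast radiance toward zenith ~ {overcast_radiance(0.0, 300.0):.0f} W/m^2/sr")
```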
References 1. P. Jacobs, Thermal Infrared Characterization of Ground Targets and Backgrounds, SPIE Press, Bellingham, WA, pp. 38–41, 1996.
2. J. Apel, Principles of Ocean Physics, Academic Press, Orlando, FL, p. 525, 1987. 3. http://www.npl.co.uk/pressure/punits.html, 2003.
SPENCER’S SIGNAL-TO-CLUTTER RATIO AS A FUNCTION OF RESOLUTION Spencer derived the signal-to-clutter ratio for unresolved targets as inversely related to the resolution, range-to-clutter, and characteristics of the clutter such that
SCR ∝ 1/(R · IFOV)^(0.5n + 1)
where SCR = signal-to-clutter ratio
R = range to the clutter
IFOV = linear one-dimensional field of view expressed in radians
n = a constant describing the slope of the clutter’s power spectral density (PSD) (For real-world cases, this is usually approximately 3, but for odd backgrounds, it may range from 0.5 to 6.)
Discussion The above assumes that the clutter and target are at the same range. It does not consider atmospherics; atmospherics may affect the target and clutter differently (e.g., if they are at different ranges or if the target is out of the atmosphere, but the clutter is not). This rule assumes that the target is a point source, and it does not hold if the slope of the PSD is changing (e.g., at the break frequency) over the size of the spatial filter (the spatial integration limits). Also assumed are square pixels on the FOV (rectangular or other odd-shaped pixels or optical distortion will affect this rule, analysis, and conclusion). The key steps of the mathematical derivation are as follows. Clutter is usually expressed as
S(f) ~ K/(fx² + fy²)^(n/2)
where S(f) = PSD of clutter noise
K = constant used to describe the power level and/or adjust the units
fx = spatial frequency in cycles/length (usually cycles per kilometer) in the x direction
fy = spatial frequency in cycles/length in the y direction
n = constant that determines the slope of S(f)
Now, by Fourier transform techniques, the two-dimensional variance in the clutter is
σ² = ∫∫ S(fx, fy) dfx dfy (with both integrals taken from fl to fh)
where σ² = the background variance (recall that σ is the standard deviation).
fl = the lower spatial limit of the bandpass that the sensor/image processor is using, usually assumed to be the low spatial frequency. This is usually equal to 1/(N·FP), where N is the number of pixels that the spatial filter is using and FP is the two-dimensional pixel footprint (sr) on the clutter.
fh = the higher spatial limit of the bandpass that the sensor/image processor is using, usually assumed to be the spatial frequency equal to one pixel footprint on the background.
This integral is hard to do, so, assuming a relatively narrow power bandwidth and using its average to evaluate the integral, and substituting an average bandpass (accurate for narrow bandpasses), the narrow band results in approximately the following variance being sensed:
σ² = S(fx, fy)Δfx Δfy
With some more work and substitution,
Pdc = K^(1/2)Ao(IFOV)^(0.5n + 1)
where Pdc = power on the detector from clutter
Ao = area of the optics
So, the SCR is the power on the detector from the target divided by Pdc, or
SCR = K^(1/2)(Ao/r²)/[Ao r^(0.5n – 1)(IFOV)^(0.5n + 1)]
where r = range to the clutter. If the background and target ranges are identical, the above rule results. If a specific noise spectrum is modified by a signal processor and sensor parameters, the local contrast can be determined by finding the difference in the radiated power between the center pixel and a window (usually a 3-by-3 to a 7-by-7) around it. When one works through the mathematics, one finds that the clutter power is a relationship of the IFOV and range. The value of “n” tends to be near 3, so the effect of clutter diminishes as roughly the 5/2 power of the pixel field of view. If the PSD is flat, then it represents white noise and has n = 0. For this case, the SCR will vary inversely with respect to range and resolution as
SCR ≈ K/[r(IFOV)]
For the more commonplace case in which n = 3, the SCR will vary as
SCR ≈ K/[r(IFOV)]^(5/2)
The surprising feature is that the range and one-dimensional resolution are at the same power, regardless of the shape of the curve. In other words, the improvement in detection range obtained by changing the IFOV is independent of the shape of the clutter PSD (when the target and clutter are at the same range).
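A short sketch (Python) of the scaling in Spencer’s rule, showing how halving the IFOV changes the signal-to-clutter ratio for several PSD slopes n; the proportionality constant is arbitrary, so only ratios are meaningful.

```python
def relative_scr(r_km, ifov_rad, n=3.0):
    """Relative SCR ~ 1 / (R * IFOV)**(0.5*n + 1); constant of proportionality dropped."""
    return 1.0 / (r_km * ifov_rad) ** (0.5 * n + 1.0)

r = 10.0                                   # km, range to clutter (and target)
for n in (0.0, 3.0, 6.0):
    gain = relative_scr(r, 50e-6, n) / relative_scr(r, 100e-6, n)
    print(f"n = {n:.0f}: halving the IFOV improves SCR by about {gain:.1f}x")
```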
Reference 1. Private communications with Dr. George Spencer, 1995.
Chapter 5
Cryogenics
Cryogenic engineering is important throughout the realm of electro-optics, as detectors, filters, and sometimes optics require cooling for high sensitivity—especially for infrared cameras and instruments. Therefore, it can be said that cryocooling is a key enabling technology for modern sensors. Many systems require some form of refrigeration to very low temperatures, at least for some components. Most commonly, the coldest part of the system, other than the dewar, is the detector array. Usually, this cooling is accomplished by employing one of several methods, including ■ A liquid reservoir of cryogen (typically, for end-use temperatures of less than 100 K). ■ A mechanical refrigerator (for end-use temperatures from 4 to 220 K). ■ A Joule–Thomson (JT) blow-down expander (typically, for end-use temperatures from 20 to 110 K). Several types of JT systems are in use—those with mechanical pumps to create the desired pressures, those that use hydrated bed sorption technology to create the desired pressures, and those that use stored gas. ■ A thermoelectric cooler (for end-use temperatures above about 180 K). The cooling is usually delivered to the focal plane, filter, cold shield, surrounding surfaces [e.g., the focal plane array (FPA) mux and carrier], and sometimes the optics. Typically, an infrared detector focal plane array will require cooling, as will its immediate surroundings. All of these parts are contained in a super-Thermos® bottle called a dewar, the only exception being some space sensors. For high sensitivity at long wavelengths, cooling is also required for portions of the telescope structure and optics. Cryogenic cooling has traditionally been the bane of the electro-optical system engineer. Providing cooling by means of a perishable liquid always taxes the user’s supply lines. In many applications, such as space, cryogen depletion limits the life of the sensor (e.g., the Infrared Astronomical Satellite and the European Infrared Space Observatory). Employing a Joule–Thomson expander limits the operation time, usually to a few minutes. Traditionally, mechanical coolers were bulky, inefficient power consumers of low reliability. Thermoelectric coolers had poor efficiency, limited cooling, and poor performance in high shock/vibration environments. For rapid, highly reliable cooling for a limited period, Joule–Thomson expander systems provide higher efficiency and reliability but the lowest operation time (perfect for missiles, but horrible for cameras). This may change as a new generation of hydrated-bed systems is developed. These systems reuse the working gas and store it in materials that
yield the gas upon heating. It is hoped that they will allow high-performance cooling in space systems without the use of complex and potentially unreliable compressors. Expanding a nonideal gas through an orifice for cooling was first investigated by J. Joule and William Thomson (later to be named Lord Kelvin) in the 1850s. The effect was later refined and developed (near the turn of the century) by Linde and Hampton. Linde later went on to form a successful cryogen company. Another major development in the late 1970s and early 1980s used photolithography to define the gas flow passages and expansion in a thin disk. Today, modern Joule–Thomson expansion systems are compact, lightweight, and reliable. The former Soviet Union concentrated on solid-state thermoelectric coolers in the 1950s and 1960s. Low-cost, high-reliability, limited-capacity coolers (with no moving parts) based on the Peltier effect were produced by the former Soviet Union and some American counterparts. Recent material advancements may allow these coolers to challenge the mechanical and Joule–Thomson coolers for temperatures down to 100 K. Mechanical cryogenic science is a rather new field, with very little research prior to World War II for cryogenic temperatures. Nevertheless, most cryocoolers in use today employ a cycle discovered by Rev. Robert Stirling in 1816. Since the 1940s, advancements in cryocooling have largely consisted of the incremental development of better dewars and more reliable coolers. The British concentrated their designs around providing a highpressure source for (liquid) air expanders, while the Americans concentrated on mechanical cryocoolers. American military systems frequently employ Stirling-type mechanical coolers and (far less often) Vuilleumier or Gifford types. Traditionally, employing a mechanical refrigerator resulted in limited system reliability and additional cost, weight, and power consumption. Mechanical cooler life of 5 hr was great in the early 1970s, 500 to 1000 hr was par for the 1980s, and now 6000 hr is common. Recent advancements in cooler technology have altered this archaic view, and the hardware advancements have migrated from the lab to commercially available products. Many of these modern coolers have mean time to failures (MTTFs) approaching other components, use little power, and are substantially smaller than the beer cans found at a successful critical design review party (at least the ones held in Australia). Spurred on by military infrared applications, there are now numerous companies in several nations whose entire business is centered around the production of mechanical cryocoolers for application in imaging systems, superconductor components, and spectroscopy. Significant advancement in mechanical coolers occurred in the 1980s and 1990s with the development of the Oxford cryocooler (employing ultrapure working gases, clearance seals, and linear voice coil actuators). Another improvement was the pulse tube cooler, which replaces the traditional piston in the expander with a slug of gas. The slug of gas acts like a mechanical piston, with slightly less efficiency and wear but higher reliability. Several companies are pursuing a pulse tube cryocooler to achieve a 20,000-hr MTTF for tactical applications. The future is bright now that failure mechanisms are well understood for cryocoolers. 
When the reliability of small production coolers approaches that of electronic components, the system engineer will find little benefit in employing electro-optical techniques that do not employ cryocoolers. This is because system reliability will be dominated by electronic failures rather than cooler failures. At the same time, recent detector advancements promise visible and infrared focal planes that do not require cryocooling. The four commonly used types of coolers for space flight systems are cryogen containment dewars, Joule–Thomson blow-down (or expansion) coolers, thermoelectric coolers (TECs), and Stirling cycle coolers. Systems on the ground can use a number of less expensive but heavier systems like the Gifford-McMahon refrigerator. The type of cooler used depends on the system requirements, constraints, and specifications. Typically, containment dewars are used for laboratory or space systems that require cooling below 80 K for
short periods and have access to the refill port. For temperatures below about 50 K, double-walled dewars (usually LN2 on the outside) and a colder cryogen inside, or an exotic cooler such as a large Gifford-McMahon or multistage Oxford, are used. Both Stirlings and Joule–Thomson coolers are used in the temperature regime from 50 to 180 K; the JT provides the quicker cooldown but has a limited gas supply. Usually, TECs are used only for cold-side temperatures above 180 K and, because of their solid state nature and high reliability, almost exclusively for temperatures above 240 K or so. The new generation of coolers employ combinations of these systems to achieve an appropriate combination of efficiency and low-temperature performance. For a given cooling application, Joule–Thomson coolers provide the most rapid cooldown with the lowest usage of electric power; thermoelectric coolers provide the slowest cooldown with the highest reliability and consume the most power. Stirling coolers have characteristics in between and are probably what you will use. Expendable JT cooling provides high reliability and rapid cooldown with no moving parts or power consumption. However, they are limited by the amount of expendable gas that they carry. Usually, JT cooling can be used only once. Therefore, JT systems are often found on missile seekers or other short-lived applications. Some JT systems recapture the expanded gas, repressurize it, and are able to be used more than once. These systems typically are large, inefficient, and of low long-term reliability, so their applicability is usually limited to lab environments. A recent advance for closed-cycle space systems is the sorption cooler, which uses hydrogen as the working gas. By using a sorption bed and releasing gas as needed by heating the bed, this technology offers the potential of providing JT performance (working to 18 to 20 K) without requiring pumps for compressing the gas on the high-pressure side of the expansion valve. The British have developed relatively small liquid air compressors that take in ambient air, dry it, purify it, and expel the nitrogen through a JT cryostat device. Outside of the UK, there have been few fielded applications of this approach, although this technique combines some of the best attributes of a JT and a mechanical system. Thermoelectric coolers (TECs) provide high reliability with slow, low-efficiency cooling. Traditionally, TECs are limited to cooling a small detector by less than 120°C from ambient. Their electrical efficiencies are usually less than 1 percent. They are normally found in sensors employing InGaAs, SWIR HgCdTe, lead salts, and uncooled arrays. Recent advancements in material properties promise to broaden the application of this type of cooler.1 Mechanical Stirling refrigerators seem to be the dominant technology for now and the near future. Stirlings provide thousands of hours of cooling with cooldown times of a few minutes and electrical efficiencies of 2 to 6 percent. These coolers weigh half a kilogram to a few kilograms (depending on cooling power) and are found in infrared cameras and FLIRS. For the reader who is interested in more details, only a few books exist that provide useful insight into cryocooling. Walker’s Miniature Refrigerators for Cryogenic Sensors and Cold Electronics is certainly one worth purchasing. Additional specific chapters on modern cryocoolers can be found in the Infrared and Electro-Optical Systems Handbook. 
Occasionally, papers can be found in SPIE conferences and thermodynamic publications. Generally, every other year, there is an international conference of cryocooling technology that presents a wealth of state-of-the-art information. Manufacturers continue to be an excellent source of up-to-date information on the relevant technologies.
Reference 1. R. Venkatasubramanian, E. Siivola, T. Colpitts, and B. O’Quinn, “Thin-Film Thermoelectric Devices with High Room-Temperature Figures of Merit,” Nature, 413(11), pp. 597–602, October 2001.
BOTTLE FAILURE For a cylinder whose length is greater than 1.11D√(D/t), a tank can be treated with the same equations that define the collapse pressure of a long pipe. The collapse pressure is independent of the cylinder length and is expressed as
[2E/(1 – ν²)](t/D)³     (1)
where D = diameter
ν = Poisson’s ratio
E = Young’s modulus
t = thickness of the walls
Discussion Cylindrical and spherical tanks play an important role in cryogenic systems. In the following equations, we provide a number of simple but useful equations that describe the mechanical performance of these containers. It should be noted here that the equation above is conservative. Another approach is provided by Ref. 1, in which the authors suggest that differential pressure should not exceed
[8E/√(3(1 – ν²))](t/D)²     (2)
They state that experiments have been conducted to confirm this formulation. The following is a simple expression2 for the buckling pressure Pc of cylindrical tanks. It is used when the tank is shorter than 1.11D√(D/t). This computation explicitly includes the length of the cylinder, L.
Pc ≈ 2.6E(t/D)^2.5/[L/D – 0.45√(t/D)]     (3)
In Fig. 5.1, we compare the various equations above. Equation (1), as stated in the rule, is the most conservative. Equation (2) is the least conservative of the three approaches. The middle curve derives from Eq. (3) and assumes that the length-to-diameter ratio is 2, as is common in small cans. Finally, the radial displacement of a pressurized spherical container of radius r can be expressed as
[Pr²/(2tE)](1 – ν)
where r = radius of the sphere
P = pressure
E = Young’s modulus of elasticity
ν = Poisson’s ratio
t = thickness
FIGURE 5.1 Comparison of methods for computing the buckling pressure of containers.
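A sketch (Python) comparing Eqs. (1) through (3), as reconstructed above, for an illustrative aluminum cylinder; the material properties and dimensions are assumed.

```python
def p_long_cylinder(E, nu, t, D):
    """Eq. (1): collapse pressure of a long cylinder (length > 1.11*D*sqrt(D/t))."""
    return 2.0 * E / (1.0 - nu**2) * (t / D) ** 3

def p_less_conservative(E, nu, t, D):
    """Eq. (2): the less conservative limit on differential pressure."""
    return 8.0 * E / (3.0 * (1.0 - nu**2)) ** 0.5 * (t / D) ** 2

def p_short_cylinder(E, t, D, L):
    """Eq. (3): buckling pressure of a short cylinder of length L."""
    return 2.6 * E * (t / D) ** 2.5 / (L / D - 0.45 * (t / D) ** 0.5)

# Assumed aluminum can: E = 69 GPa, nu = 0.33, t = 1 mm, D = 100 mm,
# L = 200 mm (length-to-diameter ratio of 2, as in Fig. 5.1)
E, nu, t, D, L = 69e9, 0.33, 1e-3, 0.1, 0.2
for name, p in (("Eq. (1)", p_long_cylinder(E, nu, t, D)),
                ("Eq. (2)", p_less_conservative(E, nu, t, D)),
                ("Eq. (3)", p_short_cylinder(E, t, D, L))):
    print(f"{name}: {p / 1e3:.0f} kPa")
```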
References 1. “Initial Performance of the High-Pressure DT-Filling Portion of the Cryogenic Target Handling System,” Laboratory for Laser Energetics Review, Vol. 81, October-December 1999, available at www.lle.rochester.edu/pub/review/lle-review-81.html, 2003. 2. P. Blotter and J. Batty, “Thermal and Mechanical Design of Cryogenic Cooling Systems,” in Vol. 3, “Electro-Optical Components,” of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 411, 1993.
COLD SHIELD COATINGS A cold shield should be black on the inside and shiny on the outside. That is, it should be of high emissivity on the side viewed by the detector in the detector’s bandpass and of low emissivity (at all wavelengths) on the side not viewed by the detector.
Discussion This rule is based on radiometry, empirical observations, and approximations. The detector should not directly see a highly reflective surface, as it will reflect any background radiation onto the detector. However, to keep the thermal load low, the outside of the cold shield should be of low emissivity (high reflectivity) so that it does not absorb thermal radiation.
If optics are contained within the cold portion of the system, and the self-radiation from the cold optics is a concern, then in some peculiar designs it may make sense to have the cold shield of low emissivity (shiny) on the inside. Initially, one might think that the cold shield should be of low emissivity (shiny). A review of Planck’s equation certainly supports this, as a shiny cold shield interior will clearly radiate less to the FPA. However, this is one of the cases in which pragmatic considerations usually overpower theoretical ones. If the cold shield is made shiny, it will reflect radiation from the warm surroundings onto the focal plane (as shown in Fig. 5.2) and reduce overall effective efficiency. If it is black and Lambertian, only a small portion (typically 5 to 10 percent) of the warm photons will make their way to the FPA. If one can limit the unwanted photons leaking through the cold shield to 10 percent of the other noise photons, there will be only about a 5 percent increase in total noise for BLIP conditions (less for non-BLIP conditions).
FIGURE 5.2 Performance with and without cold shields.
Moreover, the apparent problem of the cold shield emission is usually reduced to a negligible level, because the cold shield usually is far colder than it needs to be for radiometric purposes. It is usually thermally tied to the FPA and therefore cooled to a temperature close to that of the FPA. Radiometry shows that rarely will emission from the cold shield be a contributing factor if proper designs are employed. It is usually beneficial to make the exterior of the cold shield (the part the detector does not “see”) as shiny as possible to reduce thermal load. This side is usually coated with gold or aluminum.
COOLER CAPACITY EQUATION The available refrigeration capacity of a Stirling cryocooler can be estimated with an equation similar to Qr = CPVFTr
where Qr = available refrigeration in watts P = mean pressure of the working gas (or some other characteristic pressure) V = swept volume in compression space F = operating frequency of the cryocooler (piston cycle frequency) Tr = refrigeration temperature (temperature of the expansion space) in kelvins C = a constant (<1) to scale the equation and provide for the correct units
Discussion The above relationship was developed on the basis of observations of the performance of large (1 kW or more) Stirling coolers. It is not validated for small cryocoolers but can be used for them with caution. It is valuable for quick estimations of the available refrigeration from a cryocooler. It is also useful for estimations of what attribute of a cooler to change, and by what amount, to achieve a different level of cooling. Walker suggests that C is on the order of 10–4 for large Stirling coolers, when pressure is expressed in bars, frequency in hertz, and TR in kelvins. The available refrigeration (Qr) is the amount of cooling, in watts, available at the cold finger with all of the losses and inefficiencies included. This is the measure that sensor designers care about and is identical to the cooling capacity. This equation suggests that the cooling capacity of a cryocooler varies in direct proportion to the speed at which it is run, the displacement of the compressor, the pressure of the working gas, and the temperature. However, the cooling capacity of small cryocoolers tends to be more strongly related to the temperature at which the cooling is to be done. The cooling capacity of smaller cryocoolers also seems to be a function of the compressor case temperature of the cryocooler. This is because of the inefficiency of rejecting the heat and the inefficiency of the magnets in the motors.
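A minimal sketch (Python) of the cooler capacity scaling; the value of C, the unit chosen for the swept volume (the source does not pin one down), and the operating parameters are all assumptions, so treat the output as order-of-magnitude only.

```python
def stirling_capacity(P_bar, V_cc, F_hz, Tr_K, C=1e-4):
    """Qr = C * P * V * F * Tr.  Here C ~ 1e-4 (Walker's value for large
    coolers, pressure in bars) and V is taken in cm^3 as an assumption."""
    return C * P_bar * V_cc * F_hz * Tr_K

# Assumed small cooler: 20 bar mean pressure, 1 cm^3 swept volume,
# 50 Hz drive, 77 K cold end
print(f"Available refrigeration ~ {stirling_capacity(20.0, 1.0, 50.0, 77.0):.1f} W")
```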
Reference 1. G. Walker, Miniature Refrigerators for Cryogenic Sensors and Cold Electronics, Clarendon Press, Oxford, UK, pp. 120–121 and 158–160, 1989.
COOLING WITH SOLID CRYOGEN It takes about 50 kg of solid or liquid cryogen to provide 0.1-W cooling for a year. Table 5.1 illustrates the exact values for typical cryogenic gases, each cooling at its fluid-gas transition temperature. Hudson1 provides information indicating the additional mass associated with the storage container for solid versions of each material.

TABLE 5.1 Typical Cryogenic Gases

Refrigerant        Boiling point (K)   Mass of liquid cryogen (kg) needed   Mass (kg) of solid
                                       for 0.1-W cooling for 1 yr           cryogen and tank
Methane                  112                        5.4                            12
Argon                     84                       19.4                            30
Carbon monoxide           68                       14.8                            27
Nitrogen                  77                       15.9                            28
Neon                    24.5                       36.3                            54
Hydrogen                13.7                        7.0                            30
Helium                   4.2                        154                             —
Discussion To provide 0.1-W cooling for a year requires about 3.153 million joules (number of seconds per year times 0.1). We then determine the latent heat of vaporization of each of the gases at its boiling point. This energy must be invested to cause the phase change from liquid to vapor. Dividing the 3 million joules by the heat of vaporization and converting from grams to kilograms gives the cryogen masses. To see the system impact of using these gases, we must add the mass of the container. This will be different for each cryogen, because the density of each solid is different, and the volume required to house the material will be different. At the same time, the temperature of each gas is different, requiring more or less complex storage techniques. An example is helium, which demands much more complex and massive storage to achieve the same level of cooling as the other gases. Of course, it also provides its cooling at 4.2 K, a temperature not easily reached by the other alternatives.
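The calculation described above can be sketched as follows (Python); the heats of vaporization are approximate handbook values and reproduce the liquid-cryogen column of Table 5.1 to within a few percent.

```python
SECONDS_PER_YEAR = 3.15e7

# Approximate latent heats of vaporization at the boiling point, J/kg
HEAT_OF_VAPORIZATION = {
    "nitrogen": 199e3,
    "argon":    161e3,
    "neon":      86e3,
    "helium":   20.7e3,
}

def cryogen_mass_kg(cooling_watts, gas):
    """Mass of liquid cryogen boiled away while absorbing cooling_watts for one year."""
    energy_J = cooling_watts * SECONDS_PER_YEAR
    return energy_J / HEAT_OF_VAPORIZATION[gas]

for gas in HEAT_OF_VAPORIZATION:
    print(f"{gas:9s}: {cryogen_mass_kg(0.1, gas):6.1f} kg per 0.1-W year")
```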
References 1. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, p. 385, 1969. 2. http://astrosun.tn.cornell.edu/courses/a525/lectures/A525_27(Cryogenic%20Techniques).pdf, 2002.
FAILURE PROBABILITIES FOR CRYOCOOLERS Failure probabilities for cryocoolers depend on the particular design, environment, and application, but all seem to be dominated by cooler contamination.
Discussion Reliability in cryocoolers is clearly an important cost and system success factor, as discussed in the introduction to this chapter. This is especially true for those used in space systems, as those cannot be readily serviced. For systems in which the failure probabilities are low, the total failure probability is the sum of each of the terms. This is essentially a conservative approach to reliability. A simple experiment with the data below shows that the sum of the failure rates is always larger than the value obtained from the more traditional “root sum of squares.” The summing process is obviously easier to carry out as well. Table 5.2 shows example failure sources for four popular types of coolers.
TABLE 5.2 Failure Probability (Percent) of Mechanical Cooler Designs

Failure mechanism                                  Pulse tube       Pulse tube      Stirling + bal.   Dual Stirling
                                                   w/back-to-back   w/compressor    w/back-to-back    w/two compress.
                                                   compressor       and balancer    compressor        and two expand.
Excessive internal cooler contamination                  2               2               3                 4
Hermetic seal or feedthrough leak                        2               2               2.5               3
Compressor flexure spring breakage from fatigue          0.1             0.1             0.1               0.1
Compressor motor wiring isolation breakdown              1               1               1                 1
Compressor piston alignment failure (binding)            0.2             0.2             0.2               0.2
Compressor piston blowby due to seal wear                1               1               1                 1
Compressor piston position sensor failure                1               0.7             1                 1
Expander structural failure (e.g., at launch)            0.2             0.2             0.2               0.3
Expander blowby due to long-term wear                    0               0               3                 4
Expander motor wiring isolation breakdown                0               0               0.5               0.5
Expander spindle alignment failure (binding)             0               0               0.2               0.2
Expander/balancer position sensor failure                0               0.7             1                 1
Total failure probability (%)                            7.5             7.9             13.7              16.3
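A small sketch (Python) of the point made in the discussion: summing the entries of a Table 5.2 column is always more conservative than combining them as a root sum of squares.

```python
import math

# Failure probabilities (%) from the "pulse tube w/back-to-back compressor" column
p = [2, 2, 0.1, 1, 0.2, 1, 1, 0.2, 0, 0, 0, 0]

simple_sum = sum(p)                          # 7.5 percent, as in the table
rss = math.sqrt(sum(x**2 for x in p))        # about 3.3 percent

print(f"sum = {simple_sum:.1f}%, root sum of squares = {rss:.1f}%")
```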
Reference 1. R.G. Ross, Jr., “Cryocooler Reliability and Redundancy Considerations for Long-Life Space Missions,” Proc. 11th International Cryocooler Conference, Keystone, Colorado, June 20–22, 2000.
JOULE–THOMSON CLOGGING The clogging of a cryostat tends not to happen in conventional Joule–Thomson (JT) cryostats when there is a concentration of less than about 2 ppm (of CO2), 3 ppm (of a hydrocarbon), or particles 6 µm in size.
Discussion Joule–Thomson coolers function by passing a nonideal gas through an orifice. For those who are thermodynamically inclined, this results in a constant enthalpy process that leads to a cooling effect. More specifically, when the gas temperature, just before passing through the expansion orifice, is below the inversion temperature of the gas (where expansion results in a decrease in temperature of the gas), a phase change can occur so that the gas downstream of the orifice includes fluid as well. The orifice and associated capillary tubes can clog, which stops or reduces the flow of the gas. If contaminants exist in the gas bottle of a JT system in excess of the values mentioned in the rule, clogging can be expected. Systems that use oil lubricants must employ an oil separation technology to ensure that hydrocarbons are not a source of clogging. At the same time, trace amounts of moisture can be a problem. A typical JT system can function for only a few minutes with moisture content in the gas of 2 ppm water.2
References 1. G. Bonney and R. Longsworth, “Considerations in Using Joule Thompson Coolers,” Proc. Sixth International Cryocoolers Conference, David Taylor Research Center, Bethesda, MD, pp. 231–244, 1991. 2. W. Ellison et al., “Commandably Actuated Cryostat,” U.S. Patent 6,082,119, July 4, 2000.
JOULE–THOMSON GAS BOTTLE WEIGHTS The weight of a containment bottle for the gases for a Joule–Thomson (JT) cryostat can be estimated from the following relationships:1
1. Ws ≈ 0.000373 V^0.86 P^0.49 FS^0.7     (1)
2. Wc ≈ 2 × 10^–8 V^1.067 P^1.5 FS^1.36     (2)
3. And for high-tech composites,2
   WHT = PV/(1 × 10^6)     (3)
where Ws = weight of a spherical container in pounds
Wc = weight of a cylindrical container in pounds
WHT = approximate weight, in kilograms, of a high-tech composite container
V = volume of the container in cubic inches
P = operating pressure of the gas in pounds per square inch
FS = a safety factor (usually about 2)
Discussion Because this is based on the state of the art, it will change with time and the introduction of new materials. The weight estimate does not include mounting, handling, valves, or line hardware. This assumes normal JT pressures from 3000 to 6000 PSI. Generally, this is accurate to within ±15 percent for volumes from 10,000 to 40,000 cubic inches; it probably underestimates the weight of smaller bottles. Joule–Thomson cryostats require high-pressure gas to be blown through them to produce cooling. The significant contribution to weight and size is not the cryostat but the containment bottle for the high-pressure gas (usually nitrogen, helium, or argon). Prominent technological advancements are occurring in tank production for lightweight propulsion systems. This same tank technology can be applied to the reservoir of high-pressure gas. For pressures required by JT systems, the dry weight of a high-tech composite-wound tank can be estimated by the third equation above. Advancements in high-pressure tank technology could elevate the constant in the denominator by a factor of 2 or 3. This will lead to even lighter tanks in the future. When calculating weight, be sure to add the weight of the gas. As any scuba diver knows, the weight of the gas is not trivial. If one requires a pressure of 10,000 psi and a volume of 50 in³ at room temperature to perform a given cooling function, the above relationships can be used to estimate the mass of the containment bottle. If spherical, it would weigh [from Eq. (1)]
Ws ≈ 0.000373 × 50^0.86 × 10,000^0.49 × 2^0.7
or about 1.6 lb or 725 g. Additionally, a cylindrical bottle would weigh [from Eq. (2)]
Wc ≈ 2 × 10^–8 × 50^1.067 × 10,000^1.5 × 2^1.36
or something on the order of 3.3 lb or 1500 g. Now, if weight is a critical concern, then one might want to employ high-tech composite fiber tanks and estimate the weight [from Eq. (3)] as
WHT ≈ (50 × 10,000)/10^6
or 500 g. Additionally, the weight of the gas must be included. Assume that the gas was nitrogen. First, the number of moles must be estimated. To do this, we must make some conversions. The room temperature is 300 K, the volume is (50 × 2.54 × 2.54 × 2.54/1000) or 0.82 L, and the pressure is (10,000/14.7) or about 680 atm. We can find the volume per mole from the familiar RT/P, or about 0.037 liters per mole, and we have 0.82 L, so there are roughly 22 moles of nitrogen. Twenty-two moles of nitrogen means we have (22 × 28) or roughly 620 g of gas. The authors of this book apologize for the inclusion of English units, but many of the original sources for this material use them, and we chose to copy those equations exactly.
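The worked example above can be scripted as follows (Python, same inputs and English units as in the source).

```python
def sphere_weight_lb(V_in3, P_psi, FS=2.0):
    """Eq. (1): spherical bottle weight in pounds."""
    return 0.000373 * V_in3**0.86 * P_psi**0.49 * FS**0.7

def cylinder_weight_lb(V_in3, P_psi, FS=2.0):
    """Eq. (2): cylindrical bottle weight in pounds."""
    return 2e-8 * V_in3**1.067 * P_psi**1.5 * FS**1.36

def composite_weight_kg(V_in3, P_psi):
    """Eq. (3): high-tech composite tank weight in kilograms."""
    return V_in3 * P_psi / 1e6

V, P = 50.0, 10_000.0           # cubic inches, psi
print(f"sphere    ~ {sphere_weight_lb(V, P):.1f} lb")
print(f"cylinder  ~ {cylinder_weight_lb(V, P):.1f} lb")
print(f"composite ~ {composite_weight_kg(V, P) * 1000:.0f} g")
```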
References 1. M. Donabedian, “Cooling Systems,” Chap. 15 in The Infrared Handbook, W. Wolfe and G. Zissis, Eds., ERIM, Ann Arbor, MI, pp. 15-14 to 15-16, 1978. 2. J. Miller, Principles of Infrared Technology, Van Nostrand Reinhold, New York, pp. 204–206, 1994.
SINE RULE OF IMPROVED PERFORMANCE FROM COLD SHIELDS The reduction in background noise [and therefore the increase in sensitivity for a background limited in performance (BLIP) detector] realized by employing a cold shield (see Figs. 5.3 and 5.4) can be estimated by
A/sin(θ/2)
where A = cold-shield efficiency constant depending on the design and tolerances (Usually, the efficiency of a cold shield is between 90 and 95 percent, so A would be between 0.9 and 0.95.)
θ = full field of view angle (not half angle)
FIGURE 5.3 A cold shield can limit the adverse effect of local “warm” sources.
FIGURE 5.4 Notional drawings of representative cold shield designs. It is assumed that the rays enter from the left and that the gray rectangle is the focal plane. Very complex shapes are possible.
Discussion A focal plane reacts to the background as well as the target signal. A flat detector with no shielding responds to energy from a background of a solid angle of 2π steradians. It “sees” everything. Because of projected angle (cosine effects), the actual approximate value of the flux is calculated by using just π steradians (see “Lambert’s Law” in Chap. 14, “Radiometry”). However, the important point is that the focal plane detects only the useful target energy from a cone defined by the optics. It is a wise practice in the IR industry to limit the background that the detector “sees” by including a cryogenic light shield. Some uncooled detector modules still provide a cold shield that is thermally tied to the TEC to reduce the thermal noise from the background and provide a uniform (if not cryogenic) shield. Ideally, this shield should limit the incoming radiation to the required target field of view. If so, the benefit of including the shield is to reduce the background noise by 1/sin(θ/2). The benefit of cold shielding increases with narrow fields of view and longer wavelengths. In fact, the above rule also approximately equals 2 times the f/#, because the f/# defines the cone’s solid angle. The above rule is correct for the expected reduction in the noise associated with the background. The actual background flux is reduced by the square of the above. Keep in mind that cold shields are never perfect. Manufacturing tolerances require that the shield be somewhat oversized, and stray light can leak in from its attachments and seams. Additionally, cold shield design is affected by the available back focal length and thermal load. Thus, a 90 to 95 percent efficiency typically can be achieved, which defines the value of A in the equation.
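A quick sketch (Python) of the sine rule, also showing its approximate equivalence to 2 times the f/# for modest fields of view; the cold-shield efficiency A is assumed to be 0.93.

```python
import math

def background_noise_reduction(theta_deg, A=0.93):
    """Estimated reduction in background noise, A / sin(theta/2),
    where theta is the full field-of-view (cone) angle."""
    return A / math.sin(math.radians(theta_deg) / 2.0)

for theta in (10.0, 20.0, 40.0):
    f_number = 1.0 / (2.0 * math.sin(math.radians(theta) / 2.0))
    print(f"theta = {theta:4.0f} deg: noise reduction ~ "
          f"{background_noise_reduction(theta):4.1f}x  (2 * f/# = {2 * f_number:4.1f})")
```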
STIRLING COOLER EFFICIENCY The current state of the art for power efficiency of Stirling cryocoolers is about 30 to 40 W of input power for 1 W of cooling in the temperature region of liquid nitrogen.
Discussion This is based on the recent state of the art. Future development may improve this situation slightly. The rule is valid only between 70 and 80 K; the ratios are more efficient for cooling temperatures above 80 K and less efficient for cold end temperatures below 70 K. Some key assumptions are as follows: ■ Compressor case temperature is colder than about 40°C. ■ Efficient (>85 percent) power converters and power supplies are employed. Integral Stirlings (one-piece coolers with the expander as part of the compressor) are slightly more efficient, and pulse-tube Stirlings (a variant of a Stirling that employs a “gas slug” in the expander instead of a mechanical piston) are slightly less efficient. Classic Stirling cryocoolers operate using electrically powered motors in the compressor to propagate a pressure change to drive a piston in the expander. The (wall outlet) electrical power needed to drive the Stirling cycle is typically about 30 to 40 times the heat power to be removed. The inverse of this number (a small number) is often called the coefficient of performance (COP). These electrical efficiencies are expected to progress to 20 to 25 W/W (for cold-end temperatures in the 70 to 80 K range). If you need to reduce the cryocooler’s power consumption, it is usually best to spend your efforts in reducing the dewar parasitics so that a smaller cooler can be used.
TEMPERATURE LIMITS ON DETECTOR/DEWAR Generally, dewars containing photovoltaic (PV) mercury cadmium telluride (HgCdTe) focal plane arrays (FPAs) should never experience temperatures in excess of 90°C. Dewars for other FPA materials should never exceed 100°C.
Discussion This rule is derived from the state of the art in FPA/dewar production and generalization of normal bake-out temperatures (generally 90 to 100°C), materials, and procedures. The reader should be aware that higher exterior temperatures are tolerable for very brief periods, as these will not have time to heat up the critical areas. This does not consider specially made FPAs and dewars to accommodate higher temperatures. On the other hand, maximum safe temperatures can be substantially lower if bakeout temperatures were lower. Photoconductive materials and single-element detectors are often slightly more tolerant of high temperatures. Additionally, most solders will start to melt at approximately these temperatures. Focal planes and detectors tend to be fragile devices that rarely can survive high temperatures without degradation. Likewise, the optics and coating inside of a dewar can rarely survive temperatures in excess of the above. Moreover, cleanliness is key to maintaining dewar vacuum integrity and hence lifespan. Dewar manufacturers “bake-out” every component to high temperatures, and the entire dewar assembly to 70 to 110°C. If the dewar is ever heated to a temperature near its bake-out temperature, contaminants may be released that will limit performance and lifetime.
THERMAL CONDUCTIVITY OF MULTILAYER INSULATION The thermal conductivity of silk net/double-aluminized Mylar® multilayer insulation (MLI) can be estimated by
k = [8.962 × 10^–4 N^1.56 (Th + Tc)]/2 + [5.403 × 10^–6 ε(Th^4.67 – Tc^4.67)]/[(Th – Tc)N]
where k = thermal conductivity in µW/m K
N = layer density in layers per centimeter
Th = hot temperature
Tc = cold temperature
ε = broadband room temperature emissivity of the blanket
Discussion Often, multilayer insulating (MLI) blankets are employed in electro-optical sensor design. These blankets usually are implemented by wrapping the component (or sensor) in many layers of a thin MLI blanket like an Egyptian mummy. These blankets were developed as a general-purpose insulator for space use but are also sometimes used in dewars. For Tissuglas/double-aluminized Mylar®, a different approximation is often used, as follows:
k (µW/m K) = [(3.07 × 10^–7)(Th² – Tc²) – (2.129 × 10^–10)(Th³ – Tc³)N^2.91]/(Th – Tc)
These equations derive from empirical observations based on data in the reference. Thermal conductivity has eluded a strict analytical approach as a result of difficulties in predicting the contact pressure, contact area, interstitial gas pressure, and material properties. These rules are approximately valid in the temperature range from 70 to 300 K but can be assumed to be valid at lower temperatures if a derating factor is applied. Some measurements have consistently indicated that the thermal conductivity is higher than that predicted by the above equations, so the above should be used as a crude guide only. Also, some investigations1 suggest that the thermal properties of MLI blankets are optimized at approximately 30 layers per centimeter of thickness. When employing MLI in an EO sensor, two cautions must be noted. First, when handled, chips of metal sometimes flake off of the blanket and will always find their way to critical surfaces (e.g., optics and across electrical leads). Also, MLI wrappings greatly increase the surface area and can trap air pockets within the insulation layers. Therefore, MLI can greatly lengthen vacuum pumping and any bakeout process. Note also that MLI may contain materials to assist in electrostatic discharge control, micrometeoroid protection, and electromagnetic interference shielding.
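A sketch (Python) of the silk net/double-aluminized Mylar correlation as reconstructed above; the blanket emissivity and boundary temperatures are assumed, and, per the cautions in the text, the result should be treated only as a crude guide.

```python
def mli_conductivity_uW_per_mK(N_layers_per_cm, T_hot, T_cold, emissivity=0.03):
    """Effective conductivity (micro-W/m-K) of silk net / double-aluminized Mylar MLI,
    evaluated from the correlation as reconstructed above."""
    solid = 8.962e-4 * N_layers_per_cm**1.56 * (T_hot + T_cold) / 2.0
    radiative = (5.403e-6 * emissivity * (T_hot**4.67 - T_cold**4.67)
                 / ((T_hot - T_cold) * N_layers_per_cm))
    return solid + radiative

# Assumed case: 30 layers/cm between 300 K and 77 K boundaries
print(f"k ~ {mli_conductivity_uW_per_mK(30, 300.0, 77.0):.0f} micro-W per m-K")
```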
Reference 1. P. Blotter and J. Batty, “Thermal and Mechanical Design of Cryogenic Cooling Systems,” in Electro-Optical Components, W. Rogatto, Ed., Vol. 3 of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 370–373, 1993.
CRYOCOOLER SIZING RULE The required cooling capacity for an application can be sized by the greater of either (1) the steady-state heat balance or (2) the time allowed to cool a thermal mass to a cryogenic temperature, but not both. Ten percent should be added to the sizing criteria selected.
Discussion Always oversize the mechanical cooler capacity. The extra capacity will be needed sometime. When cooling a detector array, more capacity is needed for a reasonable cooldown time. Application requirements are not reasonable if the steady state and cooldown requirements result in the same refrigeration capacity. When the cooldown and steady state capacities are the same, the result is always either an overcapacity at steady state and a detector that is too cold, or the need for a “demand” cooler with all kinds of electronics (or a cooldown time of hours instead of minutes). If this happens, reduce the thermal mass or total heat load. If performance at the end of life is important (or after some large number of cycles), the cooler should be oversized by a large margin (perhaps 50 percent). Of course, one cannot overlook the added expense of meeting particular performance requirements. In fact, building in the added 10 percent mentioned above could result in pushing the design into a higher-cost technology, so be cautious when applying that part of the rule. This is particularly true when designing for space applications, where any additional capability beyond that demanded by the margins selected by the system engineer is likely to add very substantially to cost. Many space cryocoolers show margins in cooling load that exceed 30 percent at operating temperature, and some are designed with as much
as a 50 percent margin. This is done to accommodate the expected aging of the unit and to account for inaccuracies in the performance of the thermal models used in the design. Moreover, one must anticipate that the surface properties (emissivity, especially) of materials that are critical to the performance of the cooler will age, thereby changing the radiation load seen by the cooler and its radiators.
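For readers who like to capture such sizing rules in a quick calculation, the short Python sketch below applies the rule as stated: take the larger of the steady-state load and the average load implied by the allowed cooldown time, then add 10 percent. The load, thermal mass, and cooldown time in the example are illustrative assumptions only, not values from the text.

```python
# Cryocooler sizing sketch: capacity is the larger of the steady-state heat
# load and the average lift implied by the allowed cooldown time, plus 10%.
def required_capacity_w(steady_state_load_w, thermal_mass_j, cooldown_time_s, margin=0.10):
    """Cooling capacity (W) suggested by the sizing rule."""
    cooldown_load_w = thermal_mass_j / cooldown_time_s  # average lift needed to cool in time
    return (1.0 + margin) * max(steady_state_load_w, cooldown_load_w)

# Example (assumed numbers): 1.5 W steady-state load, 600 J to remove in 10 min
print(required_capacity_w(1.5, 600.0, 600.0))  # -> 1.65 W; steady state governs here
```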
Reference Private communication with Mike Elias, 1995.
RADIANT INPUT FROM DEWARS Actual radiant background flux density from a dewar is often larger than that predicted by the Planck function. The difference is difficult to predict analytically but seems to be between 25 percent and a factor of 2 higher.
Discussion
Reflection of background irradiance, unforeseen cold shield leakage, and imperfect optical filters tend to cause an increase in the radiation impinging on a detector over what is routinely predicted by the Planck function. The effect is more pronounced for low-background LWIR detectors of wide wavelength response. Several possibilities exist for this effect. One is that the emissivity of the materials of the dewar is higher than expected, especially when cooled. This can happen because emissivity is wavelength-dependent and surface properties can be temperature dependent. In bands where the reflectivity is low, the emissivity is high, leading to higher than expected radiative transfer. Because dewar components are at very low temperature, the peak of the blackbody curve can be far into the infrared wavelengths, so that the apparent "shininess" of the surfaces in the visible wavelength regime offers little insight into the reflectivity at wavelengths that really matter. A surface that is Lambertian in the visible wavelengths likewise offers little insight into its reflectivity properties at other wavelengths. The authors of this book suspect that one cause for this is that the reflectivity, emissivity, and scattering properties of black surfaces are almost always measured at room temperature, as the lab environment is the only convenient and affordable venue in which to take such measurements. In spite of these limitations, the room-temperature data are applied to cryogenic surfaces.
Reference 1. J. Vincent, Fundamentals of Infrared Detector Operation and Testing, New York, John Wiley & Sons, pp. 196–199, 1989.
Chapter 6
Detectors
A detector is a characteristic component in an archetypal electro-optical system. For imaging systems, detectors are most useful as a dense two-dimensional array called a focal plane array (FPA). Traditionally, detector arrays have held the dubious position of being the system sensitivity limiter, resolution limiter, and cost driver. However, advances in the technology are making the FPA less of a performance and cost concern. A million-dollar FLIR may contain an FPA that costs $20,000 or less. This chapter includes rules relating to detector performance, manufacture, and applications. For the most part, the material in this chapter is based on physics and concentrates on exotic detector materials and nonvisible portions of the spectrum. Many additional rules and information concerning visible CCDs, CMOS, and active pixel cameras and detectors can be found in Chap. 18, "Visible and TV Sensors." Generally, detectors are categorized by their chemical constituents and/or wavelength response. Table 6.1 summarizes the typical wavelengths of commonly used detector materials. The conversion from light to electricity can occur through several mechanisms. The most popular for present devices are as follows:
1. A thermal effect, whereby the light raises the temperature of some material (such as bulk germanium or coatings of vanadium oxide or amorphous silicon as used in silicon microbolometer arrays). This is also called a bolometric effect.
2. The photoconductive effect, whereby the conductance of a material changes with the level of irradiance. This requires a bias voltage and circuit to read the change in current through the detector.
3. The photovoltaic effect, whereby the light generates a voltage (or current) in a material. This requires a readout circuit that can sense this change in voltage (or current).
As the above mechanisms suggest, detector physics and semiconductor physics are closely related. Advancements in one directly contribute to advancements in the other. The biological eye is a wonderful detector and is covered in Chap. 8, "The Human Eye." To date, it holds the distinction of being the most sensitive and highest-resolution device that can be built in nine months by unskilled workers (even fewer months for the higher-performing hawk or eagle eye). Perhaps the first nonbiological mechanical photonics detector was Herschel's blackened thermometer, which he used to discover infrared (IR) radiation in 1800.
TABLE 6.1 Typical Wavelengths of Commonly Used Detector Materials

Material | Typical useful spectral region (µm) | Notes
CdS | 0.3 to 0.55 | Rarely used today.
GaAs | 0.6 to 1.8 | Linear arrays are commercially available. Doping with phosphorus extends the cutoff wavelength.
GaAs quantum well infrared photodetector (QWIP) | 2 to 20 | This material is tunable at time of manufacture, limited in spectral bandwidth, and very suitable for dense arrays and low-cost production.
Ge:xx | 2 to ≈100 | Doped Ge has long been an IR detector, with one to a few elements per array. Ge:Hg can respond as low as 2 µm, while Ge:Ga can respond at 100 µm at 3 K.
HgCdTe | 2 to 22 | This material is tunable at time of manufacture. Arrays to 12 µm are commonly available.
InGaAs | 0.8 to 1.7 | Cutoff can be extended to ≈2.6 µm by adding phosphorus. Usually, only thermoelectric cooling is required.
InSb | 1 to 5.5 | Some versions have response into the visible wavelength spectrum.
LiTaO3 | 5 to 50 | Pyroelectric materials with two-dimensional arrays are available.
PbS | 1 to 3 | Usually photoconductive, so two-dimensional arrays are rare, although many linear arrays are made.
PbSe | 2 to 5 | Two-dimensional arrays are rare, but many linear arrays are in production.
Pt:Si | 1 to ≈5 | Pt:Si is highly uniform and producible in large formats but offers low quantum efficiency at MWIR wavelengths. Although a popular FPA material in the 1980s and 1990s, it is becoming archaic.
Si | 0.3 to ≈1.0 | Red-enhanced and blue-enhanced versions are available; it lends itself to IC manufacture.
Si:X | 0.3 to 26 | Doping silicon allows detection into the LWIR; requires cooling well below liquid nitrogen temperatures.
Microbolometers | 8 to 14 | Via micromachining, silicon can be made into a tiny bolometer; this lends itself to dense arrays. These currently are available with coatings of vanadium oxide or amorphous silicon.
Willoughby Smith first reported the photoconductive effect in 1873 while studying selenium crystals. This—and subsequent investigations of what would become electronic detectors for UV, visible, and IR—was an important line of work, because film, relying on silver halide (which dominated image detection for 100 years), does not respond well in the ultraviolet or beyond about 1.2 µm. Additionally, films do not lend themselves to the electrical digitization needed by modern processors, displays, and electronic communication.
Albert Einstein won a Nobel prize in physics for explaining the photoelectric effect. He determined that a photon has a characteristic energy hν (Planck’s constant times the frequency), and energy greater than this is needed to free an electron in a photoelectric effect. Today, photoconductors, photovoltaics, and quantum well detectors use a variety of quantum phenomena to convert photons into detectable currents and voltages. During World War I, T. Case experimented with, and produced, silicon and thallous sulfide detector cells. A true father of the detector industry, Case created devices that were difficult to manufacture and had reliability problems. Then, World War II and the tension that preceded it spurred great investment by Germany and Great Britain, and to a lesser extent in the U.S.A., in electro-optic sensor development. Lead sulfide continued to be refined in the 1940s by the Americans (e.g., Cashman, Case, and others) and in Germany by Edgar Kutzscher. Following the war, lead salts’ development slowly progressed, with single-element systems and small arrays appearing in missiles in the early 1960s. Today, arrays of up to 256 PbS elements are routinely manufactured and used in numerous military and commercial applications. However, it was the former Soviet Union that led some of the greatest advancements in lead-salt detector technology and effectively exploited them in many high-tech imaging and IRST systems. Unfortunately, this production technology base seems to have been lost. Recently, some American firms have applied various hightech techniques to make uncooled MWIR detector arrays that operate beyond 4 µm using lead salts. In the 1960s and 1970s, many Westerners (especially Paul Kruse) were convinced that a specific mixture of mercury telluride and cadmium telluride would make an excellent detector. Billions of research dollars flowed from Capitol Hill to military laboratories and then to American industry. The U.S. Army developed and promoted the common module detectors based on HgCdTe linear arrays. These helped reduce cost via economies of scale and proliferated IR sensors throughout the U.S. and NATO militaries. Development of other materials waned, and today HgCdTe is a popular infrared detector material being made into arrays as large as 1024 × 1024, and multispectral arrays (with several American and European companies producing commercial single-band focal planes in the 640 × 480 format and smaller). IR detector research in the 1980s led to the development of the 480 time-delay and integration (TDI) Standard Advanced Dewar Arrays (SADAs) of the U.S. Army, as well as producible (mid-format, e.g. 256 × 256) indium antimonide (InSb) area arrays. In addition, micromachining was employed to develop uncooled bolometer arrays (championed again by Paul Kruse) and chemistry advancements to develop uncooled ferroelectric arrays. Both of these technologies are now encroaching on the sensitivity of traditional cryocooled materials. In the early twenty-first century, amorphous silicon microbolometers have reached commercial maturity. Additionally, molecular beam epitaxy allows us to custom form lattice structures yielding advances in HgCdTe and custom quantum well detectors. The 1990s witnessed continued advances in the science of producing smaller pixels and larger-format microbolometer, InSb, quantum well, and HgCdTe arrays, with the ubiquitous format being 320 × 240. 
Uncooled microbolometers (first with vanadium oxide as the active material and later with amorphous silicon) also became commercially available in the 320 × 240 and 160 × 120 format for a few thousand dollars per array, opening up a myriad of commercial applications. In addition, multispectral arrays were developed and incorporated into research systems. These are typically sandwich arrays that respond to different bands, or stacked quantum wells. In the 2000s, 4 megapixel InSb arrays have been produced in low numbers, the Joint Strike Fighter is planning to use several 1000 × 1000 InSb arrays on each aircraft, and 640 × 480 arrays have become commonplace. For the reader who is interested in more details or a more thorough understanding, numerous books and journals are available. For the latest breaking technical developments, it
should be noted that SPIE and Veridian's Military Sensor Symposiums (formerly IRIA/IRIS) have regular detailed sessions on detectors. The journals that seem to cater to the scientific and detailed engineering needs of electro-optical engineers in this area are Infrared Physics and Technology, Optical Engineering, and the IEEE's Transactions on Electron Devices. Trade journals frequently provide valuable, up-to-date information on the technology that is critical to anyone trying to select or use an FPA. Valuable trade journals in this discipline include Photonics Spectra and Laser Focus, which have annual detector sections and frequent articles. Finally, an interested reader should review the corporate publications and web sites of those active in detector manufacture. These web sites frequently have articles that provide an excellent marketing and technology review of a given company's products.
APD PERFORMANCE
The multiplication factor, M, for an avalanche photodiode (APD) can be estimated as

M = C[1 – (Vb/Vbd)^m]^(–1)

where
M = multiplication (gain) factor
C = material constant (sometimes device dependent)
Vb = bias voltage
Vbd = breakdown voltage
m = an empirical constant; observed values of m between 1.4 and 4
Discussion
APDs can generate many electron-hole pairs from a single interaction with a photon, giving an effective quantum efficiency greater than 1. When a photon interacts, it generates a single electron-hole pair. This single pair travels through the lattice under the influence of the bias voltage and can produce other electron-hole pairs (an amplification effect). The electron and the hole actually travel in opposite directions, away from each other. The multiplication (or gain) factor M is the average number of electron-hole pairs produced from an initiating electron-hole pair. Gain can be very high (~100) for silicon APDs, in which the breakdown voltage is about 400 V. Other materials provide less gain, but M of 10 to 20 can generally be expected.
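A minimal sketch of this estimate is shown below; the material constant C, the exponent m, and the voltages are illustrative assumptions (real values are device specific and must come from the manufacturer or from measurement).

```python
# Sketch of the APD gain estimate, M = C * (1 - (Vb/Vbd)**m)**(-1).
def apd_gain(bias_v, breakdown_v, c=1.0, m=2.0):
    """Multiplication factor M; c and m are device-specific constants."""
    return c / (1.0 - (bias_v / breakdown_v) ** m)

# Gain rises sharply as the bias approaches breakdown (values assumed):
print(apd_gain(380.0, 400.0))  # ~10
print(apd_gain(395.0, 400.0))  # ~40
```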
References 1. T. Limperis and J. Mudgar, “Detectors,” Chap. 11 of The Infrared Handbook, W. Wolfe, and G. Zissis, Eds., ERIM, Ann Arbor, MI, pp. 11-36 to 11-37, 1978. 2. M. Ieong, “The P-N Junction,” EE3106 Lecture 7, p. 14, 2002, available at http:// www.ieong.net/ee3106/Lecture7.PDF, 2002. 3. E. Garmire, “Sources, Modulators, and Detectors for Fiber-Optic Communication
Systems,” Chap. 4 in Fiber Optics Handbook, M. Bass, Ed. in Chief, and E. Van Stryland, Assoc. Ed., McGraw-Hill, New York, p. 4.76, 2002.
RESPONSIVITY OF AVALANCHE PHOTODIODES
1. As a function of wavelength, the responsivity (amps per watt) of super low ionization ratio, k (SLIK) silicon avalanche photodiodes (APDs) can be approximated as follows:

ℜSLIK = a4λ⁴ + a3λ³ + a2λ² + a1λ + a0

where
ℜ = responsivity (amps/watt)
λ = wavelength in nanometers
a4 = +8.336175040854987 × 10⁻⁹
a3 = –2.723043085066618 × 10⁻⁵
a2 = +3.230873938707 × 10⁻²
a1 = –1.649814699356764 × 10¹
a0 = +3.099174914803980 × 10³
2. The responsivity of reach-through structure (RTS) silicon avalanche photodiodes can be approximated as follows:

ℜRTS = a5λ⁵ + a4λ⁴ + a3λ³ + a2λ² + a1λ + a0

where
a5 = +3.099051403791184 × 10⁻¹¹
a4 = –1.358198358589383 × 10⁻⁷
a3 = +2.328370966911533 × 10⁻⁴
a2 = –1.9584705951699 × 10⁻¹
a1 = +8.117775177287255 × 10¹
a0 = –1.326368764972709 × 10⁴
Discussion
It is quite likely that reasonable results are obtained using just the first few (e.g., four or five) significant digits of the coefficients, but be cautious if using the numbers in the reference, as they are not adequate to produce accurate results. The authors of the reference compare the performance of two available types of APDs (the mature technology of "reach-through structure" and the relatively newer SLIK APD structure) as shown in Fig. 6.1. These devices are in widespread use, particularly as high-speed, high-sensitivity detectors for both visible and near-IR applications. The rule works for wavelengths from 600 to 1100 nm. We also find the temperature sensitivity of these devices in a polynomial form, as tabulated in Table 6.2. One of the authors (Friedman) has had the opportunity to meet Andrew MacGregor, who worked in the group in which the technology was invented (GE, in Vaudreuil, Quebec). MacGregor coined the name "SLIK."
FIGURE 6.1
Responsivity as a function of wavelength for silicon APDs.
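The polynomial fits above are easy to evaluate numerically. The sketch below does so with the published coefficients (wavelength in nanometers, valid only for roughly 600 to 1100 nm); it is a convenience script, not part of the reference.

```python
# Evaluate the SLIK and RTS responsivity polynomials (lambda in nm, ~600-1100 nm).
SLIK_A = [+3.099174914803980e3, -1.649814699356764e1, +3.230873938707e-2,
          -2.723043085066618e-5, +8.336175040854987e-9]                          # a0..a4
RTS_A = [-1.326368764972709e4, +8.117775177287255e1, -1.9584705951699e-1,
         +2.328370966911533e-4, -1.358198358589383e-7, +3.099051403791184e-11]   # a0..a5

def responsivity_a_per_w(coeffs, wavelength_nm):
    """Responsivity (A/W) from a0 + a1*lam + a2*lam**2 + ..."""
    return sum(a * wavelength_nm ** i for i, a in enumerate(coeffs))

print(responsivity_a_per_w(SLIK_A, 800.0))  # SLIK at 800 nm
print(responsivity_a_per_w(RTS_A, 800.0))   # RTS at 800 nm
```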
TABLE 6.2 Temperature Sensitivity of Silicon APDs (Polynomials in T)

SLIK (bias voltage: 415 V; condition: –3 < T < 25°C)
720 nm: 130.02 – 9.63T + 0.45T² – 8.3 × 10⁻³T³
820 nm: 136.19 – 9.85T + 0.46T² – 8.6 × 10⁻³T³
940 nm: 73.14 – 4.99T + 0.24T² – 4.4 × 10⁻³T³

RTS (bias voltage: 317 V; condition: 5.7)
720 nm: 928.5 – 189.9T + 17.1T² – 0.6985T³ + 0.011T⁴
820 nm: 1043.2 – 192.3T + 15.8T² – 0.6005T³ + 0.0086T⁴
940 nm: 1093.9 – 185.9T + 14.1T² – 0.4956T³ + 0.0066T⁴

Source: Ref. 1.
Reference 1. T. Refaat, G. Halama, and R. DeYoung, "Comparison between Super Low Ionization Ratio and Reach through Avalanche Photodiode Structures," Optical Engineering, 39(10), pp. 2642–2650, October 2000.
DEFINING BACKGROUND-LIMITED PERFORMANCE FOR DETECTORS
1. The approximate background above which a detector can be considered to be background limited in performance (BLIP) is

EB = 2.5 × 10¹⁸ λη/(D*)²

where
EB = background flux in watts per square centimeter (W/cm²)
λ = wavelength in micrometers (µm)
D* = specific detectivity in centimeters root-hertz per watt (cm·Hz^1/2/W) (Jones)
η = quantum efficiency

2. This can be rewritten for a photon flux above which the detector is considered to be BLIP,

φB = 1.3 × 10³⁷ (λ/D*)² η

where
φB = background flux in photons per square centimeter per second (photons/cm²/sec)
Discussion
These results are based on basic radiometry, assuming a detector with a 2π steradian field of view (no cold shielding). Cold shielding (and cold filtering) will reduce background flux, thus improving performance. The equations differ only in the units of the background radiation. Incidentally, the constant of 2.5 × 10¹⁸ is the reciprocal of two times the Planck constant multiplied by the speed of light, adjusted for the wavelength being in micrometers.
The user should verify that D* and quantum efficiency are for the same wavelength of interest. The user should also realize that BLIP may not be achieved if the background is low, even for reasonably high RoA products (see the rule on RoA, p. 115). Once the D* has been evaluated based on an RoA for a given temperature, a background may be estimated for which the background noise exceeds the noise of the detector, and the detector becomes BLIP. Once D* has been estimated, one of these equations for the BLIP-level background may be employed. Another issue to consider is that values for D* are for specified background temperature conditions (usually 270, 300, 310, or 500 K), and D* will change with different measurement temperatures. If the conditions in which the detector is to be used do not match the test conditions, the transition to BLIP operation may differ from that predicted by theory. The test temperature and background used for the measurements should be provided in the manufacturer’s literature.
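As a quick illustration, the sketch below evaluates both forms of the BLIP threshold; the wavelength, quantum efficiency, and D* values in the example are assumed, not taken from any particular detector.

```python
# BLIP thresholds from the two expressions above.
def blip_power_flux_w_cm2(wavelength_um, qe, d_star):
    """E_B (W/cm^2) above which the detector can be considered BLIP."""
    return 2.5e18 * wavelength_um * qe / d_star ** 2

def blip_photon_flux(wavelength_um, qe, d_star):
    """phi_B (photons/cm^2/s) above which the detector can be considered BLIP."""
    return 1.3e37 * (wavelength_um / d_star) ** 2 * qe

# Assumed example: 10-um response, QE = 0.7, D* = 1e11 cm Hz^(1/2)/W
print(blip_power_flux_w_cm2(10.0, 0.7, 1e11))  # ~1.8e-3 W/cm^2
print(blip_photon_flux(10.0, 0.7, 1e11))       # ~9e16 photons/cm^2/s
```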
DIGITIZER SIZING
1. For digitization noise to be lower than other noises, the resolution of the digitizer needs to be

number of bits ≥ log₂[Ce/(er√12)]   (1)

where
er = number of noise electrons (from all the noises such as detector, dark current, read out, background, and so on)
Ce = well capacity in electrons

2. Generally, it is wise to set the number of bits at least two above what is shown in Eq. (1), or

number of bits ≈ log₂[Ce/(er√12)] + 2   (2)
Discussion
Quantization errors can be significant, as discussed in a rule in Chap. 11, "Miscellaneous." The minimum RMS noise contributed by a digitizer is 0.29 ADU (analog-to-digital units, or counts). This is because the RMS noise is the square root of the mean-square error, which in this case is given by integrating the counts (say, X²) from –0.5 to +0.5, or

∫ X² dX (from –0.5 to +0.5)

The simple form of this case comes from the rectangular shape of the error probability density. The result is X³/3, evaluated from –0.5 to +0.5, which is

(1/3)[1/8 – (–1/8)]

or 1/12. Thus, the RMS error is 1/√12 counts.
In the realm of digital video, infrared sensors, and scientific sensors, the quantization error can add significantly to the system noise if insufficient in resolution. The noise is related to the dynamic range of the focal plane array (be it a CCD, CMOS, active pixel sensor, CID, or infrared readout) and the number of bits that one is able to digitize at video rates. Digitizations of 12 and 14 bits are common, with 18 and higher being much more costly but becoming more common in scientific applications. This equation assumes linear digitization. Many algorithms employ nonlinear digitization schemes that may alter the equation at the extremes. This rule assumes that noise is what you care about, which isn’t always the case; often, a higher number of bits are required. Conversely, if the image is only going to be displayed on a six-bit display, fewer bits may suffice. Hobbs1 states that a cost-benefit analysis would show that allowing the digitizer to add as much noise as the photoelectron statistics is not reasonable unless the digitizer costs as much as the optical system. The reader is cautioned that this rule is useful for first approximations, but a critical design should be conducted before committing to a final hardware design. These equations allow one to estimate the minimum number of bits the digitizer should have for the digital video signal, based on readout noise and dynamic range. Generally, one would want the digitization noise to be lower than the readout noise and not be the dominant noise source (because it is not Gaussian). Picking an analog-to-digital converter (ADC) with a bit or two of additional resolution is wise, as suggested by Eq. (2), especially now that ADCs have become so inexpensive (unlike a decade ago).
Example Hobbs1 states that a commodity camcorder has a well capacity of about 5 × 104 electrons and as few as 20 electrons for readout noise. Equation (1) indicates that at least a 10-bit ADC is required, and Eq. (2) suggests using a 12-bit ADC.
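The arithmetic of the example can be scripted directly, as in the sketch below (a straightforward reading of Eqs. (1) and (2), with the bit count rounded up to an integer).

```python
import math

# Digitizer sizing per Eqs. (1) and (2): bits >= log2(Ce / (er * sqrt(12))).
def digitizer_bits(well_capacity_e, noise_e, extra_bits=0):
    """Smallest integer bit count satisfying Eq. (1), plus any margin bits."""
    return math.ceil(math.log2(well_capacity_e / (noise_e * math.sqrt(12)))) + extra_bits

print(digitizer_bits(5e4, 20))                # Eq. (1): 10 bits for the camcorder example
print(digitizer_bits(5e4, 20, extra_bits=2))  # Eq. (2): use a 12-bit ADC
```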
References 1. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, pp. 447–449, 2000. 2. W. Lin, Handbook of Digital System Design for Scientists and Engineers, CRC Press, Boca Raton, FL, pp. 194–195, 1981. 3. J. Miller, Principles of Infrared Technology, Kluwer, New York, pp. 291–292, 1994. 4. http://www.mip.sdu.dk/~fonseca/bachelor_project/html_sections/se-theory-conversion, 2003. 5. http://ccrma-www.stanford.edu/CCRMA/Courses/252/sensors/node23, 2003.
HGCDTE "X" CONCENTRATION
The energy bandgap (and hence the wavelength cutoff) of a mercury cadmium telluride detector can be estimated from the operating temperature and the alloy concentration, "x," as follows:

1. Eg = –0.295 + 1.87x – 0.28x² + [(6 – 14x + 3x²) × 10⁻⁴]T + 0.35x⁴   (1)

Alternatively,

2. Eg = –0.302 + 1.93x + 5.35 × 10⁻⁴ T(1 – 2x) – 0.81x² + 0.832x³   (2)

where
Eg = energy bandgap in electron volts
x = material's "x"; that is, the relative concentration between Hg and Cd in the alloy, where x is the decimal concentration of Cd
T = temperature in kelvins
Discussion
Detectors made from alloys of mercury, cadmium, and tellurium (HgCdTe) have energy bandgaps that depend on the concentration of cadmium to mercury (called "x") and their operating temperature. Note that this rule predicts the energy bandgap, and recall that cutoff wavelength is related to bandgap by the simple relationship λ (in µm) = 1.24/Eg. As "x" is decreased, the cutoff wavelength to which the detector will respond is increased. Additionally, a slight change in "x" produces a larger change in cutoff wavelength for longer-wavelength material; the longer wavelengths are much more sensitive to this change in "x." This is why process control is so critical (and difficult) for long-wavelength devices but relatively easy for short-wave and mid-wave devices. If the "x" in a long-wavelength device is varied by the same amount allowed at the mid wave, the cutoff wavelength will vary much more, and therefore so will the in-band responsivity, yielding great nonuniformity. The above is useful for estimations of the cutoff at a given mixture and temperature, demonstration of the effects of temperature and "x" on an array's properties, demonstration of the required degree of control for "x" based on a given uniformity, and estimating how a change of the temperature of an existing array affects spectral response. This rule is based on empirical observations and semiconductor physics. The amount of mercury with respect to cadmium determines the energy gap and therefore the long-wavelength cutoff. Although providing only an estimation, these equations seem to track most companies' HgCdTe material to within ±1 µm in cutoff—although the error could be larger. The rule is valid for "x" concentrations of ≈0.15 to 0.45, for temperatures from, let's say, 50 to 200 K, and cutoffs from about 2.5 to 15 µm. The above rules can give questionable results for low values of "x" at the lowest temperatures, so beware under those conditions. The two versions of the model agree quite well for higher values of "x" but differ enough at lower values of "x" to create a difference in cutoff prediction of about 1 µm. Also, this is detector cutoff, not peak wavelength (see the "Peak vs. Cutoff" rule, p. 114); the longest wavelength at which the FPA is operated should be 0.1 to 0.2 µm (or more) shorter than this cutoff. There are many different equations to estimate the cutoff as a function of "x." Almost every detector manufacturer or systems house has its own version, generally of the same form as the above with some different coefficients.
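A small sketch of both expressions is given below, converting the bandgap to a cutoff wavelength with λ = 1.24/Eg; the composition and temperature in the example are typical assumed values for an LWIR device.

```python
# Hg(1-x)Cd(x)Te bandgap (eV) from Eqs. (1) and (2), and the resulting cutoff (um).
def eg_eq1(x, t_k):
    return (-0.295 + 1.87 * x - 0.28 * x**2
            + (6 - 14 * x + 3 * x**2) * 1e-4 * t_k + 0.35 * x**4)

def eg_eq2(x, t_k):
    return (-0.302 + 1.93 * x + 5.35e-4 * t_k * (1 - 2 * x)
            - 0.81 * x**2 + 0.832 * x**3)

def cutoff_um(eg_ev):
    return 1.24 / eg_ev

x, temp_k = 0.225, 80.0   # assumed LWIR composition and operating temperature
print(cutoff_um(eg_eq1(x, temp_k)), cutoff_um(eg_eq2(x, temp_k)))  # roughly 9 and 10 um
```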
References 1. J. Chu, Z. Mi, and D. Tang, “Intrinsic Absorption Spectroscopy and Related Physical Quantities of Narrow-Gap Semiconductors HgCdTe,” Infrared Physics, Vol. 32, pp. 195–211, 1991. 2. G. Hansen, J. Schmidt, and T. Cassleman, “Energy Gap vs. Alloy Composition and Temperature in Hg1-xCdxTe,” Journal of Applied Physics, Vol. 53, pp. 7099–7107, 1982. 3. J. Miller, Principles of Infrared Technology, Kluwer, New York, pp. 135–137, 1994.
MARTIN’S DETECTOR DC PEDESTAL Martin points out that the signal that you want to observe is usually small compared to the electronic DC background. Therefore, detection of the signal requires sensing a small change in a large value.
Discussion This rule is based on empirical observations, the fact that noise electrons often dominate the total number of signal electrons, and the state of the art. It applies more to IR than UV or visible detectors (as IR detectors are noisier, and the background noise is also usually higher) and underscores a serious hindrance in signal detection. The removal of the DC “pedestal” determines the length to which electronic engineers go to enhance the signal. Bob Martin1 points out that detection of the desired signal is often analogous to observing grass growing on the top of the Empire State Building (Fig. 6.2). Typically, the signal (grass height) coming out of the detector is a minute change in voltage or current that rides on a large signal level (the Empire State Building). The ratio between the two can easily be a 1:100 and sometimes 1:10,000. This large “pedestal” is (statistically) relatively constant and can be subtracted by a host of methods. However, this large “bias” eats up dynamic range (well capacity). The combination makes readout and analog circuitry hard to design and implement. This large level relative to the signal results from a myriad of sources including voltage sources, preamps, noisy resistors, background shot noise, dark current, 1/f effects, Johnson noise, and background clutter. Scientific instruments frequently alternate between the scene and a known reference source (or “chop”) to reduce these effects. FLIRs often view a known reference and “characterize” the detectors at a rate ranging from every few minutes to several times a second. Signal processors can perform an AC coupling to effectively ignore this large “DC” level.
FIGURE 6.2 Searching for the target signal is often like looking for grass growing on the Empire State Building.
Reference 1. Private communications with Dr. Robert Martin, 1995.
NOISE BANDWIDTH OF DETECTORS
When unknown, the noise bandwidth of a photodetector is commonly assumed to equal 1 divided by 2 times the integration time, or

Nb = 1/(2ti)

where
Nb = noise bandwidth
ti = integration or dwell time
Discussion
This rule is based on simple electrical engineering of photodetectors and simplification of circuit design. It assumes a "boxcar" amplifier (an amplifier with a sharp bandwidth cutoff). This rule is highly dependent on readout design, detector material, readout material, architecture, and optimal noise filtering. It assumes a rectangular pulse and a value for the noise cutoff of 3 dB down from the peak. The rule does not always properly include all readout and preprocessing signal-conditioning effects, and it can vary from 1/ti to 1/(4ti). Nevertheless, the rule is useful for estimating a detector's noise bandwidth for D* or NEP calculations when little else is known. A detector scanning a target will have a sharp increase in its output, which can be assumed to be almost a square wave in time. The response of the electronics (amplifiers and filters) to this transition will have a gentler rise that is related to the inverse of the bandwidth. Similarly, a target entirely located on a pixel of a staring array provides a square pulse increase for the readout amplifier. For a well-designed system, the electronics can be matched to provide a minimum noise bandwidth as approximated above. Hobbs1 cites the case of a boxcar amplifier where the bandwidth of a transimpedance amplifier is approximately

f ≈ √(fRC fT)

where
fRC = resistance-capacitance frequency
fT = unity-gain crossover frequency of the transimpedance amplifier

At the –3 dB point (f–3dB), one loses between a factor of √2 and 2 in bandwidth, depending on the details of the frequency compensation scheme. This yields

f–3dB ≈ √(fRC fT/2)
References 1. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, p. 626, 2000. 2. K. Seyrafi, Electro Optical Systems Analysis, Electro Optical Research Company, Los Angeles, CA, p. 148, 1973. 3. E. Dereniak and D. Crowe, Optical Radiation Detectors, John Wiley & Sons, New York, p. 50, 1984. 4. J. Vincent, Fundamentals of Infrared Detector Operation and Testing, John Wiley & Sons, New York, pp. 227–231, 1990. 5. J. Miller, Principles of Infrared Technology, Kluwer, New York, p. 339, 1994.
NONUNIFORMITY EFFECTS ON SNR
The maximum useful SNR is closely related to the reciprocal of the nonuniformity of the produced image, i.e.,

SNRmax ∝ 1/N

where
SNRmax = maximum attainable signal-to-noise ratio
N = residual (after processing) nonuniformity (the variation in sensitivity of focal plane elements, or tolerance of nonuniformity) in decimal notation (e.g., 3 percent = 0.03)
Discussion Early staring FPAs in the infrared had great variation from one pixel to another. This variation (or nonuniformity) resulted in a noise source being dependent on the background and therefore often included in generic “fixed pattern noise.” In many camera situations, this is the dominant noise source. It has been used to justify the application of lower-sensitivity (but higher-uniformity) arrays (e.g., Pt:Si and QWIPS). As discussed by Rogalski,1 “For a system operating in the LWIR band, the scene contrast is about 2 percent/K of the change in the scene temperature. Thus, to obtain a pixelto-pixel variation in the apparent temperature less than, e.g., 20 mK, the nonuniformity in response must be less than 0.04 percent.” Typically, such uniformities can be achieved only after a multipoint correction. This rule is based on analysis of the uniformity effects of detectors in staring array systems and typical image processing techniques. Processing can reduce the effects of nonuniformity by 10 to 100 times. Generally, staring arrays produce a fixed pattern noise that is a result of pixel-to-pixel variations in sensitivity and noise. The original equation in Mooney’s paper2 has SNRmax = 1/N. This rule represents the maximum SNR, not the SNR that the camera actually will have. It could have other dominant noise sources. Additionally, some algorithms and instruments that do not produce a display for humans are less affected by fixed pattern noise and nonuniformity. The estimated nonuniformity depends on the difference between the scene temperature and the correction points (the flux levels of the electronic normalization). These should be as close as possible (see Fig. 6.3). This rule assumes that the pixel can be corrected. Unfortunately, many materials exhibit pixels that defy correction regardless of the number of points of the correction. Usually, these are considered “dead” or “out-of-specification” pixels. Additionally, there are the “blinkers.” These devilish pixels have the irritating property of blinking on and off in the scene and must be accommodated for in the image processing to generate a useful scene. To make matters worse, sometimes the “blinking” pixels change each time the array is turned on, or from frame to frame. Blinking pixels often exhibit excessive noise and, as Schulz and Caldwell2 point out, the noise is often of the elusive 1/f type. Focal plane array manufacturers have been improving the uncorrected uniformity as well as the corrected uniformity. Often, they measure (and quote) the sigma (standard deviation) of the gain, offset, or noise divided by the mean for a given impinging flux or blackbody temperature. Then, after a correction at some given flux (commonly stated as blackbody temperature through a given telescope and optical bandpass), the standard deviation is greatly reduced. Reduction factors of 10 to 100 are common for 2- or 3-point correction. However the corrections are at specific points of impinging flux; between these points, uniformity decreases, as can be seen by the notional graphic. The graph indicates
FIGURE 6.3
Nonuniformity and correction points.
that a representative FPA’s uniformity can be close to perfect at any given flux (or blackbody temperature) but degrades as the scene changes from the temperature for which the corrections were made. The important point is to correct with a reference flux as close as possible to the expected scene flux, and to correct often. If the scene is spatially or temporally varying, then correct at as many flux levels as possible (four fluxes are better than three, which are better than two). One may have the necessary noise equivalent temperature difference (NETD or NE∆T) or sensitivity to detect a phenomenon with a given signal-to-noise ratio (say 200). However, if fixed pattern noise (usually caused by nonuniformities in the FPA) is not considered, the results may be disappointing. If one has a raw (uncorrected) variation from pixel to pixel of 10 percent, one would find the SNR reduced to 1/0.1 or 10. With the modest electronics and common normalizing procedures, the final corrected nonuniformity usually can be reduced to less than 1 percent. Therefore, one may notice an irritating fixed pattern noise that can limit the SNR to 1/0.01 or 100; nevertheless, this is still less than the ratio of 200 that one might first assume.
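The arithmetic is trivial but worth keeping in front of you during sensitivity budgeting; the sketch below reproduces the numbers quoted in the paragraph above.

```python
# Maximum attainable SNR set by residual nonuniformity, SNR_max ~ 1/N.
def snr_max(residual_nonuniformity):
    return 1.0 / residual_nonuniformity

print(snr_max(0.10))  # 10% raw nonuniformity limits SNR to ~10
print(snr_max(0.01))  # 1% corrected nonuniformity limits SNR to ~100
```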
References 1. A. Rogalski, “Quantum Well Photoconductors in Infrared Detector Technology,” Applied Physics Reviews, 93(8), p. 15, pp. 4355–4391, April 2003. 2. J. Mooney et al., “Responsivity Nonuniformity Limited Performance of Staring Infrared Cameras,” Optical Engineering, Vol. 28, pp. 1151–1161, November 1989. 3. M. Schulz and L. Caldwell, “Nonuniformity Correction and Correctability of Infrared Focal Plane Arrays,” Infrared Physics and Technology, Vol. 36, pp. 763–777, June 1995. 4. Private communications with Phil Ely, 1995.
PEAK VERSUS CUTOFF
A semiconductor detector's sensitivity peaks at a wavelength about 10 percent shorter than its cutoff.
Discussion
This rule is founded on empirical observations, the state of the art, common definitions of cutoff, and simplifications of solid-state physics. It applies to both photovoltaic (PV) and photoconductive (PC) architectures. The above rule is simply not valid for pyroelectrics, ferroelectrics, quantum wells, Schottky barriers, or bolometers, all of which exhibit a much different spectral curve. Usually, a classic semiconductor material has the characteristic that the maximum sensitivity is just below its cutoff wavelength. Therefore, one should specify a cutoff about 5 to 10 percent longer than the longest wavelength of interest. It is common for vendors to specify "cutoff" as the 50 percent point, but they give the D* (or other sensitivity figure of merit) at the peak (Fig. 6.4).
FIGURE 6.4
Wavelength vs. D*. (Courtesy of EDO/Barnes.)
PERFORMANCE DEPENDENCE ON ROA
The RoA of a semiconductor material seems to change with temperature according to the following:1,2

RoA = C × 10^(B/T)

where
RoA = resistance-area product in ohm-cm²
C = a constant
B = another constant
T = temperature in kelvins
Discussion
The RoA of a detector is the product of the detector's zero-bias resistance and its area. It is significant for all detectors, because the higher it is, the less inherent noise will be present. Very good MWIR detectors have an RoA of about 10⁶ ohm-cm², whereas long-wave devices have values that are much lower. This rule is based on empirical observations of infrared semiconductor detectors (e.g., HgCdTe and InSb). This provides a crude estimation only. One must ensure that B and C are correct for the material and temperature range. Do not use this rule to compare different materials (e.g., InSb changes more rapidly than HgCdTe in the 80- to 100-K regime). The above equation was determined by curve fitting published data on HgCdTe and InSb. The basic shape of the equation seems to hold well. However, the challenge of the above is determining the constants. This is subjective and difficult and changes with the state of the art. The authors suggest that the interested reader scale from FPA manufacturer data or published data. Generally, RoA for LWIR HgCdTe doubles for every ≈3-K reduction in temperature, and MWIR HgCdTe's RoA doubles for every 5- to 8-K reduction. For PV LWIR HgCdTe at about 80 K, Martin1 suggests the handy relationship RoA ≈ 10^(12–λcutoff). Many semiconductor detectors also have other noise terms that vary in a similar fashion. Generally, dark current follows the same mathematical form, except one should also consider the area of the detector for scaling dark current as follows:

Id1 = Id2(Ad1/Ad2) × 10^(K/T)

where
Id1 = dark current in amps (or nanoamps) for a detector whose properties are not known
Id2 = dark current for the detector whose properties are known (vendor data can usually provide information to determine the dark current)
Ad1 = area of the detector for which you want to estimate dark current
Ad2 = area of the detector from which you are scaling
K = another constant
T = temperature in kelvins
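The sketch below captures the scaling behavior just described. It uses Martin's RoA ≈ 10^(12 – λcutoff) relationship for PV LWIR HgCdTe near 80 K and rewrites the temperature dependence of dark current in the equivalent "doubles every ≈5 K" form rather than fitting B, C, or K directly; the doubling interval and the example numbers are assumptions to be replaced with vendor data.

```python
# Temperature and area scaling sketches for RoA and dark current.
def roa_lwir_hgcdte_80k(cutoff_um):
    """Martin's rule of thumb: RoA ~ 10**(12 - cutoff) ohm-cm^2 near 80 K."""
    return 10.0 ** (12.0 - cutoff_um)

def scale_dark_current(i_known, area_known, area_new, t_known_k, t_new_k, doubling_k=5.0):
    """Scale a known dark current to a new area and temperature (doubling form)."""
    return i_known * (area_new / area_known) * 2.0 ** ((t_new_k - t_known_k) / doubling_k)

print(roa_lwir_hgcdte_80k(10.0))                          # ~100 ohm-cm^2 for a 10-um cutoff
print(scale_dark_current(1.0e-12, 1.0, 4.0, 80.0, 90.0))  # 4x area and +10 K: 1 pA -> 16 pA
```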
References 1. Private communications with Dr. Robert Martin and Dr. George Spencer, 1995. 2. J. Miller, Principles of Infrared Technology, Kluwer, New York, p. 137–138, 1994. 3. P. Norton, “Infrared Image Sensors,” Optical Engineering, 30(11), pp. 1649–1662, November 1991.
RESPONSIVITY AND QUANTUM EFFICIENCY
An optical detector's responsivity (in amps per watt) is equal to its quantum efficiency multiplied by the wavelength in micrometers and divided by 1.24, or

R = (QE/1.24)λ   (1)
Discussion
Responsivity is the ratio of a detector's output signal (defined in amps, volts, or electrons) to a given input radiant signal, regardless of the noise. Higher responsivities are always better than lower responsivities.
This rule occurs because of the definition of quantum efficiency. It is defined as the number of electrons generated per incident photon on the active area of the detector at a particular wavelength. When making the conversion (which invokes Planck's constant, the number of electrons per coulomb, and the speed of light), the number 1.24 appears if wavelength is expressed in micrometers. The rule is derived as follows:

R = amps/watts = n(ph/sec)η / [n(ph/sec)(hc/λ)]

where the numerator captures the number of electrons created by a photon rate, n, and the denominator is the rate of energy flow from the same photon rate. The term η is the quantum efficiency of conversion of photons to electrons, and hc/λ is the number of joules per photon. At this point, all the units are in meters, seconds, joules, and so on. As a result, we have

R = ηλ/hc = 5.03 × 10²⁴ ηλ e⁻/sec per watt

A flow of 6.25 × 10¹⁸ e⁻/sec is one amp, so we divide the equation above by that number and get

R = ηλ/(1.24 × 10⁻⁶)

Therefore, if we express wavelength in micrometers, we get the equation in the rule. Said another way, at 1.24 µm, the responsivity is numerically equal to the quantum efficiency. Assuming a good antireflection coating and a single-pass detector design, the quantum efficiency (QE) is equal or strongly related to

QE ≤ 1 – e^(–αt)   (2)

where
α = absorption coefficient
t = thickness of the active region (where photogenerated electron-hole pairs can be collected)

This indicates that responsivity is a strong function of the absorption coefficient. The thickness is a trade-off between high absorption and QE (for high QE, a large thickness is desired). The number of thermally generated electron-hole pairs is proportional to the thickness. The thermal noise is proportional to the square root of the number of thermally generated electron-hole pairs. Rogalski1 shows that, when QE is defined by Eq. (2), the highest specific detectivity (D*) is obtained when t is equal to 1.26/α. Rogalski1 also tells us, "The ratio of the absorption coefficient to the thermal generation rate, α/G, is the fundamental figure of merit of any material intended for infrared photodetectors. It determines directly the detectivity limits of the devices. This figure should be used to assess any potential material."
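For convenience, the two relationships can be scripted as below; the absorption coefficient and thickness in the example are assumed values for illustration.

```python
import math

# R = (QE/1.24)*lambda(um), and single-pass QE ~ 1 - exp(-alpha*t).
def responsivity_a_per_w(qe, wavelength_um):
    return qe * wavelength_um / 1.24

def single_pass_qe(alpha_per_cm, thickness_cm):
    """Upper bound on QE for a single pass through the active layer."""
    return 1.0 - math.exp(-alpha_per_cm * thickness_cm)

qe = single_pass_qe(alpha_per_cm=5.0e3, thickness_cm=5.0e-4)  # assumed alpha and t
print(qe, responsivity_a_per_w(qe, 1.55))  # QE ~0.92, R ~1.15 A/W at 1.55 um
```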
Reference 1. A. Rogalski, “Quantum Well Photoconductors in Infrared Detector Technology,” Applied Physics Reviews, 93(8), pp. 4355–4391, April 15, 2003.
SHOT NOISE RULE If the photocurrent from a photodiode is sufficient to drop 50 mV across the load resistor at room temperature, the shot noise equals the Johnson noise.
Discussion
Shot noise is an important noise source for modern infrared photodiodes. At tactical background levels, it is frequently the dominant noise source. The above result can be proved by comparing the Johnson noise (caused by the random motion of carriers within a detector, usually thermal in nature) and the shot noise (the result of the statistics of the photon-to-electron conversion process in the detector). The RMS Johnson noise voltage is expressed as

√(4kTR∆f)

where
k = Boltzmann's constant
T = temperature of the resistor
R = load resistance
∆f = bandwidth of the detector

The noise bandwidth is generally 1/(2ti), where ti is the integration time. The shot noise voltage is the product of the shot noise current and the resistance defined above. That is, the shot noise voltage is R√(2eI∆f). Here, I is the average current in the detector, and e is the charge of an electron. If we now take the ratio of the Johnson noise to the shot noise, we obtain

√[2kT/(RIe)]

Now we can compute this ratio for T ~ 300 K and a voltage drop RI ~ 50 mV. The value of Boltzmann's constant is presented in Appendix A. Care must be taken with the units. The easiest way to work through the problem is to use cgs units, in which volts have the units of cm^(1/2)·g^(1/2)/sec, charge has the units of cm^(3/2)·g^(1/2)/sec, and energy is in units of cm²·g/sec². Using these values, we find the ratio to be nearly unity, which proves the assertion in the rule.
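The ratio is easier to check in SI units than in the cgs units used above; the sketch below confirms that a 50-mV drop at 300 K gives a ratio of essentially unity.

```python
# Ratio of Johnson noise to shot noise, sqrt(2kT/(e*I*R)), in SI units.
K_B = 1.380649e-23   # Boltzmann's constant, J/K
Q_E = 1.602177e-19   # electron charge, C

def johnson_to_shot_ratio(ir_drop_v, temp_k=300.0):
    """ir_drop_v is the DC voltage I*R developed across the load resistor."""
    return (2.0 * K_B * temp_k / (Q_E * ir_drop_v)) ** 0.5

print(johnson_to_shot_ratio(0.050))  # ~1.0, i.e., shot noise equals Johnson noise
```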
References 1. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, p. 117, 2000. 2. W. Sloan, “Detector-Associated Electronics,” W. Wolfe and G. Zissis, Eds., The Infrared Handbook, ERIM, Ann Arbor, MI, pp. 16-4 to 16-6, 1978. 3. A. Rogalski, “Quantum Well Photoconductors in Infrared Detector Technology,” Applied Physics Reviews, 93(8), pp. 4355–4391, April 15, 2003.
SPECIFYING 1/f NOISE
Spencer1 indicates that the frequency where the 1/f noise equals the white noise can be closely approximated by

fo ≤ (fr/2)/ln[T(fr/2)]
where
fo = "break" frequency (or "knee") of the 1/f noise [the place where it crosses the white noise power spectral density (PSD)] in hertz (see Fig. 6.5)
fr = frame rate in hertz
T = observation time in seconds [This is the time in which you can allow the 1/f noise to grow and add to fixed pattern noise. For most systems, it will be the time between updates of the processing normalizing coefficients with a blackbody reference source. For many commercial cameras, this is the operation time (from turn-on to turn-off).]
Discussion
Many detector materials (CMOS Si, PbS, PbSe, HgCdTe, and InSb) exhibit an unpleasant and disturbing noise that increases with time; this traditionally has been called 1/f noise. In many devices, it is closely related to fixed pattern noise. Electrical engineers frequently deal with this in circuits. Its origins are still somewhat mysterious, but, for detectors, it seems to be a consequence of surface effects and is sensitive to the kind of passivation used. It seems to be associated with potential barriers at the contacts, surface, or interior of semiconductors, and dislocations increase 1/f noise. For HgCdTe, the 1/f noise increases as I^0.76, where I is the total diode current.2 This rule is based on noise analysis by setting the 1/f noise equal to the white noise. By definition, these noise types are the standard deviation of the temporal fluctuations in the total signal. Often, the total signal level is relatively constant and can be considered to be a "DC pedestal" (see "Martin's Detector DC Pedestal" rule, p. 110). The average of this white noise can be subtracted. What is left is the variation (caused by targets, clutter, and noise), which is aptly expressed either as a standard deviation or a variance. Spencer1 indicates that the white noise can be expressed as

σw² = ∫ A df (from 0 to fh) = A/(2ti) = Afr/2

FIGURE 6.5 1/f noise and white noise.
where
σw = standard deviation of the white noise
fh = high-frequency cutoff, which is the noise bandwidth, or ≈1/(2ti)
ti = integration time
A = a constant
fr = frame rate

Many forms of semiconductor electronics (especially detectors) tend to experience a slowly varying noise source that constantly increases in amplitude; this is called 1/f noise. This 1/f noise can be approximately (but still very accurately) expressed as

σf² = ∫ (Afo/f) df (from fl to fh) = Afo ln(Tofr/2)
where
σf = standard deviation of the 1/f noise
f = frequency of interest
fl = low-frequency cut-on, set by the observation time or the time between renormalizing the system (whichever is less)

Spencer indicates that the limits on the integral are somewhat controversial. But, generally, for 1/f noise, the limits of integration are set to the high-frequency cutoff and the low-frequency cut-on. Setting these noises as equal and doing some math will result in the above rule. The actual 1/f noise depends on detector material and the architecture of the electronics, and the rule assumes that the white noise bandwidth is equal to 1/(2ti). Also, this rule sets the 1/f noise equal to the white noise. In some cases, it may need to be set even lower. For a system in which 1/f noise tends to dominate, one should set it to less than the white noise (e.g., add a 3 to the denominator of the above equation to make it about 11 percent of the white noise). Often, system designers specify the 1/f noise to have a "knee" at or below the frame rate. However, this may lead to unexplained noise after the hardware is built. If the 1/f knee is at fo as defined above, then the RMS of the 1/f noise just equals the RMS of the white noise. This results in a specification for the break frequency less than the frame rate. If you do not want to affect system noise, then the noise from 1/f effects should be ≈1/2 or less of the noise from other sources, which may drive it even lower. Another figure of merit for 1/f noise has been proposed3 based on the work of R. Tobin. The term αt (alpha Tobin) is equal to the noise current in amps at 1 Hz divided by the DC current from the detector, or

αt = i(f = 1 Hz)/idet

For a detector to have low 1/f noise, the αt should be less than 10⁻⁵ (with <10⁻⁶ preferred). The deleterious system performance effects of 1/f noise can be mitigated by AC coupling, special signal processing, renormalizing the detector by forcing it to view a known blackbody source, or reducing the integration time. For example, assume a 30-Hz video system is updated with a radiometric reference only once per minute. From the above rule, the 1/f noise knee frequency characteristic of the detector should be specified as

fo ≤ (30/2)/ln(60[30/2]) = 15/6.8 = 2.2 Hz
Again, note that 2.2 Hz is much lower than might be expected, as the 30-Hz frame rate is sometimes assumed to be the break frequency. It should be noted that the logarithm in the denominator causes this to change only slightly with different observation times. For instance, if one only wished to update the above example every hour, the break frequency would only decrease to 1.4 Hz.
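The specification is a one-liner to compute, as sketched below; it reproduces the 2.2-Hz example above and the 1.4-Hz hourly-update case.

```python
import math

# 1/f break-frequency specification: fo <= (fr/2) / ln(T * fr/2).
def max_break_frequency_hz(frame_rate_hz, observation_time_s):
    half_rate = frame_rate_hz / 2.0
    return half_rate / math.log(observation_time_s * half_rate)

print(max_break_frequency_hz(30.0, 60.0))    # ~2.2 Hz (update once per minute)
print(max_break_frequency_hz(30.0, 3600.0))  # ~1.4 Hz (update once per hour)
```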
References 1. Private communications with Dr. George Spencer, 1995. 2. A. Rogalski, “Quantum Well Photoconductors in Infrared Detector Technology,” Applied Physics Reviews, 93(8), pp. 4355–4391, April 15, 2003. 3. A. D’Souza et al., “HgCdTe HDVIP Detectors and FPAs for Strategic Applications,” Proc. SPIE, Vol. 5074, Infrared Technology and Applications, XXIX, April 2003.
WELL CAPACITY The well capacity of a readout device can be assumed to have a maximum value of about 25,000 electrons times the area of the pixel in square microns.
Discussion This is based on the state of the art and on available 30- to 50-µm square pixels with currently available deep wells. Many devices (especially visible-wavelength CCDs) hold less with standard silicon readouts (1.6 pF at 5 V). It assumes TTL bias (or less). This rule does not account for special charge-skimming electronics, which can increase the “effective” size of a well by subtracting some of the charge buildup and hence increase the allowable integration time for systems limited by well size. Every pixel has some electronic circuits that take real estate away from the well capacitors. Usually, the capacitors occupy a 5 × 5 to a 10 × 10 µm area. Typically, these lines, feeds, and control circuits do not change with pixel size. Therefore, as pixels get smaller, the above rule overpredicts the well capacity. Caveat emptor. Under normal bias conditions, readout structures can contain about 25,000 to 30,000 electrons per micron squared. Each pixel can accommodate an area equal to less than the pixel for charge storage. In general, a 50-micron readout unit cell should be able to hold around 50 million electrons from the detector pixel. This does not account for three-dimensional readouts and integrated circuits being pursued by DARPA, Irvine Sensors, Ziptronix, Raytheon, DRS, and others. These stackedchip architectures will, eventually, result in far deeper wells and the integration of other signal processing functions into tiny pixels.
IR DETECTOR SENSITIVITY TO TEMPERATURE Lead salts (especially PbSe) increase sensitivity about 3 percent for each 1°C that the detector is cooled below room temperature, MWIR HgCdTe increases sensitivity about 7 percent for each 1°C that it is cooled below ≈ 220 K, and LWIR HgCdTe and LWIR QWIPS sensitivity increases about 15 percent for each 1°C below ≈90 K.
Discussion This rule is based on an approximation of detector physics and empirical observations of the current state of the art. As the detector temperature is increased, the internal noises
rise, and the effects of offset and gain nonuniformity may become greater. For example, dark current will double for every ≈5 K increase in LWIR HgCdTe operating temperature above 80 K. Sensitivity versus temperature is not linear. It is a curve, so don't overuse this rule. This rule depends on the state of the art (which changes) and assumes normal focal planes. Additionally, scaling should be limited to ±20° about the normal operating temperature. Clearly, a temperature is reached for any material at which additional cooling provides no additional system-level sensitivity. When a detector material is cooled, the noise decreases, causing its overall sensitivity to improve. Other benefits from additional cooling may be increased uniformity, longer wavelength response, and the ability to integrate longer. However, eventually, a temperature will be reached at which further cooling provides minimal gains, as the total noise becomes dominated by photon and shot noise and multiplexer readout noise (the diminishing returns concept). Additionally, multiplexer and bias circuitry may start to fail if operated at temperatures colder than those for which they are designed. With some detector materials (most notably, HgCdTe), the system designer can increase the FPA operating temperature up to the point at which the detector noise (Johnson, dark current, 1/f, and so on) becomes the dominant noise source, above photon noise. This will result in higher system reliability (a result of increased cooler life), quicker cooldowns, and less dissipated heat. However, a caveat should be noted: the cutoff wavelength may change (this happens for HgCdTe). This also applies to trades of the cooling impact versus the effect of a more sensitive detector. It is possible that the minimum-cost sensor system is one that reduces the specification on the detector and cools it a few more degrees to maintain sensitivity.
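A rough way to use these percentages in a trade study is sketched below; treating the per-degree figures as a compounding factor over a modest temperature span is an assumption on our part, and the scaling should not be pushed beyond roughly ±20°.

```python
# Approximate sensitivity multiplier from additional cooling (local scaling only).
GAIN_PER_K = {                      # fractional sensitivity gain per 1 K of cooling
    "PbSe near room temperature": 0.03,
    "MWIR HgCdTe below ~220 K": 0.07,
    "LWIR HgCdTe or QWIP below ~90 K": 0.15,
}

def sensitivity_ratio(material, cooling_k):
    """Sensitivity multiplier after cooling by cooling_k kelvins (compounded)."""
    return (1.0 + GAIN_PER_K[material]) ** cooling_k

print(sensitivity_ratio("MWIR HgCdTe below ~220 K", 10))  # ~2x for 10 K of extra cooling
```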
Chapter 7
Displays
Displays are the electro-optical complement to detectors. They produce a photonic image to a human viewer based on an electrical input. A display transduces electronic signals into light. This conversion of electricity to light can occur in a phosphor, LEDs, plasma cells, liquid crystals, or electroluminescent cells as well as other devices. When made into a two-dimensional array, or utilizing a flying spot, a display is formed. Display technology goes hand in hand with sensor system development and, more commercially important, television technology. For a discussion of the history of television and rules relating to cameras, the reader is referred to Chap. 18, “Visible and Television Sensors.” Displays are generally characterized as direct-view as opposed to see-through. This distinction is also made via the terms heads-down displays (HDDs) and heads-up displays (HUDs). The direct-view or heads-down classification includes the classic cathode ray tubes (CRTs) and other monitors. Such a display requires the user’s vision to be concentrated on it, because it is the one and only source of information. Displays that are made from a material that allows us not only to see the available images but also to look through the display and view the natural surrounding are called heads-up. These semi-transparent displays have found great use in pilotage and military targeting. They hold promise for widespread application in automobile driving, police activity, medical activity, personal digital assistants, cell phones, and a myriad of other applications. Both display types suffer from limited brightness for comfortable viewing in bright sunlight. One solution is to forgo the viewing material and write the image directly on the human retina. This display architecture was developed in the 1990s, but at the time of this writing has not experienced widespread use. Karl Braun invented the first CRT in 1897. Television was the largest display market and the only major technology driver for almost a century. Early mechanical televisions were limited and generally employed mechanical scanning devices to display an image about the size of a modern day business card. The work of the National Television System Committee (NTSC) laid the foundation that made monochrome television displays (and broadcasts) practical in the U.S., and its 1941 standards (subsequently adapted by the FCC) are still used today. Under the leadership of Donald Fink, the NTSC fixed the vertical resolution of 525 lines (generally, about 44 are blanked and not displayed), fully updated almost 30 times per second. Bandwidth was scarce in the 1940s, so an ingenious technique of interlacing was developed and em-
Each second, 59.94 fields are displayed, but a field represents only half (every other) of the horizontal lines. The horizontal scanning rate is 15.734 kHz. This results in many problems for scientific, surveillance, and military imaging as well as in freezing frames, slow motion, and other effects. However, interlacing does produce cosmetically pleasing video at the reduced bandwidth producible by 1940s technology. These numbers are not random; they represent an astute balance of the decay times of screen phosphors, the temporal response of the human eye, and the available bandwidth in the 1940s. Various countries soon adopted the NTSC standard as well as other related broadcast and display standards. But the NTSC standard is not global. In fact, at the turn of the most recent century, phase alternating line (PAL) was the most widely used format in the world. While employing many of the NTSC-pioneered techniques (e.g., chromatic subcarrier, frequency interleaving of luminance and chrominance, and the constant luminance principle), PAL differs from NTSC in using the phase of the color components—simple color difference signals are used in place of the NTSC signals. PAL ends up with slightly superior resolution from the wider bandwidths used and slower frame rate (25 frames per second for PAL). The PAL standard picture has 582 lines and uses interlacing. A horizontal sync rate of 15,625 Hz and a field rate of 50 Hz are employed. In the 1970s, a large market opened up for monochrome (usually green or amber) displays for computer terminals, and then for personal computers in the 1980s. In the 1990s, color displays made their way into laptops, airplane cockpits, and cell phones; in the 2000s, they emerged in automobiles. The advent of HDTV also has stimulated significant development in the manufacturability and producibility of plasma and high-density LED displays as well as new standards (e.g., SMPTE 292 and MPEG). Today, the International Telecommunications Union1 (ITU) is a United Nations body that publishes recommendations defining the standards for international telecommunications. Digital video and displays are becoming increasingly common, and new digital video standards (e.g., MPEG2) are taking the place of the previously discussed analog ones. Both the ITU and Society of Motion Picture and Television Engineers (SMPTE) have established digital high-definition television standards that will be the dominant format (e.g., ANSI/SMPTE 292M, 295M, and 260M) based on a 1.485 Gb/sec interface. The reader will benefit from a brief discussion of a term used throughout this chapter and book. Spatial phase, used in the context of DRI models and reproduction of targets by displays, refers to the juxtaposition of pixels relative to a target. That is, when ideally "phased," the pixels are uniformly distributed over the target with approximately equal brightness in each pixel. When out of phase, the pixels may be aligned so that only part of the image has an adequate SNR. In the latter case, the center of the target may not be effectively located. For more information, the usual publications of IEEE and SPIE, as well as SMPTE, document advancements in display technology.
Reference 1. www.itu.int.
ANALOG SQUARE PIXEL ASPECT RATIOS Williams1 gives the following:
1. For 525-line (Rec. 601)2 video, the pixel aspect ratio is 11/10 height/width.
2. For 625-line (Rec. 601)2 video, the pixel aspect ratio is 54/59 height/width.
Discussion Video pixels are rectangular, so they have different sampling frequencies (or resolution) in the two directions (horizontal and vertical). Contrary to popular belief, the pixel aspect ratios are not 4:3, nor are they 720:640 and 768:720. The actual ratios are defined purely in terms of the pixel sampling frequency of each video standard: Rec. 601 digital video is always sampled at 13.5 million pixels per second (for both 525- and 625-line systems). If you have a 525-line analog NTSC (ANSI/SMPTE 170M-1994) video signal that you want to sample (resulting in a square pixel), the industry standard is to sample at 12 + 27/99 million pixels per second. Similarly, if you have a 625-line analog PAL video signal, the industry standard is to sample at 14.75 million pixels per second. Therefore, we can derive the following:
525-line Rec. 601 pixel aspect ratio = 13.5/(12 + 27/99) = 11/10
625-line Rec. 601 pixel aspect ratio = 13.5/14.75 = 54/59
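The derivation above can be checked with a few lines of code. This is only a sketch of the arithmetic; the sampling rates are the ones quoted in the discussion.

```python
from fractions import Fraction

rec601_rate = Fraction(27, 2)                  # 13.5 MHz for both 525- and 625-line Rec. 601
square_525 = Fraction(12) + Fraction(27, 99)   # ~12.2727 MHz square-pixel rate for 525-line analog
square_625 = Fraction(59, 4)                   # 14.75 MHz square-pixel rate for 625-line analog

print(rec601_rate / square_525)   # 11/10 (525-line Rec. 601 pixel aspect ratio)
print(rec601_rate / square_625)   # 54/59 (625-line Rec. 601 pixel aspect ratio)
```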
References
1. Private communications with George Williams, 2003.
2. ITU-R BT.601 (also known as CCIR-601 or Rec. 601).
3. http://com-net-org.cn.gs/xuexi/01/premiereHelp/help.html, 2003.
4. http://www.mir.com/DMG/aspect.html, 2003.
COMFORT IN VIEWING DISPLAYS The worst acceptable update rate for a display is 10 Hz, and not much comfort is gained by updating at a rate above about 60 Hz.
Discussion This rule is valid for any displays viewed by humans and does not apply to machine vision applications. The special irritating effects of a 10-Hz update are somewhat diminished if the display is operating faster (e.g., 30 Hz), even if the scene is changing only at 10 Hz. The flicker is worse when the display is bright and when it is on the periphery of the field of vision, as is often the case in a cockpit. Studies have indicated that flicker at 60 Hz is usually agreeable for most humans, although a portion of the population still reports being annoyed by rates as high as 100 Hz. This rule is useful for system trades and analysis to determine update rates and determining the design limits for a display. This is particularly important for displays that will be watched for long intervals and for applications in which the proper detection of targets or target motion is critical, such as in security systems and in military and air traffic control applications. Most humans are irritated and fatigued easily by a display that is updated near 10 Hz, because of the human eye-brain processing. At that frequency, an annoying "flicker" is sensed. At faster updates (e.g., the 24 Hz of movie film), the eye-brain system seamlessly
integrates one frame into the next and smooths out the motion to generate a truly lifelike moving scene. When frame updates are slow enough, such as one per second, the eye-brain assumes each to be an independent picture (much like viewing paintings in an art gallery), and no "smoothing" of motion occurs. Between these limits, there is confusion and annoyance. Update rates should be selected that either allow the brain to form motion easily (>20 Hz) or prevent the brain from attempting to form motion (<2 Hz). For example, a cinema screen is updated at 24 Hz, NTSC video is half updated (because of interlace) at 60 Hz, and progressive scan television is fully updated at 60 Hz. Incidentally, fire tends to have a strong flicker component near 12 Hz, exactly the frequency that humans find most objectionable.
COMMON SENSE FOR DISPLAYS 1. Backgrounds should not be brighter than foregrounds. 2. Any displayed grid lines should be of half intensity. 3. Do not have extreme color contrasts between foreground and background colors, as this can cause afterimages via rod fatigue. 4. For mission-critical displays, use white for critical/important dynamic information (in case the color gun fails). 5. Separate significant information on the display by size, distance, or intensity, or by highlighting it. 6. Use screen position consistently for symbology; that is, don’t change the position of the symbology (although color can change to ensure sufficient contrast).
Discussion It has been widely observed that common sense isn’t that common. The above astute rules are helpful in the design of an ergonomic screen presentation for humans and should be heeded whenever possible.
References 1. http://www.csun.edu/~renzo/cs485/notes/devices.pdf, 2003. 2. Private communications with Dr. G. Michael Barnes, 2003. 3. W. Banks and J. Weimer, Effective Computer Display Design, Prentice Hall, Upper Saddle River, NJ, 1992. 4. K. Mullet and D. Sano, Designing Visual Interfaces: Communication Oriented Techniques, Prentice Hall, Upper Saddle River, NJ, 1995.
CONTRAST Unfortunately, contrast can be defined in several ways, including at least all of the following:

1. C1 = (Pb – Pd)/(Pb + Pd)   (1)
2. C2 = (Pb – Pd)/Pd   (2)
3. C3 = Pb/Pd   (3)
4. C4 = (Pb – Pd)/Pb   (4)
5. C5 = Pb – Pd   (5)
6. C6 = Pt/(Xσf)   (6)

where
C1 = our first definition of contrast (should be always unitless), and C2 through C6 above are different definitions
Pb = bright pixel (e.g., the target pixel if it has positive contrast)
Pd = dimmest pixel (frequently the background)
Pt = a target pixel
σf = standard deviation of the image frame
X = a multiplication factor, generally from 1 to 3 (with 1 being the target pixel in contrast relationship to 1 sigma, and 3 being in relation to 3 sigma)
Discussion Contrast is an important concept for a myriad of EO disciplines, including display technology, target detection, tracking, eye response, and others. The concept appears in the context of DRI, SNRs, displays, image processing, and many other disciplines. Essentially, it is a measure of the difference between the object of interest and its surrounding pixels (or background). High contrast is good, low is bad, and negative can be either good or bad, depending on the image processing and the amount of contrast. Many systems can accurately identify a negative-contrast target with a large amount of contrast. The important concept is to always get away from zero contrast. In imaging applications, contrast is often more important than SNR. The target may produce a particular display brightness, but, if the surrounding background has the exact same displayed brightness, the target will be imperceptible. This is true even for targets with high SNR. Most equations for computing contrast take the form of one of the above six equations. Often, the absolute value is used for the numerator to ensure a positive contrast, although a negative contrast (a dark spot among a bright background) has strong conceptional value, as in the case of the text that you are reading. The authors recommend using Eq. (1) when contrast is otherwise undefined.
Zero-range contrast is the contrast at very close range with no degradation by the transmitting medium such as the atmosphere (e.g., less than a meter or two) and is a target characteristic. Apparent contrast is the contrast reduced by the medium—generally, the atmosphere. This is why the San Gabriel Mountains are barely visible through the Los Angeles smog, but the Cascade Mountains stand out strongly over the Bend, Oregon, skyline. The FAA definition implies a 2 percent visual contrast for visibility calculations, whereas others define visual contrast as the point at which 50 percent of human subjects can identify the difference in contrast between black and white bars. Thus, if the San Gabriels had a 2 percent contrast relative to the sky at a range of 10 km, the visibility would be 10 km. Physical contrast is the measured contrast of a display, whereas perceptional contrast is its psychophysical appearance. The eye will perceive a stronger contrast than displayed if the bright colors are ones with strong eye response and the background has colors of low eye response. Equation (1) is sometimes called the modulation contrast or Michelson contrast, Eq. (2) is sometimes referred to as luminance contrast, and Eq. (3) is termed the simple contrast ratio. However, be careful, because all of the nomenclature surrounding this concept is unfortunately sloppy, and one should define contrast whenever it is referenced. To fortify the differences between the equations and illustrate the difference between zero-range and apparent contrast, consider the following. An object of interest has a brightness value of 100 and a background value of 50 (units do not matter for this discussion, as long as they are consistent—assume linear digital counts). At some later time, the target gets closer and has a value of 200, while the background remains at a value of 50. Table 7.1 provides a comparison of the results. TABLE 7.1 Comparison of Different Definitions of Contrast
Equation | Contrast, initial condition | Contrast, later condition | Mathematical range of contrast values (assuming positive contrast)
1 (C1) | 0.333 | 0.6 | Always a value between negative 1 and positive 1, with 0 meaning no contrast
2 (C2) | 1 | 3 | Between –∞ and +∞, with 0 indicating no contrast
3 (C3) | 2 | 4 | Between 0 and ∞, with a value of 1 indicating no contrast
4 (C4) | 0.5 | 0.75 | Between 0 and 1, with 0 indicating no contrast
5 (C5) | 50 | 150 | From 0 to ∞
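The table entries can be reproduced directly from the definitions. The sketch below evaluates the first five definitions for the example levels used above (bright pixel of 100 or 200 against a background of 50).

```python
def contrasts(pb, pd):
    """Evaluate contrast definitions C1 through C5 for a bright (pb) and dim (pd) level."""
    return {
        "C1 (modulation/Michelson)": (pb - pd) / (pb + pd),
        "C2 (luminance contrast)": (pb - pd) / pd,
        "C3 (simple contrast ratio)": pb / pd,
        "C4": (pb - pd) / pb,
        "C5 (difference)": pb - pd,
    }

print(contrasts(100, 50))   # initial condition: 0.333, 1, 2, 0.5, 50
print(contrasts(200, 50))   # later condition:   0.6,   3, 4, 0.75, 150
```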
GAMMA For a CRT display, gamma should be set to about 2.3 (and almost always between 1.7 and 2.8).
Discussion Gamma is a number representing how the brightness is increased as a function of increased signal. For many types of displays, including CRTs, the intensity of the light from the screen responds nonlinearly to increases in the video drive voltage. Because CRT displays are nonlinear, historically, cameras were designed with inverse gamma curves so that the camera/display system would have a linear relationship between signal level and display brightness.
The brightness of a display can be mathematically written as

B = KE^γ
where
B = luminance
K = a constant
E = video drive voltage
γ = exponent representing the gamma
Often, one can set the brightness based on the K and γ. Thus, a gamma of 1 makes the brightness linear to the voltage, and a gamma of 2 makes it the square of the voltage. Holst1 points out that NTSC, PAL, and SECAM (a French acronym for sequential color with memory) standardize the gamma to 2.2, 2.8, and 2.8, respectively. Although NTSC systems assume a gamma of 2.2 at the receiver, a value of 2.3 to 2.6 is often more appropriate. Color shifts can occur if gamma is not corrected properly. The mix of red, blue, and green from which colors are derived should follow the same gamma curve, so the percentage of each will depend on the brightness. This is not a good situation for color consistency.
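A minimal sketch of the transfer function B = KE^γ; the constant K, the drive levels, and the choice of γ = 2.2 (the NTSC assumption mentioned above) are illustrative.

```python
def display_luminance(drive, k=1.0, gamma=2.2):
    """CRT-style transfer function: luminance B = K * E**gamma (drive normalized 0-1)."""
    return k * drive ** gamma

print(display_luminance(0.5))            # ~0.22: half drive gives much less than half luminance
print(display_luminance(0.5, gamma=1.0)) # 0.5 on a hypothetical linear display
```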
References
1. G. Holst, CCD Arrays Cameras and Displays, JCD Publishing, Winter Park, FL, pp. 169–171, 1998.
2. P. A. Keller, Electronic Display Measurement, John Wiley & Sons, New York, p. 288, 1997.
3. www.NTSC-TV.com, 2003.
4. www.Cgsd.com/papers/gamma, 2003.
5. http://graphics.stanford.edu/gamma.html, 2003.
GRAY LEVELS FOR HUMAN OBSERVERS From experiments, we know that 5-bit resolution (or 32 perceived gray levels) is adequate for most displays when humans are attempting to detect targets against a background.
Discussion To display sufficient target rendition with 5-bit resolution (32 gray levels), the contrast ratio should be ≥19:1. This is because, if we assume that each gray level must have an intensity of 10 percent over the previous, then ∆I/I = 0.1. The contrast ratio can be calculated from

Gray contrast ratio = (1 + ∆I/I)^(n – 1)

where
∆I = difference in intensity from one gray level to the next
I = intensity
n = number of gray levels
Therefore, the required contrast ratio for 32 gray levels is 19. Conversely, if 64 gray levels are required, the contrast ratio must be about (1.1)^63, or about 400. The rule is useful for estimating the grayscale required and selecting the balance between dynamic range and sensitivity in systems that use electronic focal planes and displays.
It assumes that the 5-bit resolution gives sufficient signal-to-noise and signal-to-background ratios on the screen.
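The contrast-ratio requirement is a one-line calculation; the sketch below reproduces the 32- and 64-level figures quoted in the discussion.

```python
def gray_contrast_ratio(n_levels, step=0.1):
    """Contrast ratio (1 + step)**(n - 1) needed so each gray level is `step` brighter than the last."""
    return (1.0 + step) ** (n_levels - 1)

print(round(gray_contrast_ratio(32)))   # ~19 for 5-bit (32-level) gray scale
print(round(gray_contrast_ratio(64)))   # ~400 for 6-bit (64-level) gray scale
```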
Reference 1. I. Spiro and M. Schlessinger, Infrared Technology Fundamentals, Marcel Dekker, New York, p. 208, 1989.
HORIZONTAL SWEEP 1. Keller1 points out that the horizontal sweep frequency (fhorizontal) can be defined from the vertical sweep frequency (fvertical) as follows:

fhorizontal = fvertical × (lvertical/dfvertical)

where
lvertical = vertical scan lines
dfvertical = a vertical duty factor (generally, 0.9 as a result of vertical retrace)

2. The clock rate needed to recreate the above resolved elements (on an interlaced display) is

fclock = fhorizontal × (Phorizontal/dfhorizontal) × 10^–6

where
Phorizontal = the number of horizontal pixels
dfhorizontal = horizontal duty factor (or blanking), generally 0.8

3. Bandwidth of the display is equal to (fclock) × (k), where k ~ 0.6.
Discussion The above three equations relate the timing to the clocking, scan frequencies, video bandwidth, and digital clock frequencies. The method to convert monitor viewed resolution to frequency for NTSC is as follows:
Horizontal scanning frequency = 15.734 kHz.
Horizontal time (including active video and blanking) = 63.556 µsec, which is just the reciprocal of the scanning frequency.
If horizontal blanking = 10.9 µsec, then active video per line = 52.656 µsec.
Williams2 states the following: The key to visual fidelity on high-resolution CRT monitors is to use the slowest available pixel clock for a given resolution. The video amplifiers in CRT monitors must expand the signal one-hundredfold. The slew rate of these amplifiers is the primary bandwidth-limiting factor in the video chain. If the luminous intensity were a linear function of the video signal, then video frequencies exceeding the speed of the video amplifier would appear as a smooth patch of the average intensity over the patch. However, since a CRT is a square-law device, the bandwidth-limited inputs appear darker than their average intensity.
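The three relations combine into a quick estimator. This is a sketch only: the duty factors (0.9 and 0.8), the bandwidth factor k ≈ 0.6, and the assumption that fclock comes out in MHz follow the rule above, while the 60-Hz, 1080-line, 1920-pixel display used as input is purely illustrative.

```python
def display_timing(vertical_rate_hz, scan_lines, h_pixels,
                   df_vertical=0.9, df_horizontal=0.8, k=0.6):
    """Return (horizontal sweep in Hz, pixel clock in MHz, video bandwidth in MHz)."""
    f_horizontal = vertical_rate_hz * scan_lines / df_vertical
    f_clock_mhz = f_horizontal * h_pixels / df_horizontal * 1e-6
    bandwidth_mhz = k * f_clock_mhz
    return f_horizontal, f_clock_mhz, bandwidth_mhz

# Illustrative progressive display: 60 Hz vertical rate, 1080 lines, 1920 pixels per line
print(display_timing(60.0, 1080, 1920))   # ~72 kHz sweep, ~173 MHz clock, ~104 MHz bandwidth
```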
References 1. P. A. Keller, Electronic Display Measurement, John Wiley & Sons, New York, p. 292, 1997. 2. Private communications with George Williams, 2003.
KELL FACTOR 1. Assume the Kell factor to be 0.7, and apply it to any conventional displayed visible images to convert from pixels to lines. 2. Alternatively, use 0.35 to convert from sample frequency to lines.
Discussion The Kell factor is the ratio of theoretical pixelated resolution to actual analog display resolution, and it has been used to include all effects of reduction in resolution. What now seems like eons ago, Kell, in the 1930s and 1940s, reasoned that perfectly sampled data cannot be manipulated and then perfectly displayed on analog displays because of phasing. If it could be, the Kell factor would be 1. This seems trivial to us living in the digital imaging world, but project yourself to Kell's time. Long before digital data manipulation was common, and while Claude Shannon was first developing his information theory principles,1 Kell found this significant imaging rule. Television analysis has incorporated the Kell factor since the 1930s. Reference 2 states, "Otto Schade Sr. had said that the Kell factor is not a fundamental but is a reflection of the fact that a perfect sampled data system is physically unrealizable, because it requires that the input and output be ideal lowpass filters." The Kell factor can be explained by the sampling of the resolution by vertical raster lines of a display. Because of the vertical raster, the vertical resolution is limited to about 70 percent of the IFOV, or 35 percent of the spatial sample frequency. The Kell factor is used to account for losses in converting pixelated still images to oversampled video images. The Kell factor is used because, although two samples theoretically can perfectly sample a sinusoid, the output is heavily dependent on phase. In the worst case, the sinusoid is completely missed, whereas, with three samples, the sinusoid is well sampled regardless of the sample phase, as shown in Fig. 7.1. Originally, Kell dealt with interlaced displays, and the interlacing was part of the Kell factor. However, it applies irrespective of the manner of scanning and whether the lines follow each other sequentially (progressive scan) or alternately (interlaced scan). It applies to both analog and digital displays.
FIGURE 7.1 A sinusoid sampled with two and three samples per cycle. Although the former meets the Nyquist criterion, results depend strongly on the phase of the sampling with respect to the signal.
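A quick illustration of applying the two numbers in this rule; the 480-pixel sensor and 960-sample figures are illustrative.

```python
def displayed_tv_lines(n_pixels, kell=0.7):
    """Approximate analog TV lines of resolution delivered by n_pixels of sampling."""
    return kell * n_pixels

print(displayed_tv_lines(480))   # ~336 displayed lines from 480 (vertical) pixels
print(0.35 * 960)                # same answer starting from a 960-sample spatial sample frequency
```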
References 1. J. Miller and E. Friedman, 2003, Optical Communications Rules of Thumb, McGraw-Hill, New York, pp. 150–151. 2. L. Biberman and R. Sendall, “Introduction: A Brief History of Imaging Devices for Night Vision,” in Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 1-12 to 1-13, 2000. 3. Private communications with Dr. John Wiltse, 2003. 4. R. Kell et al., “An Experimental Television System,” Proc. IRE, 22(11), 1934. 5. J. Miller and J. Wiltse, “Resolution Requirements to Read Alphanumerics,” Optical Engineering, March 2003. 6. R. Kell et al., “A Determination of Optimum Number of lines in a Television System,” RCA Reviews, 1940. 7. R. Vollmerhausen and R. Driggers, Analysis of Sampled Imaging Systems, SPIE Press, Bellingham, WA, p. 87, 2000. 8. http://graffiti.virgin.net/jlmayes.mal/car/tvband.htm, 2003. 9. S. Hsu, “The Kell Factor, Past and Present,” SMPTE Journal, pp. 206–214, February 1986. 10. K. Greeley and R. Schwartz, “F-22 Cockpit Avionics: A Systems Integration Success Story,” Proc. SPIE, Cockpit Displays VII, Vol. 4022, pp. 52–62, 2000. 11. www.NTSC-TV.com, 2003.
NTSC DISPLAY ANALOG VIDEO FORMAT
1. TV lines scanned: 525
2. Vertical retrace line time: the time equivalent to 44 lines
3. Active line scans: 481
4. Line scanning period: 63.556 µsec
5. Duration of time during which information is conveyed: about 53 µsec
6. Typical number of real color lines: 270 or 280
Discussion NTSC video is a standard for video timing. Although archaic, most American video displays accept it, and many require it. The above specifications give the engineer basic information as to the timing involved. Note that although NTSC specifies 525 lines, 44 exist during the "retrace time." This is the time allowed for an old cathode ray tube to reposition the electron beam back to the top of the display. However, this is important, because when one buys a 525-line camera and display for a security application, one should use only about 480 lines to calculate the resolution obtained on the display if using the line pair rules from Chap. 1, "Acquisition, Tracking, and Pointing." The NTSC has its roots in the balance between economics and technology of the 1940s. The VHF spectrum was allocated to 12 TV channels with a 6-MHz spacing. This results in slightly less than 4.5 MHz available for the video carrier and corresponds to a 525-line system. New HDTV and high-resolution security systems have different timing. For instance, the SMPTE 292 digital video specification calls for 60-Hz interlaced digital video of 1920 × 1080 pixels as well as several other resolutions, each with an associated standard. The NTSC analog specification identifies I, Q, U, and V. Jayne1,2 gives us the following lucid explanation of these values: Think of a color wheel, with red on top, then clockwise to magenta, then blue on the right, then turquoise on the bottom, then green, then yellow on the left, and so on. Let's
put black in the center and pastel colors along the borders of the page. With this color wheel, U stands for left to right, and the V stands for up and down. Next think of the same wheel turned a bit so orange is on top, purple is to the right, blue is on the bottom, and green is to the left. For this wheel, Q stands for left to right, and I stands for up and down. You can still describe any color as somewhere on the I scale and somewhere on the Q scale. Two color components (I and Q, or U and V) are needed, because the color wheel occupies a two-dimensional space. If there was just one color component, you would have to think of all the possible colors along one straight line. The electronic complexity of representing all the colors as one component signal versus two is comparable to the mechanical complexity of blending all shades of all colors on a thin strip of paper (it would have to be hundreds of feet long) as opposed to one 8 1/2- by 11-inch page with a color wheel drawn on it. Lesser color resolution means you will see on the screen more readily the blending of the edges of adjacent colors somewhat analogous to following a straight line from one spot to another on a color wheel. Sometimes a gap of white or black occurs between the adjacent contrasting colors. Lesser color resolution means that, as the electron beam draws a scan line, it may be unable to get all the way from one desired color to the next before it has to start changing to a third color for a spot yet further along the scan line. The terms I, Q, U, and V refer to the color component signals already modulated onto the color subcarrier, approx. 3.58 MHz for NTSC and about 4.43 MHz for PAL and SECAM. Simply adding the C signal to the Y signal produces composite video. The U signal when demodulated becomes the Pb (B-Y) part of component video. The V signal when demodulated becomes the Pr (R-Y) component. PAL and SECAM use U and V rather than I and Q. On alternating scan lines, the SECAM “C” signal consists of just the U or just the V. NTSC can (and often does) use U and V rather than I and Q to construct composite video, usually at the expense of restricting all colors to 48 lines of resolution.
References
1. Private communications with Allan W. Jayne, 2002.
2. http://members.aol.com/ajaynejr/vidcolor.htm, 2003.
3. www.NTSC.org, 2003.
4. J. Hall, "Characterization and Calibration of Signal-Generating Image Sensors," in Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 7-3 to 7-5, 2000.
5. G. Holst, CCD Arrays Cameras and Displays, JCD Publishing, Winter Park, FL, pp. 149–155, 1998.
THE ROSE THRESHOLD As described by Legault,1 the Rose threshold can be calculated from a displayed target by

k = a √(Nt) C / √(2 – C + N/Nt)

where
k = Rose threshold, defined as when detection occurs 50 percent of the time; k can be considered to be 1.7 for bar targets and 3.7 to 5 for disks
Nt = photon rate from the target area of the display
C = contrast, defined as (IT – IB)/IT, where IT is the target luminance and IB is the background luminance
N = dark mean-square noise in the same units as Nt
a = diameter of the object (assuming a disk)
Discussion The Rose threshold can be used to estimate the probability of detection, from a display, of objects of various sizes and contrast. It is similar to that in the Wald and Ricco rule. Similar work was published by Rose and Coltman in providing the limiting criterion for detection of objects on a display. Assume k to indicate the N50 for detection where 50 percent of the observers would detect the object, and 50 percent would not. Generally, this is about 1.7 for bar targets and 3 to 5 for a disk. The reader should review the contrast rule elsewhere in this book. This rule is for monochrome stationary objects. This rule does not account for dark adaptation of the eye. Note that this is a version of a signal-to-noise equation with the term a included—which corresponds to the diameter of the disk.
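A numerical sketch of the Rose threshold as reconstructed above. The grouping of terms should be checked against Legault's original, and the target size, photon rate, contrast, and dark noise used here are purely illustrative.

```python
import math

def rose_k(a_deg, n_t, contrast, dark_noise):
    """k = a*sqrt(Nt)*C / sqrt(2 - C + N/Nt), per the form given above."""
    return a_deg * math.sqrt(n_t) * contrast / math.sqrt(2.0 - contrast + dark_noise / n_t)

# Illustrative disk: 0.1 deg across, 5000 photons/s from the target area,
# 20 percent contrast, dark noise comparable to the photon rate.
k = rose_k(a_deg=0.1, n_t=5000.0, contrast=0.2, dark_noise=5000.0)
print(round(k, 2), "detectable" if k >= 3.7 else "below the ~3.7-5 disk threshold")
```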
References 1. R. Legault, “Visual Detection Process for Electro-Optical Images: Man—The Final Stage of an Electro-optical Imaging System,” in Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, p. 21-17, 2000. 2. A. Rose, “The Sensitivity Performance of the Human Eye on an Absolute Scale,” Journal of the OSA, Vol. 38, pp. 196–208, 1948. 3. A. Gorea and D. Sagi, “Disentangling Signal from Noise in Visual Contrast Discrimination,” Nature Neuroscience, 4(11), pp. 1146–1150, November 2001. 4. J. Coltman and A. Anderson, “Noise Limitations to Resolving Power in Electronic Imaging,” Proc. IRE, Vol. 48, p. 858, 1960.
WALD AND RICCO'S LAW FOR DISPLAY DETECTION Wald's equation for threshold detection from a display is

k = Ib C α^x

where
k = the Rose threshold, defined as the point at which detection occurs 50 percent of the time (Threshold k can be considered to be 1.7 for bar targets and 3.7 to 5 for disks.)
Ib = luminance (foot-lamberts) from the background
C = contrast, defined as (IT – IB)/IT, where IT is the target luminance and IB is the background luminance
α = angle (in minutes) subtended at the eye by a disk on the display
x = a constant that varies between 0 and 2

For small objects (<7 minutes of arc) and an x = 2, the above collapses to Ricco's law,

k = a^2 IB C

where a = diameter of the object (assuming a disk)
Discussion There was much activity in the 1940s and 1950s on quantifying the detection ability of a human viewing a display. The above allows one to calculate the Rose threshold based on the contrast, illumination, and size of the object.
The Rose threshold defines the N50 for detection and varies depending on target geometry and background clutter. This also assumes good human vision. Reference 1 points out that objects smaller than one minute of arc are difficult to see.
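For completeness, a minimal sketch of the Ricco's-law form for small disks; the luminance, contrast, and size values are illustrative.

```python
def ricco_k(diameter_arcmin, background_luminance_fl, contrast):
    """k = a**2 * Ib * C for small (< ~7 arcmin) disks, per the rule above."""
    return diameter_arcmin ** 2 * background_luminance_fl * contrast

# 2-arcmin disk on a 0.5 ft-lambert background at 30 percent contrast
print(ricco_k(2.0, 0.5, 0.3))   # compare against the ~3.7-5 threshold quoted for disks
```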
References 1. R. Legault, “Visual Detection Process for Electro-Optical Images: Man—The Final Stage of an Electro-optical Imaging System,” in Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 21-2 to 21-4, 2000. 2. http://www.ulib.org/webRoot/Books/National_Academy_Press_Books/emergent_tech/ cover001.htm, 2002. 3. G. Wald, G. and D. R. Griffin, “Change in Refractive Power of the Human Eye in Dim and Bright Light,” Journal of the OSA, Vol. 37, pp. 321–336, 1947. 4. Committee on Vision Commission on Behavioral and Social Sciences and Education, National Research Council, Emergent Techniques for Assessment of Visual Performance, National Academy Press, Washington, DC, 1985.
DISPLAY LINES TO SPATIAL RESOLUTION Wiltse1 gives the following:

Spatial resolution = 0.67 × HTVL/HFOV

where spatial resolution is defined in cycles/mrad, HTVL is the number of displayed horizontal television lines, and HFOV is the horizontal field of view in mrad.
Discussion Typically, displays and television cameras specify their resolution in terms of the number of HTVLs that they can produce. An important concept to remember is that HTVL is per picture height (even though it is horizontal resolution). This metric depends only on the camera and is independent of the type of lens used. It applies to one-chip and three-chip CCD cameras and older tube cameras, and to both color and monochrome cameras and displays. Horizontal spatial resolution is proportional to HTVL resolution. If the above HFOV is in milliradians, then the calculated spatial resolution is in the popular cycles/milliradian. The constant of proportionality, 0.67, is based on the fact that a cycle is defined as two lines, and display resolution is always given in terms of picture height (PH), so the result must be multiplied by the 4/3 aspect ratio used in NTSC and PAL. Stated another way, horizontal spatial resolution = horizontal cycles/horizontal FOV, but horizontal cycles = (4/3) × HTVL/PH/2. The factor of 2 comes from the fact that there are two lines per cycle. The factor of (4/3) is the aspect ratio. Note that color resolution is frequently less than defined above, especially for one-chip cameras. Also, note that the Kell factor is not included here. It is included in the conversion of pixels (or resolution elements) to analog TV lines of resolution. For example, 494 CCD pixels (vertical) don't result in 494 lines of displayed resolution; typically, you get less as a result of the Kell factor, pseudo-resolution (discussed in the chapter introduction), and other factors. Williams2 cautions the reader that resolution can be normalized to a square screen resolution specified in TV lines per picture height (TVL/ph). The normalized measurement lets you compare the horizontal resolution of a TV system (the figure normally quoted) with the vertical resolution, which is fixed by the number of scan lines used and the kind of scanning performed (interlaced or progressive). This eliminates any dependency on aspect ratio (4:3 or 16:9). Thus, a camera resolving 600 TV lines produces those lines across a width of the image equal to the picture height. If the camera acquires 4:3 images, the camera can actually resolve 800 TV lines across the entire picture (4/3 multiplied by 600); if the camera shoots true 16:9 images, it resolves 1067 TV lines across its entire picture width (16/9 times 600 = 1067). This explains why true 16:9 switchable cameras list the same resolution in both 4:3 and 16:9 modes; the figures are normalized to picture height, even though more pixels per line are used in 16:9 than in 4:3. If that isn't bad enough, there is also the question of what sort of picture you'll get as you approach the specified resolution figure—the absolute limits of resolution. In all video systems, aperture response tends to decrease as frequency increases. In other words, resolving power starts falling apart as the limit is approached, because the detail being captured actually becomes smaller than the individual pixels on the CCD (or smaller than the diameter of the scanning beam in analog systems). At least one manufacturer specifies resolution at the point where the response is only five percent, which is a reasonable methodology. Other manufacturers measure the point at which the curve actually intersects the noise floor and no detail can be seen at all. Limiting resolution can be precisely that limit, and not necessarily the usable limit from a more practical point of view. The interested reader should see the related rule, "Williams' Lines of Resolution per Megahertz," in Chapter 18.
Example Assume that you have a 480-HTVL camera (this is typical for a one-chip color CCD camera). You put on a lens that gives you a 1.6° horizontal FOV (≈27 mrad). Your spatial resolution is 0.67 × 480/27 mrad, or 11.9 cycles/mrad.
References 1. Private communications with Dr. John Wiltse, 2003. 2. Private communication with George Williams, 2003.
Chapter
8 The Human Eye
The function of the human eye has been a source of wonder for millennia. What is more, the advancements in modern science that have allowed us to fully characterize the eye functions have led to many more questions than can be answered. By itself, the optical performance of the eye is quite poor. Even in persons with ideal vision, the image falling on the retina (the eye’s detectors) has aberrations. Parallel lines appear to be curved toward and away from one another, the image is inverted relative to the objects in the scene (as in most other imaging systems) and, above all, there is a point at which there is no vision at all. Most surprising is that everywhere in the retina, except at the fovea (the area of highest acuity), light has to pass through a number of layers of tissue and blood vessels to get to the photosensitive material. In the area of the fovea, there is a “pit” where the overlayers are not present. The exposed cones in the fovea have a density five to six times higher than in the periphery. The individual cones are about 3 µm in extent and hexagonal in shape. Fortunately, the optical system is just the beginning of the vision system. Behind the eye is the most powerful computer known to man, and it corrects virtually all of these faults— although a pair of glasses or contact lenses are often invaluable in providing the finishing touch on the system. Only because the eye is connected to the brain can we understand why vision of any quality occurs at all. The brain not only adapts to the aberrations of the normal eye, it can gradually accommodate insults that no EO instrument could deal with; the employment of inversion lenses causes the world to appear inverted for a time (usually on the order of days), but this is eventually corrected by an inversion in the brain’s interpretation of the image. Even the effects of highly distorting glasses are eventually overcome by processing that results from experience and practice. EO designers dream of a world in which the inevitable image distortions that occur in their systems could be removed by a computer system that is able to practice and learn. Perhaps a mature version of neural network technology will provide this capability. There is no way we could have captured in this chapter, let alone this entire book, the range of modern understanding of the eye and the vision process. It would have been of little value to attempt to describe in this chapter the operation of the eye and vision system. Thousands of books and web sites provide an indication of the state of the art, with new advancements coming every day. Rather, we concentrated on simple rules that provide bits
of information that could be of use to EO designers, providing a few insights into knowledge they should have to
1. Understand the most sophisticated imaging system
2. Deal with the many applications in which designs must interact with human vision
3. Understand the application of modern EO methods to characterization of the eye and vision systems
This last point deserves some attention. Much of the current research in eye function is now conducted in the context of machine imaging techniques and uses the measures of merit from that field, such as MTF, spatial frequency analysis, Fourier transforms, Zernike coefficients, and so forth. The technically advanced reader with an interest in this area can gain a wealth of information from Ref. 1. This web site includes wavefront characterizations of a number of human eyes for many of the Zernike coefficients. Perhaps most interesting is the material that shows how many aberration orders need be corrected to provide diffraction-limited sight for an eye adapted to bright light versus the much larger number required to achieve the same level of correction for a dark-adapted (large) pupil. As in other chapters, we have emphasized the collection of simple equations that can be useful in estimating complex features of a system. In this case, we have included algorithms that describe the density and properties of rods and cones, how pupil size varies with illumination level, the units of measurement common in dealing with vision, how to take pleasing stereo photographs, the properties of color blindness, the frame rate that provides for a true motion picture feeling, optical and physical properties of the eye and retina, how aging affects vision, estimating retinal quantum efficiency, and many others. For those interested in how people adapt to a life with profound color blindness (achromatopsia), we cannot do better than suggest Oliver Sacks' The Island of the Colorblind.2 Among the surprises revealed when studying the eye are the amazing feats it performs constantly, without our knowledge. Everyone is aware of the "blind spot" that the brain fills in by some unknown means. At the same time, the eye never rests, constantly moving by jumps known as saccades. Curiously, when optical techniques are used to suppress the constant motion of the eye, any image in the field fades from view after a few seconds, as documented by Crick, The Astonishing Hypothesis.3 It seems that the eye/brain vision system ceases to function if the incoming light field is static. This is but one of many odd features of the vision system, most of which are beyond the scope of this book. The interested reader will want to look at Crick's book as well as any of dozens of others, as they provide examples of visual illusions, figures that help illustrate how the blind spot works, and other visual tricks. A good online example is Ref. 4. Another surprising capability of the eye is its extreme sensitivity, once dark adapted. While not as effective as electronic detectors, the eye offers an extraordinary range of performance, with the ability to deal with just a few photons per second when fully dark adapted and very bright conditions as well. All students of science who know the stories of early studies in nuclear physics have heard that there was no better method than a dark-adapted human (graduate student) for detecting the tiny flashes that were of so much interest at that time.
Just as surprising is the fact that light must pass through the retina to get to the sensitive cells that convert the light to electrical signals. The complexity of the functions of the eye should make it no surprise that many of its observed capabilities and limits can be described only empirically. Quantitative theories of eye function are limited to those that deal with the parts of the system: rods, cones, nerve cells, and so on. The operation of the whole system is not well understood at all. Accordingly, many of the rules in this chapter are descriptive in nature; they do not, and cannot, include descriptions of why the phenomena occur. As we find in this chapter, the eye is capable of diffraction-limited performance (although this is rare in the general population), in spite of all of the odd features of the system.
It also has huge fields of view and fields of regard, rivaling the performance of a wide-field sensor on a gimbal. It accommodates a wide range of illumination and reacts quickly to changing lighting conditions. Furthermore, experiments show that some structures in the optic nerve resemble nerve cells of the brain and may actually participate in image "preprocessing." This theory is supported somewhat by the fact that the time delay in communicating the presence of light into the brain is far longer than the observed reaction time to image motion. The possibility is that the optic nerve and the brain, working together, are able to keep up with the information flow. Modern explanations of the strange properties of Benham's disk (see, for example, Ref. 5) derive from the different rates at which different color sensors communicate with the brain. When a Benham disk is rotated, nearly everyone sees colors, despite the fact that only black or white segments exist on the disk. It is worth noting here that the brain consumes about 20 percent of the energy used by the entire body. A typical value of 20 W is commonly cited in the literature. The vision system made up of the eye and brain is covered in many texts at all levels. The interested reader can also find frequent contributions in this vital field in magazines such as Scientific American. Professional journals require a strong foundation in both optics and biology and are reserved for the expert. A particularly fascinating discussion of the function of the optical processing component of the brain is provided by Francis Crick3 and several of the books by Oliver Sacks. Sacks has written on a number of vision-related problems associated with genetic defects, brain injury, and disease. In addition, V. Ramachandran (Phantoms in the Brain) has introduced a number of simple techniques for illustrating how the eye/brain system functions and how it can be "tricked" to achieve a desired effect. For example, Ramachandran has shown through simple techniques how the brain "fills in" the empty part of the visual field that results from the blind spot in the retina. His work contributes to the use of vision to diagnose brain injury and other pathology. There is also an active research community in the military that is concerned with target detection phenomena and the way the mind/brain processes images. SPIE occasionally publishes compendia of articles dealing with detection processes, psychometric performance, and vision in nonideal conditions. As in all other cases, the reader is likely to find interesting (but unvetted) information on vision on the web. A particularly thorough assessment of a model of the eye is found in the work of David Salomon.7 The current emphasis on vision science is not just academic. In the last 120 years, there has been an explosion in the number of presentation technologies in use. Prior to the invention of the photograph, few humans had seen imagery other than real scenes. Only a few had seen paintings. These days, we have LCD, plasma, projection, and electron beam screens; digital and analog large screens for stadia and theatres; laser, dye sublimation, and dot matrix printed imagery; color photographs; stereo photographs; and many more. Each technology has its limitations in spectral and spatial resolution, brightness, size, and frame rate, so knowledge of how vision works can be important in creating pleasing and highly detailed images, both moving and still.
In addition, the range of technologies related to vision correction has been expanding as well. Their full exploitation requires attention not only to optical performance but also to the behavior of the brain. This has allowed, for example, the development of corrective surgery that allows a patient to have high-performance vision in both the near and far field by exploiting the brain’s ability to deal with eyes that have (purposely) two different focal lengths. As a result of all of these developments, vision science has become and will continue to be an important component in the advancement of presentation technology.
References 1. http://www.cvs.rochester.edu/williamslab/research/option01.html, 2003. 2. Oliver Sacks, The Island of the Colorblind, Vintage Books, New York, 1998.
3. F. Crick, The Astonishing Hypothesis, Scribners, New York, 1994.
4. http://www.exploratorium.edu/seeing/exhibits/changing.html, 2003.
5. www.exploratorium.edu/snacks/behhams_disk.html, 2003.
6. V. S. Ramachandran, Phantoms in the Brain: Probing the Mysteries of the Human Mind, Quill Books, William Morrow & Co., New York, 1999.
7. http://www.ecs.csun.edu/~dxs/DC2advertis/AppenH.pdf, 2003.
CONE DENSITY OF THE HUMAN EYE The following equation provides a simple model for the density of cones (NC) as a function of angular distance (e) from the center of the retina and the cone density in the center (NCO).1

NC = NCO [0.85/(1 + (e/0.45)^2) + 0.124/(1 + (e/6)^2) + 0.026]
Discussion We see from Fig. 8.1 that the density of cones near the center of the retina is about 10,000 per square degree. The eccentricity (e) is the angular distance from the center of the retina expressed in degrees. The reader will note that the formula shows the superposition of three terms. Reference 1 indicates that the first term represents the density variation in the fovea, the second term represents the density variation in the area between fovea and periphery, and the third term represents the density in the periphery. The density in the center of the fovea is assumed to be 12,000 per square degree, which corresponds to 142,000 cells/mm2. Cones are the component of the eye that is sensitive in high light situations, when the eye is exhibiting photopic response and is sensing color. The photopic mode occurs when the eye has adapted to light levels that exceed about 3 candela/m2. At much higher luminance levels (around 1000 cd/m2), color vision is best. This occurs in bright indoor lighting, which is just less than outdoor sunlit luminance.
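The model is easy to evaluate. Note that the pairing of the 0.85 and 0.124 coefficients with the 0.45° and 6° terms follows the reconstruction above (chosen so the density falls off steeply outside the fovea) and should be verified against Barten's text; the eccentricities below are illustrative.

```python
def cone_density(e_deg, n_c0=12000.0):
    """Cones per square degree at eccentricity e_deg from the center of the retina."""
    return n_c0 * (0.85 / (1.0 + (e_deg / 0.45) ** 2)
                   + 0.124 / (1.0 + (e_deg / 6.0) ** 2)
                   + 0.026)

for e in (0, 1, 5, 10, 20):
    print(f"{e:2d} deg: {cone_density(e):7.0f} cones per square degree")
```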
FIGURE 8.1
Cone density reaches about 10,000 cells per square degree near the center of the retina.
Finally, we can offer a simple equation that models the photopic response of the eye. It is based on the luminosity function,

f(λ) = 2.51189 × 10^32 e^(–100.937/λ)/λ^182.19

where wavelength is expressed in micrometers. Figure 8.2 shows the comparison of the model above and measured data.2 Yet another model for cone response is3

f(λ) = 1.019 e^(–285.4(λ – 0.559)^2)

FIGURE 8.2 Photopic response of the eye peaks near 550 nm.
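Both closed-form approximations can be coded in a few lines (a sketch; wavelengths are in micrometers, and the expressions follow the reconstruction above).

```python
import math

def photopic_response_1(lam_um):
    """f(lambda) = 2.51189e32 * exp(-100.937/lambda) / lambda**182.19."""
    return 2.51189e32 * math.exp(-100.937 / lam_um) / lam_um ** 182.19

def photopic_response_2(lam_um):
    """f(lambda) = 1.019 * exp(-285.4 * (lambda - 0.559)**2)."""
    return 1.019 * math.exp(-285.4 * (lam_um - 0.559) ** 2)

for lam in (0.50, 0.55, 0.60):
    print(lam, round(photopic_response_1(lam), 3), round(photopic_response_2(lam), 3))
```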
References 1. P. Barten, Contrast Sensitivity of the Human Eye and Its Effects on Image Quality, SPIE Press, Bellingham, WA p. 70. 1999. 2. R. Kingslake, Applied Optics and Optical Engineering, Vol. 1, Light: Its Generation and Modification, Academic Press, New York, 1965. 3. J. Palmer, “Radiometry and Photometry FAQ,” http://www.optics.arizona.edu/Palmer/ rpfaq/rpfaq.htm, 2003.
DATA LATENCY FOR HUMAN PERCEPTION It has been well documented that displayed video data latency for complex functions is not perceptible if less than about 33 msec.
Discussion When designing systems such as night vision sensors for aircraft pilotage, it is necessary to distinguish between control loop delay and video delay. This is because the whole body is in motion, and delays in the video loop will cause a disconnect between the vestibular and visual senses. Experiments have shown that subjects can “train into” long control loop delays, but long video delays lead to flight problems. The vast majority of published experiments either involve control loop delays or do not distinguish between the type of delay implemented. In one short study involving solely video delays,2 the helicopter pilots did not notice a one-frame delay (33 msec). However, flight control problems resulted with delays of 66 msec. At 100 msec, the pilots experienced a serious disconnect between the vestibular and visual, which badly degraded flight performance.
References 1. R. Vollmerhausen, private communication, 2003. 2. R. Vollmerhausen and T. Bui, “Sensor System Psychophysics,” in Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 26–35, 2000.
DYSCHROMATOPIC VISION Presentations of all kinds (paper, electronic, natural) can be confusing to people who are color blind.1–4 Keep in mind that at least 8 percent of the population is affected by this syndrome.
Discussion Professional optical engineers and scientists will have interest in this phenomenon for the following four reasons:
1. There is a likelihood that they are afflicted with this problem themselves.
2. They might create color presentations and should be concerned about reaching their entire audience.
3. Displays depicting color visible images, false color images, color symbology, multispectral color images, and color fusions may be encountered.
4. General curiosity may lead them to the subject.
This genetic defect is called dyschromatopsia and affects the approximately 6 million cones in the retina. These cells are responsible for color vision. Rods (approximately 100 million of them) are responsible for dark-adapted vision, in low-light conditions. Dyschromatopsia can also result from disease, certain drugs, and other health problems such as brain injury or stroke. Reference 4 offers the following definitions and a good plan for the creation of presentations that are accessible to those with color deficiencies.
Protanopia: Red blindness (blue-green appears gray, red-purple appears gray)
Deuteranopia: Green blindness (green appears gray, purple-red appears gray)
Tritanopia: Blue blindness
Protanomalia: Blue-green appears indistinct grayish, red-purple appears indistinct grayish
Deuteranomalia: Green appears indistinct grayish, purple-red appears indistinct grayish
Most people who have deficient color perception are not completely "color blind." These people are more accurately color deficient or dyschromatopic. The percentage of the population afflicted by this condition makes the problem important for those who design any signage seen by the public, including web sites.2,4 Table 8.1 summarizes data from a number of sources. TABLE 8.1 Dyschromatopsia Summary (all units in %)
Caucasian | Asiatic | Arabic | Afro-American | Hispanic | American Indian | Others
Male: 5–8.0 | 5.0–6.5 | 5.3 | 3.8–6.35 | 2.3 | 2.0 | 3.0
Female: 0.5–1 | 0.5–1.7 | 0.0 | 0.0–0.15 | 0.6 | 0.0 | 0.5
A tiny number of Caucasians suffer from complete color blindness. This is a rare retinal defect affecting the cones of 0.003 percent of Caucasian males.5 They are achromatic, also known as monochromatic. Monochromats can suffer from other effects, including extreme sensitivity to light and difficulties with close focusing. People with partial color blindness are either dichromatic or anomalously trichromatic. These people can see red, green, and blue (corresponding to the peaks of the “normal” sensitivity curves), but either the red curve is shifted toward the green range of the spectrum, or the green curve is shifted toward the red. Anomalous trichromats with deficiency in the green range account for more than half of people with color vision deficiencies; about five percent of all males have this condition. The incidences of some effects are shown below: ■ Anomalous trichromacy, 1 percent male, 0.01 percent female ■ Protanopia (lack of cones sensitive to red light) and deuteranopia (lack of cones sensitive to green light), 1–2 percent male, 0.01 percent female ■ Tritanopia (lack of cones sensitive to blue light, which results in the inability to distinguish blue and green) 0.003 to 0.01 percent5 ■ Rod monochromacy, 0.01 percent ■ Cone monochromacy, 0.001 percent Others can suffer from missing red, green, or blue-sensitive pigment. Conditions exist in which people have all three pigments, but one or more may be abnormal. The challenge for the optical designer is to create products that can be used by the largest number of people, at reasonable cost, while allowing those with dyschromatopsia to use the presentation. Because the majority of color-deficient people are red-green deficient, we should pay special attention to red-green confusion. In fact, recent decisions by science publishing companies in Japan have come to the aid of color-blind biologists who are hard pressed to understand images of stained tissue in journals and on web sites.2 This will be done by converting the reds that normally appear in color images into magenta, which has enough blue to make them visible to the color-blind scientific audience. For presentations, a black-and-white color scheme certainly works but is uninteresting to those with full vision. Light colors on black is better, even for those with significant color loss. Some other details are as follows: ■ An achromat is born without any cones in the retina. ■ “A small subset of women (whose sons are dichromats) have an additional rhodopsin protein in their photoreceptors with peak sensitivity in the red. This sorority of tetrachromats would be able to see colors forever hidden to males. This finding would also have implications for gene therapy.”1
References
1. http://www.klab.caltech.edu/cns120/Proposal/proposal_topics.html, 2003.
2. "Breaking the Color Barrier," Science, Vol. 298, p. 1551, November 22, 2002.
3. http://jfly.nibb.ac.jp/html/color_blind/text.html, 2003.
4. J. Halter, "To Improve Visualization by Color Deficient Vision Users, Avoid Using Color as the Only Visual Cue," http://coe.sdsu.edu/et640/POPsamples/jhalter/jhalter.htm, 2003.
5. www3.iamcal.com/toys/colors/stats.php, 2003.
6. http://www.islanddiscs.freeserve.co.uk/access/colour.htm, 2003.
7. Procedures for Testing Color Vision, National Academy Press, Washington, DC, pp. 9–11, 1981.
8. Data from the National Health Survey, "Color Vision Deficiencies in Youths 12-17 Years of Age," United States DHEW Publication No. (HRA), 74-1616, Series 11, Number 134, January 1974.
ENERGY FLOW INTO THE EYE For monochromatic light and photopic vision, 1 W is equal to 683 V(λ) lumen, where V is the standard curve for the sensitivity of the eye in photopic mode.
Discussion A number of units are used to describe the flow of radiation into the eye and the response that is generated. Here is a summary of those units. First, we note that the energy, ε, contained in a photon depends on wavelength (λ) according to

ε = hν = hc/λ = 1.9858 × 10^–16/λ joule

where h is Planck's constant. The constant shown on the right of the equation applies when wavelength is expressed in nanometers. This provides the following relationship of power and photon rate. To create 1 W at a single wavelength (again, with wavelength in nanometers; other rules regarding this relationship use micrometers, so the exponent is different), we need the following number of photons per second:

1 W = (1/1.9858) × 10^16 λ photons/sec

For a more common case in which the radiation covers a range of wavelengths, we get the following form:

1 W = (1/1.9858) × 10^16 [∫P(λ)λ dλ / ∫P(λ) dλ] photons/sec

where P(λ) is the spectral energy distribution function of the light source. Photopic vision is dominant when cones (color vision) are stimulated by bright illumination, as occurs during the day. We can also express lumens in terms of photons per second for monochromatic light,

1 lumen = 7.373 × 10^12 λ/V(λ) photons per second

A relatively new definition is the Troland (Tr), defined as
1 Troland = 1 candela/m² × 1 mm² = 10^–6 candela = 10^–6 lumen/steradian = 3.0462 × 10^–10 lumen/deg²

It is roughly true that, for monochromatic light,

1 Troland = 2.246 × 10^3 λ photons/sec/deg²
We can also define some commonly used terms that convert between photometric and radiometric units. First is the definition of lumens,

θ_v = K_m ∫_360^830 θ_e,λ V(λ) dλ

where
θ_v = luminous flux in lumens (note that lm = cd ⋅ sr)
θ_e,λ = radiant flux in W ⋅ nm⁻¹
V(λ) = spectral luminous efficiency for photopic vision
K_m = maximum spectral luminous efficiency (683 lm ⋅ W⁻¹)
and the limits of integration are wavelength in nanometers.

Candela are defined in the following equation:

I_v = K_m ∫_360^830 I_e,λ V(λ) dλ

where
I_v = luminous intensity in candela (note that cd = lm ⋅ steradian⁻¹)
I_e,λ = radiant intensity in W ⋅ sr⁻¹ ⋅ nm⁻¹
V(λ) = spectral luminous efficiency for photopic vision
K_m = maximum spectral luminous efficiency (683 lm ⋅ W⁻¹)
and the limits of integration are expressed as wavelength in nanometers.
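As a quick illustration of these conversions (not part of the original text), the following Python sketch turns the relationships above into functions; the constants are those given in this rule, and the function names are ours.

```python
K_M = 683.0      # lm/W at the photopic peak (V = 1 at 555 nm)
HC = 1.9858e-16  # h*c expressed in joule-nanometers, as in the equation above

def photons_per_second_per_watt(wavelength_nm):
    """Photon rate carried by 1 W of monochromatic light (wavelength in nm)."""
    return wavelength_nm / HC            # = (1/1.9858) x 10^16 * wavelength

def photons_per_second_per_lumen(wavelength_nm, v_lambda):
    """Photon rate carried by 1 lumen, given the photopic efficiency V(lambda)."""
    return photons_per_second_per_watt(wavelength_nm) / (K_M * v_lambda)

def trolands(luminance_cd_m2, pupil_area_mm2):
    """Retinal illuminance in Trolands: luminance times pupil area (mm^2)."""
    return luminance_cd_m2 * pupil_area_mm2

print(photons_per_second_per_watt(555))        # ~2.8e18 photons/s
print(photons_per_second_per_lumen(555, 1.0))  # ~4.1e15 photons/s
print(trolands(100.0, 12.6))                   # ~1260 Td for a ~4-mm pupil
```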
Reference 1. P. Barten, Contrast Sensitivity of the Human Eye and Its Effects on Image Quality, SPIE Press, Bellingham, WA, pp. 60–62, 1999.
EYE MOTION DURING THE FORMATION OF AN IMAGE The human eye fixes on particular parts of an image for about one-third of a second. Small motions called saccades can last as little as a few milliseconds and can involve angular speeds up to about 500°/sec. At the end of the saccadic motions, vision occurs.
Discussion Few topics have so held the attention of physiologists as the eye. Vast areas of research have been completed that relate to the function of the eye as a light-gathering system, its connection with the muscles of the face, and its connection with the image processing system behind it—the human brain.
During motion, vision is suppressed. A fixation of the eye involves two to three saccades. The number of eye fixations and sequential pattern describes the search pattern and whether the process is attentive or preattentive (consider the difference between “looking” at a page and reading words). Also, the properties of the saccades depend on the task given to the viewer and the nature of the scene. Understanding the motion of the human eye is needed for design of the size, resolution, contrast, and color of displays and their placement (e.g., in a cockpit). This is also useful when designing systems that track or follow the human eye. Yarbus1 suggests that the duration of a saccade can be modeled by

T_s = 0.021 δ_s^(2/5)

where
T_s = duration of the saccade in seconds
δ_s = its amplitude in degrees

The fixation (vision) period takes up to about 95 percent of the glimpse time, the rest being left for the very short saccades. Reference 3 provides nice examples of how saccades progress during the process of reading.
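A minimal sketch (ours, not the authors’) of the Yarbus saccade-duration model and the resulting glimpse time; the one-third-second fixation is taken from the rule statement above.

```python
def saccade_duration_s(amplitude_deg):
    """Yarbus model: T_s = 0.021 * amplitude^(2/5), amplitude in degrees."""
    return 0.021 * amplitude_deg ** 0.4

def glimpse_time_s(amplitude_deg, fixation_s=0.3):
    """One saccade plus one fixation of roughly one-third of a second."""
    return saccade_duration_s(amplitude_deg) + fixation_s

for amp in (1, 5, 20):
    print(f"{amp:2d} deg saccade: {1000 * saccade_duration_s(amp):4.0f} ms, "
          f"glimpse {glimpse_time_s(amp):.2f} s")
```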
References
1. A. Yarbus, Eye Movements and Vision, Plenum Press, New York, 1967.
2. J. Lloyd, “Fundamentals of Electro-Optical Imaging Systems Analysis,” in Vol. 4, Electro-Optical Systems Design, Analysis and Testing, M. Dudzik, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 107, 1993.
3. http://www.4colorvision.com/reading/foveola.htm, 2002.
FREQUENCY AT WHICH SEQUENCES OF IMAGES APPEAR AS A SMOOTH FLOW The actual relationship of frequency to the perception of moving images is logarithmic as a function of brightness. That is, the critical frequency at which a “motion picture” effect is seen is linear with the logarithm of the illumination.
Discussion The invention of the motion picture led to explorations of the physical phenomenon that allows humans to see a smooth flow of images rather than the individual pictures. In an evolutionary sense, the fact that this happens at all is a bit of a surprise. After all, no one had ever been exposed to a sequence of short-exposure images until civilization emerged, yet we have this capacity. Ferry (1892) and Porter (1902) studied this phenomenon and found that the frequency at which the smooth motion picture effect was seen by most people depends on the illumination of the scene. They found that the logarithm of the luminance (brightness) of the scene could be used to predict how many frames per second must be shown to a viewer to simulate the continuity that is seen in real life. A more modern study was conducted by Tyler and Hamer. The results are shown in Figs. 8.3 and 8.4 for different viewing conditions.
FIGURE 8.3 Critical flicker frequency (CFF) as a function of retinal illuminance measured by Tyler and Hamer (1990) for a circular field with a diameter of 0.5° and 0.05° with a 100 percent modulated sinusoidal temporal luminance variation. Viewing was monocular.
FIGURE 8.4 Critical flicker frequency as a function of the luminance for a CRT image seen with a subtended angle of 30°. Data points: CFF measurements by Farell et al. (1987) of the 90 percent flicker limit that corresponds with a chance of 10 percent for seeing flicker. Solid curve: 90 percent limit calculated with our model. Dashed curve: same calculation for 50 percent probability of seeing the flicker.
References
1. P. Barten, Contrast Sensitivity of the Human Eye and Its Effects on Image Quality, SPIE Press, Bellingham, WA, pp. 115, 117, 1999.
2. C. Tyler and R. Hamer, “Analysis of visual modulation sensitivity, IV, Validity of the Ferry-Porter law,” Journal of the OSA, Vol. A7, pp. 743–759, 1990.
EYE RESOLUTION The human eye can resolve better than one minute of arc and is stabilized by reflex movements.
Discussion
As we will see below, the human eye is essentially diffraction limited for those with quality vision or corrective lenses. Another rule in this chapter shows that the pupil diameter is around 5 mm for nominal light levels. If we use the standard diffraction formula for defining the angular diameter of a point source,

2.44 λ/D

we obtain that, for light of a wavelength of 0.5 µm, the angular resolution is 244 microradians. When converted to arcseconds, we find that the resolution is about 50 arcseconds. A person with a visual acuity of 1.5 can resolve 40 seconds of arc, while an average person can resolve 1 minute of arc, which equates to a visual acuity of 1. Said another way,

Resolution = 60 arcseconds / acuity

Reference 1 makes this clear by stating
. . . the eyes are constantly in motion, even when a person is consciously trying to fixate on a given point . . . . it seems that the rods and cones become desensitized if the irradiance falling on them is absolutely unchanging.
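As a small check of the numbers above (our own sketch, with illustrative function names):

```python
import math

def diffraction_blur_arcsec(wavelength_um=0.5, pupil_mm=5.0):
    """Angular diameter of the diffraction blur, 2.44*lambda/D, in arcseconds."""
    theta_rad = 2.44 * (wavelength_um * 1e-6) / (pupil_mm * 1e-3)
    return theta_rad * (180.0 / math.pi) * 3600.0

def resolution_arcsec(acuity):
    """Rule of thumb: resolution = 60 arcseconds / visual acuity."""
    return 60.0 / acuity

print(diffraction_blur_arcsec())   # ~50 arcsec for a 5-mm pupil at 0.5 um
print(resolution_arcsec(1.0))      # 60 arcsec for average vision
print(resolution_arcsec(1.5))      # 40 arcsec for sharp vision
```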
Reference 1. G. Waldman and J. Wooton, Electro-Optical Systems Performance Modeling, Artech, Norwood, MA, p. 185, 1993.
LITTLE BITS OF EYE STUFF
1. Typical ambient luminance levels (in candela/m²) are
   a. Starlight 10^–3
   b. Moonlight 10^–1
   c. Indoor lighting 10^2
   d. Sunlight 10^4
   e. Maximum intensity of common CRT monitors (from Ref. 1), 10^2
2. Estimates of the size of the retina range from 22 to 50 mm. Area of the human retina is 1094 mm², calculated from the expectation that the average dimension of the human eye is 22 mm from anterior to posterior poles and that 72 percent of the area inside of the globe is retina.2
3. The eyes are 6 cm apart and halfway down the head.1
4. The range of pupil diameters is 1–8 mm, depending on illumination level.
5. The visible spectrum is 370–730 nm.
6. Peak wavelength sensitivity is1
   a. Scotopic: 507 nm
   b. Photopic: 555 nm
7. Visual angles of common objects are1
   a. The Sun or Moon = 0.5°
   b. Thumbnail (at arm’s length) = 1.5°
   c. Fist (at arm’s length) = 8 to 10°
   d. Monocular visual field, 160° (w) × 175° (h)
   e. Approximate visual field, 200° (w) × 135° (h)
   f. Region of binocular overlap, 120° (w) × 135° (h)
8. One degree of visual angle = 0.3 mm on the retina1
9. Total number of cones in the retina is 5,000,000 to 6,400,000.1,3
10. Total number of rods in the retina is 110,000,000 to 125,000,000.1,3
11. Density of rods and cones, in cells/degree²,2 is about an areal density of 142,000 cells/mm². Figure 8.5 illustrates the density of rods and cones as measured in 1935.3
12. One degree of visual angle is equal to 288 µm on the retina.
13. Typical localization threshold is 6 arcsec (0.5 µm on the retina).1
14. The minimum temporal separation needed to discriminate two small, brief light pulses from a single equal-energy pulse is 15–20 ms.1
15. Some useful units in vision science are given below:
   a. Radiance is measured in watts/sr/m². Radiometric units are those used to describe measurements conducted without concern for the response of the human eye.
   b. Photometric units (lumens, candela, lux) adjust radiometric units for visual wavelength sensitivity.
   c. Lux are units of illumination. Thus, a light intensity of 1 candela produces an illumination of 1 lux at 1 m.
   d. One Troland (Td) of retinal illumination is produced when an eye with a pupil size of 1 mm² looks at a surface whose luminance is 1 cd/m².1
   e. Scotopic luminance units are proportional to the number of photons absorbed by rod photoreceptors to give a criterion psychophysical result.
   f. Photopic luminance units are proportional to a weighted sum of the photons absorbed by L- and M-cones to give a criterion psychophysical result.
16. Following exposure to a sunny day, dark adaptation to a moonless night involves 10 minutes (photopic), 40 minutes (scotopic), and a change in visual sensitivity of 6 log10 units.1
17. The minimum number of absorptions for scotopic detection is 1 to 5, for detectable electrical excitation of a rod is 1, and for photopic detection is 10 to 15.1
18. The highest detectable temporal frequency is (for high ambient, large field) 80 Hz and (for low ambient, large field) 40 Hz.1
FIGURE 8.5 Rods and cones complement each other in focal plane density. This data is derived from actual measurements.
Discussion
This collection of mini-rules provides some easy-to-use ideas for characterizing the human eye as a detection device. No doubt there are thousands of little tidbits that could be included, but space confines us to these important features. All are approximations.
References 1. B. Wandell, Foundations of Vision, Sinauer Associates, Inc., Sunderland, MA, 1995. 2. R. Michels, C. Wilkinson, and T. Rice, Retinal Detachment, C. V. Mosby, p. 17, 1990. 3. G. Osterberg, “Topography of the Layer of Rods and Cones in the Human Retina,” Acta Ophthalmologica, Vol. 13 Suppl. 6, pp. 1–97, 1935. 4. See also http://webvision.med.utah.edu/facts.html, 2003.
OLD-AGE RULES
Visual acuity declines with age. About 13 percent of those over 65 complain about some sort of visual impairment, compared with 28 percent of those over 85. More than 90 percent of the elderly require some type of corrective lenses or eye surgery.
Discussion As the public ages, designers of all types of displays will have to take note. Changes in illumination and other accommodations are needed. Older viewers find difficulty in environments that require rapid adaptation to dim lighting or perception of detail at changing distances. Moreover, the older viewer loses the ability to detect subtle changes in color, pattern, or detail. The images in Fig. 8.6 compare the visual defects associated with common eye diseases in the elderly.
FIGURE 8.6 Visual defects associated with common eye diseases in the elderly. (Figures from www.nei.hih.gov/amd/areds_photos.htm.)

Illumination and careful selection of colors, shades of gray, and patterns can all be tools to overcome some of the limits of the aging public while providing better recognition for younger viewers as well. Corrective lenses help, as described below. Deficits of the aged have been found in adaptation to darkness, visual acuity, contrast sensitivity, color discrimination, detection and recognition of moving objects, visual search, night vision, and glare sensitivity. These declines in visual function with age reflect losses in the quantity and quality of light reaching the retina, lost ability of the lens to bring near objects into focus (accommodation), changes in the nervous system structures serving the eye, and the increased prevalence of ocular disease in old age. According to the Framingham Eye Study,1 approximately 92 percent of persons aged 65 to 74 possess visual acuities of 20/25 or better when fitted with their best refractive corrections. However, the ability to compensate for visual losses with eyeglasses or contact lenses becomes increasingly limited as age advances, because only 69 percent of those aged 75 to 85 can be corrected to 20/25 or better. The major environmental interventions that have been shown to enhance visual function in elderly individuals include increased levels and better distribution of illumination, control of glare, increased stimulus contrast, and reductions in visual “clutter.” In several specific cases, such as driving and taking medications, the aged population with vision deficits is a risk to itself and others. Improvements in performance and comfort are proportional to the square root of the changes in the amount of available light. In general, the designer can help by doing some simple things: use multiple sources of light rather than a single bright luminaire, and incorporate variable-intensity controls at personal workplaces and reading stations to allow for the wide individual differences in the level of lighting needed to optimize comfort and performance. Special illumination on roads (“brightways”), active EO driving aids, and infrared sensors could be indicated on regional maps and used by drivers of all ages to negotiate the challenging nighttime environment more effectively. At the same time, overlighting can lead to problems. For example, drivers who stop at overlit service stations may be temporarily blinded when they reenter the roadway. The use of certain patterns and textures in carpeting, tile, and other architectural and building materials can greatly diminish depth perception at stairs and landings and contribute to an increased rate of fall-related accidents among older adults.
References
1. H. Leibowitz et al., “The Framingham Eye Study Monograph: An Ophthalmological and Epidemiological Study of Cataract, Glaucoma, Diabetic Retinopathy, Macular Degeneration, and Visual Acuity in a General Population of 2631 Adults, 1973–1975,” Surv. Ophthalmol., pp. 335–610, May-June 24, 1980 (Suppl.).
2. J. Fozard et al., Sensory and Perceptual Considerations in Designing Environments for the Elderly, from http://www.homemods.org/library/life-span/sensory.html, 2003.
3. www.nei.hih.gov/amd/areds_photos.htm, 2003.
OPTICAL FIELDS OF VIEW
1. The field of view of a Homo sapiens approximates an ellipse 125° high and 150° wide1 to 135 × 200°.2 However, only a small portion of the field is in an area of acute vision.
2. The monocular visual field is approximately 160° (w) by 175° (h).
3. The total visual field is approximately 200° (w) by 135° (h).
4. The region of binocular vision, at best, is 120° (w) by 135° (h).
Discussion
The portion of acute vision is determined by the fovea, which subtends only a few degrees of the field. The density of cones is very high in the fovea. Rods dominate peripheral vision beyond a few degrees off axis. The outside portion of the field of view is used for orientation. To prove to yourself that your field of view for acute vision is narrow, you can do this trick. Pick a word on this page and stare at it. See how far you can read without moving your eye. This might take a little practice, as your natural response is to step-stare your eye while reading. However, you can notice that you have the resolution to read only a few words before you need to move your eye. Many people are surprised to find out that they
are not able to read even a fraction of the width of a page, although you will be able to “see” that it is there. To expand the field of view, we have the ability to rotate our eyes (+30° to –40° vertically and ±60° horizontally).2 In addition, we are able to move our head over the following ranges: bending upward (39° to 93°), bending downward (54° to 72°), ±59° of rotation of the head so that the ear approaches the shoulder, and ±64° of axial rotation (rotation of the head while looking straight ahead).2
References
1. B. Begunov, N. Zakaznov, S. Kiryushin, and V. Kuzichev, Optical Instrumentation, Theory and Design, MIR Publishers, Moscow, pp. 185–187, 1988.
2. http://www.opticalphysics.com/vision.htm, 2003.
PUPIL SIZE The size of the human pupil may be estimated from1 D = 5 – 3 tanh ( 0.4 log L ) where
D = pupil diameter in millimeters L = luminance in candelas per square meter tanh = hyperbolic tangent function
Discussion
This rule can be used in assessing eye safety, optical augmentation, and eyepiece issues, because the size of the pupil determines the amount of energy received. Of course, the threat to the eye in a bright-light condition is that the light passing through the pupil is focused significantly, resulting in very large energy densities at the retina (focal plane). An additional correction may be needed if the field used to stimulate the eye is not the full field of view. For young adult subjects viewing a square illuminated area, the formula is corrected to be3

d = 5 – 3 tanh[0.4 log(L Xo²/40²)]

where Xo is the angular extent of the field in degrees. For more complicated viewing areas, the Xo² term is replaced by the area of the target. The “40” appears because the experiments that produced the equation in the rule used a field of 40° × 40°. This rule is useful for estimating a pupil’s size under a variety of lighting conditions. This is also a consideration for display design, because the geometry of the eye must be taken into account to avoid vignetting and to ensure full illumination as the eye moves. Figure 8.7 shows measured data compared with the predictions of the rule. Data are from Ref. 2.
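A short sketch (ours) of the pupil-size estimate; the optional field-size correction follows the modified formula above, with its 40° × 40° reference field.

```python
import math

def pupil_diameter_mm(luminance_cd_m2, field_deg=None):
    """D = 5 - 3*tanh(0.4*log10(L)); if a square field size (degrees) is given,
    L is scaled by (field/40)^2 per the corrected formula above."""
    arg = luminance_cd_m2
    if field_deg is not None:
        arg *= (field_deg / 40.0) ** 2
    return 5.0 - 3.0 * math.tanh(0.4 * math.log10(arg))

for lum in (1e-3, 1e-1, 1.0, 1e2, 1e4):   # starlight through sunlight
    print(f"{lum:8.0e} cd/m^2 -> {pupil_diameter_mm(lum):.1f} mm")
```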
References 1. W. Driscoll, Ed., The Handbook of Optics, McGraw-Hill, New York, pp. 12-10 to 12-12, 1978. 2. F. Sears, Optics, Addison-Wesley, Reading, MA, p. 134, 1958. 3. P. Barten, Contrast Sensitivity of the Human Eye and Its Effects on Image Quality, SPIE Press, Bellingham, WA, p. 31, 1999.
FIGURE 8.7 Comparison of measured and modeled pupil size as a function of illumination.
THE QUANTUM EFFICIENCY OF CONES
The quantum efficiency (QE) of cones drops by a factor of 10 between the center and edge of the field, as described by the following empirical equation:

η(e) = η [0.4/(1 + (e/7)²) + 0.48/(1 + (e/20)²) + 0.12]

where η at the right-hand side of this expression is the quantum efficiency at foveal vision, and e is the eccentricity in degrees. Figure 8.8 shows a plot of this function for the typical situation that the quantum efficiency of the cones is 3 percent in the center of the retina.
Discussion Figure 8.8 is interesting in several regards. First, it shows that the best quantum efficiency is about 3 percent. In addition, the QE shows a graceful decline as one approaches the edge of the field. This should be considered when one is involved in wide-FOV displays in bright light. The edge of the field is far less able to detect small changes in the light field. This is of particular interest because, when cones are acting as photoreceptors, the field is very bright. In addition, FOV can be adversely affected by this property of the retina when one is making the transition from photopic (bright) to dim (scotopic) conditions. During that process of dark-adaptation, one is likely to miss dim lights in the periphery.
FIGURE 8.8
Quantum efficiency as a function of eccentricity. (From Ref. 1.)
In other rules, we show the density of cones and rods as a function of position in the retina. Those results show that cones (bright-light color sensors) dominate near the center of the retina. The more sensitive rods (low-light black-and-white sensors) are much less prevalent near the center of the retina but are more so in the outer areas. Because rods are more sensitive, they are the preferred type of retinal component to use when low-light sensitivity is desired. Amateur astronomers know this and use averted vision when looking at dim objects through a telescope. That means that they do not look directly at the object in question but rather look at another place in the field of vision. This causes the image of the dim object in question to fall on rods, where the highest sensitivity exists. It should be noted that Ref. 1 defines QE in a way that takes into account the amount of retina that is exposed. The reference states, “The quantum efficiency is defined by the average number of photons causing an excitation of the photoreceptors divided by the number of photons entering the eye.”
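The falloff is easy to evaluate; the following sketch (ours) assumes the 3 percent foveal quantum efficiency quoted above.

```python
def cone_quantum_efficiency(eccentricity_deg, eta_fovea=0.03):
    """Empirical cone QE versus eccentricity from the rule above."""
    e = eccentricity_deg
    falloff = 0.4 / (1 + (e / 7.0) ** 2) + 0.48 / (1 + (e / 20.0) ** 2) + 0.12
    return eta_fovea * falloff

for e in (0, 10, 30, 60):
    print(f"{e:2d} deg: QE = {100 * cone_quantum_efficiency(e):.2f} %")
```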
Reference 1. P. Barten, Contrast Sensitivity of the Human Eye and Its Effects on Image Quality, SPIE Press, Bellingham, WA, pp. 79, 80, 1999.
RETINAL ILLUMINATION
The following equation shows the effective illumination of the retina, E, as a function of pupil size, d:

E = (πd²/4) L [1 – (d/9.7)² + (d/12.4)⁴]
E is expressed in Trolands, which are discussed below.
Discussion One would naturally expect illumination of the retina to be a simple matter described by the following equation:
E = (πd²/4) L

where
d = size of the pupil
L = luminance of the scene

This was discovered not to be the case, however. In the 1930s, Stiles and Crawford showed that rays entering the eye near the edge of the pupil are not as effective in creating illumination in the retina as those near the center of the pupil. The reader will see that there is a substantial impact at larger pupil diameters as compared with the simpler theory. This effect was first noticed in cones and is more pronounced in those light receptors, but it has since been established as a general response of both rods and cones to oblique entry of light. Figure 8.9 illustrates the functional form of the part of the equation in brackets. That is, one computes the area of the pupil for a given diameter, then multiplies by the value shown in the figure to find the reduced effective area. Trolands are a measure of retinal illumination. One Troland (Td) of retinal illumination is produced when an eye with a pupil size of 1 mm² looks at a surface whose luminance is 1 cd/m². A Troland is about 2 × 10^–3 lux if one takes into account the transmissivity of the ocular media and the angular area of the pupil seen from the retina. The transition from photopic to scotopic vision occurs somewhere between 1 and 10 Td.
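As an illustration (not from the original), the following compares the simple pupil-area estimate with the Stiles–Crawford-corrected form above:

```python
import math

def retinal_illuminance_td(luminance_cd_m2, pupil_mm, stiles_crawford=True):
    """E = (pi d^2 / 4) * L, optionally corrected by [1 - (d/9.7)^2 + (d/12.4)^4]."""
    d = pupil_mm
    e = math.pi * d ** 2 / 4.0 * luminance_cd_m2
    if stiles_crawford:
        e *= 1.0 - (d / 9.7) ** 2 + (d / 12.4) ** 4
    return e

for d in (2, 4, 6, 8):
    naive = retinal_illuminance_td(100.0, d, stiles_crawford=False)
    corrected = retinal_illuminance_td(100.0, d)
    print(f"{d} mm pupil: {naive:6.0f} Td naive, {corrected:6.0f} Td corrected")
```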
FIGURE 8.9
Impact of the Stiles–Crawford effect.
Reference 1. P. Barten, Contrast Sensitivity of the Human Eye and Its Effects on Image Quality, SPIE Press, Bellingham, WA, p. 33, 1999.
ROD DENSITY PEAKS AROUND AN ECCENTRICITY OF 30°

N_r = 12,000 [1 – 0.15/(1 + e/2.0)² – 0.85/(0.15 + 0.85/(1 – e/20)²)] cells/deg²

where
N_r = rod density in the retina
e = eccentricity (distance in degrees from the center of the retina)1
Discussion
The distribution of rod density seems to complement the density of cones. The curve below (Fig. 8.10) and the one in the “Cone Density of the Human Eye” rule (p. 141) can be compared to see this effect. Moreover, there are very few rods near the center of the retina. The maximum is about 12,000 cells/degree² at an eccentricity of about 20°. This value is about equal to the cone density in the center of the retina. It is important to note that the term eccentricity as used in the discussion of the optics of the eye does not refer to the more common usage in defining ellipses. Rather, it refers to the position of some feature of the eye, measured from its optic axis, in degrees. The angle is defined as shown in Fig. 8.11.2 See the “Cone Density of the Human Eye” rule for a discussion of how the density of rods can be exploited in low-light situations through the method of averted vision. Finally, we provide a model for the relative response of the rods as a function of wavelength.3 The equation is

V(λ) = 0.992 e^(–321.9(λ – 0.503)²)

where wavelength is expressed in microns. This is illustrated in Fig. 8.12. Rods are active in scotopic vision after dark adaptation has occurred. This occurs for luminances lower than 0.01 cd/m² for 10 min or more.
FIGURE 8.10
Rod density as a function of eccentricity (degrees).
FIGURE 8.11
Features of the structure of the eye. (Adapted from an image in Ref. 4.)
FIGURE 8.12
Scotopic response of the eye.
In between scotopic and photopic vision, both rods and cones participate. This range of light levels (0.01 to about 3 cd/m²) is called mesopic.
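The two fits above are straightforward to evaluate. The sketch below is ours; it follows the reading of the rod-density expression given in this rule and is intended only for modest eccentricities.

```python
import math

def rod_density_cells_per_deg2(e_deg):
    """Rod density versus eccentricity: zero at the fovea, peaking near
    12,000 cells/deg^2 around 20 degrees."""
    term1 = 0.15 / (1 + e_deg / 2.0) ** 2
    inner = (1 - e_deg / 20.0) ** 2
    term2 = 0.0 if inner == 0.0 else 0.85 / (0.15 + 0.85 / inner)
    return 12000.0 * (1 - term1 - term2)

def scotopic_response(wavelength_um):
    """Relative rod (scotopic) spectral response, peaking near 0.503 um."""
    return 0.992 * math.exp(-321.9 * (wavelength_um - 0.503) ** 2)

print(round(rod_density_cells_per_deg2(20)))   # near the ~12,000 cells/deg^2 peak
print(round(scotopic_response(0.507), 3))      # close to the 0.992 peak value
```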
References 1. P. Barten, Contrast Sensitivity of the Human Eye and Its Effects on Image Quality, SPIE Press, Bellingham, WA, p. 72, 1999. 2. A. Bradley and L. Thibos, Modeling Off-Axis Vision, available at http://research.opt.indiana.edu/Library/ModelOffAxisI/ModelOffAxisI.html, 2002. 3. J. Palmer, Radiometry and Photometry FAQ, http://www.optics.arizona.edu/Palmer/rpfaq/ rpfaq.htm, 2003. 4. National Institutes of Health, “Vision—A School Program for Grades 4–8,” http:// www.nei.nih.gov, 2003.
SIMPLIFIED OPTICS TRANSFER FUNCTIONS FOR THE COMPONENTS OF THE EYE
The optical transfer function for the eye can be written as

H_optics = exp[–43.69 (p/M)^(i_o) / f_o]

where
p = radial spatial frequency in cycles per milliradian
M = imaging system’s magnification
f_o = e^(3.663 – 0.0216 D_p² log D_p)
D_p = pupil diameter in millimeters
i_o = 0.7155 + 0.277/D_p

Discussion
The retina transfer function is

H_retina = exp[–0.375 (p/M)^1.21]

The transfer function resulting from tremor (high-frequency oscillation of the eye) is

H_tremor = exp[–0.444 (p/M)²]
where
H_optics = eye’s optical transfer function resulting from the eye’s optics
H_retina = eye’s optical transfer function resulting from the eye’s retina
H_tremor = eye’s optical transfer function resulting from the eye’s tremor

The magnification depends on display size and distance from the display. Magnification is the apparent size of the object on the display divided by its apparent size to the unaided eye. In many systems,
the display subtends about 20° at the eye. For those systems, a sensor with a 2° FOV would have a magnification of 10 (20° for display divided by 2° for the sensor).
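For quick budgeting, the three components can be multiplied together. The sketch below is ours; it assumes a base-10 logarithm in the f_o expression and uses illustrative parameter values.

```python
import math

def eye_mtf(p_cyc_per_mrad, pupil_mm=3.0, magnification=1.0):
    """Product of the optics, retina, and tremor transfer functions above."""
    x = p_cyc_per_mrad / magnification
    fo = math.exp(3.663 - 0.0216 * pupil_mm ** 2 * math.log10(pupil_mm))
    io = 0.7155 + 0.277 / pupil_mm
    h_optics = math.exp(-43.69 * x ** io / fo)
    h_retina = math.exp(-0.375 * x ** 1.21)
    h_tremor = math.exp(-0.444 * x ** 2)
    return h_optics * h_retina * h_tremor

for p in (0.25, 0.5, 1.0):   # cycles per milliradian
    print(f"{p:4.2f} cyc/mrad: MTF = {eye_mtf(p):.3f}")
```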
Reference
1. R. Vollmerhausen and R. Driggers, Analysis of Sampled Imaging Systems, SPIE Press, Bellingham, WA, pp. 37–39, 2000.
STEREOGRAPH DISTANCE Stereographs are said to offer the most natural 3-D appearance when the distance to the object is between 30 and 100 times the stereo baseline length.
Discussion
This range of ratios defines, at least in part, the separation between the locations at which images are taken to produce the effect of stereo vision. Use this rule in the following way: measure or estimate the distance to the target (e.g., 100 km to a mountain range) and divide by 30 to 100. This defines the distance between the locations at which the two images should be taken to form the stereo image. In this example, the images should be taken 1 to 2 km apart. The longer this distance (also called the stereo baseline length), the greater the perception of 3-D when the stereo pair is viewed. The focus dial on a camera (for distances up to about 15 m) will facilitate determining the distance. For longer distances, you must use other means, such as maps or (if you can afford it) a laser rangefinder. It is interesting to note that mountains are usually cone shaped. This allows a stereo effect to be seen even if the baseline suggested above is not achieved. This probably happens because the nearest part of a mountain is not 100 km away but quite a bit closer than the peak, so moving the camera a few hundred yards succeeds in creating a stereo effect for the nearest parts, which are the most prominent in the scene. To make a stereo pair of the Moon (photographic distance about 380,000 km), it is easy to move between 3,800 and 7,700 km between the two shots. All you have to do is wait between two and six hours at the same spot to make a stereo pair using the Earth’s rotation. It has also been shown that the use of two eyes to view a scene, as occurs in stereo vision, increases the effective collecting area of light, increasing the sensitivity as compared with monocular vision.2 The increase is, as might be expected, a factor of √2, because the “noise” in each eye is independent. This is a neat rule in itself and important for viewing targets with monoculars versus binoculars.
References 1. http://www.nikon.co.jp/main/eng/photo_world/kumon/12e.htm, 2003. 2. P. Barten, Contrast Sensitivity of the Human Eye and Its Effects on Image Quality, SPIE Press, Bellingham, WA, p. 38, 1999.
SUPERPOSITION OF COLORS A beam of red light overlaid with a beam of green light causes the eye/brain to sense yellow, even though there is no “yellow” in the illumination.1
Discussion This remarkable result introduces the whole wide range of theories of color vision. On one hand, we have the trichromatic theory of color, which addresses the properties of the color
sensitivity of photoreceptors in the retina. The other currently popular theory is opponent processes, which addresses the neural mechanisms of color interpretation. Although the whole range of ideas included in these theories cannot be addressed here, we present the following summary. Trichromatic is a rather old theory, dating back to the eighteenth century.4 A great deal of this theory derives from the observed ability of pigments to form any color by appropriate mixing of the three primary colors. It was easy to imagine that the color sensors of the eye follow the same process. This has been confirmed, but it has also been found that the color receptors of the eye are not equal in sensitivity, with the “blue” sensors least sensitive (about 1/30 of the green and red sensors). This shows that the brain participates in the color sensing process by ignoring the absolute value of the signals from each sensor type; rather, it uses the ratio of signals to detect colors. Opponent processing is a more recent theory4 that attempts to explain some vision phenomena that are not easily explained by the trichromatic theory. Ewald Hering noted that certain pairs of colors are never seen, such as reddish greens and yellowish blues. He also noted that staring at a red color for a time, then at a white space, will cause one to see a green spot. This led Hering to theorize that the signals from the chromatic cones led to processing in the brain such that it detected contrast of red versus green and yellow versus blue, as well as detection of black versus white. Later measurements by Leo Hurvich and Dorothea Jameson led to a psychophysical evaluation of the opponent processing nature of color vision. The theory now stands alongside the trichromatic theory in explaining color vision. Because of quantitative data provided by psychophysics, and direct neurophysiological measurements provided by electrophysiology, opponent processing is no longer questioned. Reference 5 indicates that the International Committee on Illumination (Commission Internationale de l’Éclairage, or CIE) has determined that the human visual system is not linear in its response to brightness but is proportional to the one-third power of the illumination.

L* = 116(Y/Y_n)^(1/3) – 16   if Y/Y_n > 0.008856
L* = 903.3(Y/Y_n)            otherwise
L* is the lightness (perceived brightness), and Y_n is the white reference luminance. For Y/Y_n below 0.008856, conditions are too dark for the cube-root form to apply, and the linear form is used. Reference 5 also points out, “The intensity I of the light emitted by a CRT depends nonlinearly on the voltage V that is fed to the electron gun. The relation is I = V^γ where the voltage V is assumed to be in the range [0, 1].” The term γ is in the range 2 to 3, so the one-third power in the equation above is effectively canceled, meaning that what is perceived by the eye is proportional to the voltage fed to the CRT. The color sensed is dependent on the specific colors that are overlaid, but the above statement is the common experience of most people, even if they don’t realize that it is happening. This rule emphasizes the fact that human vision is complex and has some unexpected characteristics. Who could have guessed that the brain and eye, working together, would synthesize the same color that we get when we mix pigments of the two colors? The following quote assigns this capability to a genetic gift from our ancestors.

For example, the human eye and its controlling software implicitly embody the false theory that yellow light consists of a mixture of red and green light (in the sense that yellow light gives us the same sensation as a mixture of red light and green light does).6
Reference 6 goes on to point out that all of the colors mentioned above are already mixtures of a range of frequencies and “cannot be created by mixing light of other frequencies.”
The fact that the combination of red and green light appears yellow is a property of our eyes. An even more fascinating extension of this rule is pointed out by Crick.2 He mentions that the color perception process works even if the colors mentioned in the rule are shown to the observer, one after the other, as short flashes. That is, they don’t have to be concurrent. If the second color is not shown, then the observer sees only the first color. Therefore, we must conclude that the brain processes all of the information obtained in a period of time before concluding what has been seen. One of the authors (Friedman) has had the opportunity to try to determine if the eye can sense color when exposed to extremely short light pulses. In the experiment, a nitrogen laser-pumped dye cell produced 5-ns pulses of a color not known to the subject. Single pulses were then imposed on a nonfluorescent target, which was observed by the subject. All of the subjects could reliably tell the color of the pulse, even though the conventional wisdom is that humans are limited in their ability to sense short light pulses to about 1/30 of a second.
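For reference, the CIE lightness relation and the CRT gamma model quoted in this rule can be evaluated directly; the following sketch and its names are ours.

```python
def cie_lightness(Y, Y_n=1.0):
    """CIE L* (perceived lightness) from relative luminance Y/Y_n."""
    ratio = Y / Y_n
    if ratio > 0.008856:
        return 116.0 * ratio ** (1.0 / 3.0) - 16.0
    return 903.3 * ratio

def crt_intensity(voltage, gamma=2.5):
    """Simple CRT model: emitted intensity ~ V**gamma for V in [0, 1]."""
    return voltage ** gamma

for y in (0.005, 0.18, 0.5, 1.0):
    print(f"Y/Yn = {y:5.3f} -> L* = {cie_lightness(y):6.1f}")
print(crt_intensity(0.5))   # relative intensity at half drive voltage
```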
References
1. E. Hecht, Optics, Addison Wesley, Reading, MA, p. 73, 1990.
2. F. Crick, The Astonishing Hypothesis, Scribners, New York, p. 72, 1994.
3. K. Houser, “Thinking Photometrically, Part I,” Lightfair International, Las Vegas, NV, 2001.
4. http://www.yorku.ca/eye/trichrom.htm, 2003.
5. D. Salomon, http://www.ecs.csun.edu/~dxs/DC2advertis/AppenH.pdf, 2002.
6. D. Deutsch, The Fabric of Reality, Penguin Books, New York, 1997.
VISION CREATING A FIELD OF VIEW Humans search a field of regard in a step-stare manner. An approximately 5° circular field is searched in about 3/10 sec. Moreover, in 1 minute, the eye can fixate on as many as 120 observations, which allows 0.2 to 0.3 sec to fixate on each.
Discussion
Humans tend to step-stare across a scene to find an object. The time it takes is a function of the size of the field and the difficulty in finding the object. The above rule allows one to calculate the time it takes a person to search a field of view or display screen that does not contain complex or low-contrast images. The results are valuable in display design, symbol design, and other image processing tasks. As an example, consider a case in which the clear field of vision is 5°. It is then found that the time to search the field is

T = 0.3 (FOR°_x)(FOR°_y) / 25

where
T = time in seconds to search a field FOR°x = extent of the area to the searched (in degrees) in one direction FOR°y = extent of the area to the searched (in degrees) in the other direction
So, a display with 16° × 16° size would take about 3 sec to search.
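A one-line version of this estimate (ours), with the 5° clear field and 0.3-s glimpse as defaults:

```python
def search_time_s(for_x_deg, for_y_deg, clear_field_deg=5.0, glimpse_s=0.3):
    """Step-stare search time: one ~0.3-s glimpse per 5-degree-square patch."""
    return glimpse_s * (for_x_deg * for_y_deg) / clear_field_deg ** 2

print(search_time_s(16, 16))   # ~3 s for a 16 x 16 degree display
print(search_time_s(40, 30))   # ~14 s for a larger field of regard
```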
Reference 1. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 121, 1974, available at http://www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003.
Chapter 9
Lasers
Revolutions in optics are infrequent. Most of this discipline’s history has involved slow evolution in understanding and technological improvements. Lasers, however, have caused a revolution in several applications. First, they have provided impetus for advancements in many areas of EO. They provide unique diagnostic capabilities essential for producing high-quality systems, evaluating effluents from exhausts and aerosols, diagnosing optical defects in the human eye, and thousands of other applications. They provide EO systems with a high-quality, stable alignment reference source and allow lens and mirror quality testing using laser interferometry. They have led to the development of several applications that would be unthinkable using conventional light sources. Fields such as optical communications and active tracking would be impossible without the unique features of lasers. The spectral purity and compactness of lasers have allowed a number of medical advancements, including those used in eye surgery, elimination of damaged tissue (e.g., gall bladders), and cleaning of clogged arteries. Finally, lasers, along with television, have been one of the few EO advances to become a part of the lexicon of the average citizen. The invention of lasers in the early 1960s led immediately to the idea of “death ray beams” such as Buck Rogers would have used. Their enormous brightness and spectral purity have changed many parts of the electro-optics environment and have provided, after about 30 years of development, new advancements in consumer electronics such as CDs, laser printers, and high-performance semiconductors that can be created only with high-performance laser lithography. As a result of lasers’ widespread application, researchers have invested heavily in understanding the characteristics of the beams they produce and the interaction of those beams with various types of targets and detectors. A full understanding of the application of lasers requires new insight into the way electromagnetic waves propagate in the atmosphere. The close relationship of laser light propagation and the medium in which it travels requires this chapter to include a mixture of rules. As a result, the reader will find rules pertaining to the properties of the beams as they might propagate in a vacuum as well as how they interact with the propagation medium. The laser field also has been the source of many interesting stories about how science and industry do business. For instance, when the first optical laser was developed, a press conference was held, pictures were taken, stories were written, and predictions were made. Then, researchers at other laboratories attempted to reproduce the results, using the photos
in the newspaper, which conveniently included a ruler so the scale of the objects could be determined. Try as they might, the copycats could not make their versions work. Finally, they approached the researchers who were successful and asked what trick they had performed to make their laser work properly. The answer was simple; they never did make the laser shown in the picture work, because the rod was too big to get the crystal sufficiently clean. The real laser was much smaller and was not pictured, because the press said it was too tiny for the photographs. The history of laser development is also interesting. It can be argued that Einstein was the first really to solve a critical problem that led to their invention when he developed his concept of stimulated emission of radiation. The controversy revolving around the actual invention of the laser has never been fully resolved. Bell Telephone Laboratories claims that the invention occurred under their auspices when Schawlow and Townes predicted the conditions under which coherent light might be produced. They had already demonstrated stimulated emission of microwave radiation in 1954 using ammonia as a medium. On the other hand, it is widely known that Theodore Maiman was the first to actually build a visible light system. His design used doped synthetic ruby and was created while he worked at the Hughes Research Laboratories. His success came in 1960. The theoretical work done at Bell Telephone Laboratories was published in 1958. Meanwhile, it has been claimed that Gordon Gould was actually the first to create a visible light laser, while a graduate student working under direction of Townes at Columbia University. His work was conducted in 1958 and 1959. It was not until 1977 that he was finally awarded a patent for his laser work. By this time, Schawlow and Townes had already been awarded a patent, although it referred to masers, the microwave version of the technology. In the end, Townes and Schawlow will be remembered longest, as both won the Nobel prize (but not in the same year) for developments related to the laser. The interested reader will find nearly 1 million articles on the World Wide Web that relate to the history of the laser. Perhaps most interesting is the collection of information residing on the patent office web site.1 In addition, Maiman (The Laser Odyssey) and Townes (How the Laser Happened) have written their version of the saga. Nick Taylor (Laser: The Inventor, The Nobel Laureate, and the Thirty-Year Patent War) and Scott McPartland (Gordon Gould: Laser Man) have also told the story in book form. The interested reader has access to a wide variety of texts that describe both the physics and applications of lasers. Siegman provides the current (although now over 10 years old) standard of excellence for a sophisticated presentation of laser physics and design approaches. It is probably beyond the skills of any but the most advanced readers, but it is a great resource and should be accessible to everyone who uses or expects to use lasers. Less complex texts are also available in most college bookstores. For the entry-level student, the laser industry can be a great resource. For example, optical manufacturer Melles Griot includes a great deal of useful information in its product catalog. In addition, new laser users will want to look at various magazines such as Laser Focus World, because they complement the much more complex presentations found in journals such as Applied Optics and IEEE Quantum Electronics.
Optics books should not be overlooked, because most provide a pretty thorough description of laser operation and applications and provide additional references for consideration.
Reference 1. www.uspto.gov, 2003.
APERTURE SIZE FOR LASER BEAMS
When a Gaussian beam encounters a circular aperture, the fraction of the power passing through is equal to

1 – exp(–2a²/w²)

where
a = radius of the aperture
w = radial distance from the beam’s center to the point where the beam intensity is 0.135 of the intensity at the center of the beam
Discussion
The beam intensity as a function of radius is

I(r) = I_o exp(–2r²/w²)

where
w = as defined above
r = radius at some point in the beam

The power as a function of size of the aperture is computed from

P(a)/P_o = (1/P_o) ∫_0^a I(r) 2πr dr = 1 – e^(–2a²/w²)
The 0.135 comes from the fact that at the 1/e² point of the beam, the intensity is down to 0.135 of the intensity in the center of the beam. Thus, we see that an aperture of 3w transmits 99 percent of the beam. This rule applies to beams that are characterized as Gaussian in radial intensity pattern. While this is nearly true of aberration-free beams produced by lasers, there are some minor approximations that must be accommodated for real beams. Of course, as in any system in which an electromagnetic wave encounters an aperture, diffraction will occur. The result is that, in the far field of the aperture, one can expect to see fringes, rings, and other artifacts of diffraction superimposed on the geometric optics result of a Gaussian beam with the edges clipped off.
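The fraction is easy to compute; the following sketch (ours) tabulates it for a few aperture sizes.

```python
import math

def gaussian_aperture_transmission(aperture_radius, beam_radius_1e2):
    """Fraction of Gaussian beam power passing a centered circular aperture:
    1 - exp(-2 a^2 / w^2), with w the 1/e^2 intensity radius."""
    return 1.0 - math.exp(-2.0 * (aperture_radius / beam_radius_1e2) ** 2)

w = 1.0   # 1/e^2 radius, arbitrary units
for a in (0.5 * w, 1.0 * w, 1.5 * w, 2.0 * w):
    frac = gaussian_aperture_transmission(a, w)
    print(f"a = {a:.1f} w  ->  {100 * frac:5.1f} % transmitted")
```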
References 1. H. Weichel, Laser System Design, SPIE Course Notes, SPIE Press, Bellingham, WA, p. 38, 1988. 2. A. Siegman, Lasers, University Science Books, Mill Valley, CA, p. 666, 1986.
ATMOSPHERIC ABSORPTION OF A 10.6-µM LASER The absorption coefficient (in dB/km) of Beer’s law can be approximated for a 10.6-µm laser given the following conditions:
1. Clear: 1.084 × 10^–5 p(P + 193p)(296/T)^5.25 + 625(296/T)^1.5 × 10^(–970/T) + 1.4/V
2. Rain: 1.9 R^0.63
3. Snow: 2 S^0.75
4. Fog: 1.7/V^1.5
5. Dust: 5/V

where
V = visibility in kilometers
S = snowfall rate in millimeters per hour
R = rainfall rate in millimeters per hour
T = temperature in K
P = atmospheric pressure in millibars
p = partial pressure of water vapor in millibars
Discussion These are estimates for the atmospheric extinction of Beer’s law, so they must be used with Beer’s law only. Note that the units are decibels per kilometer (dB/km), so a conversion must be employed if the answer is desired in units of percent transmission or decimal notation. Clearly, these simple equations cannot do justice to the real behavior of the atmosphere. Furthermore, it must be remembered that the specific numbers used here apply only to the 10.6-µm band. Therefore, any extrapolation to other bands should be done with extreme caution. This type of rule provides a quick estimate of atmospheric transmission that can be helpful during the planning and execution of field experiments. Atmospheric transmission of any wavelength usually can be obtained in adequate detail by using codes like LOWTRAN and MODTRAN, but HITRAN must be used for laser lines or other high-resolution applications in which less than 1 wave number must be resolved. However, we all desire simple rules that can help us deal with complex issues easily. This rule provides a rough idea of the transmission of the atmosphere in the very important 10.6-µm band produced by a CO2 laser. It provides an estimate for a variety of atmospheric conditions. Because these results are empirical in nature, there is little to say about the physics that causes them to be true. However, their validity has been confirmed in field experiments.
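A hedged sketch of these fits follows (the grouping of the clear-air terms reflects the reconstruction above; the function and argument names are ours). Combined with Beer's law, it gives a quick path-transmission estimate.

```python
def alpha_10p6um_db_per_km(condition, T=296.0, P=1013.0, p_h2o=10.0,
                           visibility_km=23.0, rate_mm_hr=0.0):
    """Empirical 10.6-um extinction estimates (dB/km) from the list above."""
    if condition == "clear":
        water = 1.084e-5 * p_h2o * (P + 193.0 * p_h2o) * (296.0 / T) ** 5.25
        co2 = 625.0 * (296.0 / T) ** 1.5 * 10.0 ** (-970.0 / T)
        return water + co2 + 1.4 / visibility_km
    if condition == "rain":
        return 1.9 * rate_mm_hr ** 0.63
    if condition == "snow":
        return 2.0 * rate_mm_hr ** 0.75
    if condition == "fog":
        return 1.7 / visibility_km ** 1.5
    if condition == "dust":
        return 5.0 / visibility_km
    raise ValueError(f"unknown condition: {condition}")

def beer_transmission(alpha_db_per_km, range_km):
    """Beer's-law transmission for an extinction coefficient given in dB/km."""
    return 10.0 ** (-alpha_db_per_km * range_km / 10.0)

alpha = alpha_10p6um_db_per_km("clear")
print(round(alpha, 2), round(beer_transmission(alpha, 5.0), 2))
```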
Reference 1. G. Kamerman, “Laser Radar,” in Vol. 6, Active Electro-Optical Systems, C. Fox, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 26, 1993.
CROSS SECTION OF A RETRO-REFLECTOR The laser radar cross section of a cube corner retro-reflector exposed to an illuminating beam and viewed from the position of the source is
σ ≈ area²/(5λ²) ≈ D⁴/(5λ²)

Discussion
A retro-reflector will return a beam of light to the location of the illuminator. The diffraction-limited return beam half angle from a retro-reflector of diameter D is 1.22(λ/D), so the beam fills an area of

π(1.22 λ/D)² R²

for a circular retro at a distance R. The solid angle of the beam emitted by the retro is defined as the ratio of the area of its beam and the range squared, so the solid angle is approximately 4.67(λ²/area). Cross section is defined as the area of the emitter divided by the solid angle of the beam it emits. Thus,

Cross section = area / (4.67λ²/area) = area²/(4.67λ²)

This rule applies to perfect “cube corner” retro-reflectors only. Occasionally, such devices will have a cross section that is slightly less as a result of less-than-unity reflection and less-than-perfect tolerances on the angles of the mirrors. This rule gives an immediate estimate of the detectability of an object equipped with a retro-reflector. It also allows the reflector to be sized so that detection at an appropriate range and for a particular laser power can be estimated. Additionally, targets (even uncooperative ones) frequently have structures that approximate cube corners, giving them a much larger signature than would be assumed otherwise. It is important that the user of this equation realize that the total efficiency of the retro process must be considered. While the retro has very high cross section, it is typically illuminated only by a small section of the illuminating beam. That part of the beam that does strike the retro will be returned with high efficiency. The rest, of course, is lost. Retro-reflectors are often used in tracking and pointing experiments to assure that the target is detected and the experiments can be carried out reliably. The reason is clear. By providing even the smallest retro-reflector, the target’s signature is large and can be seen at great distances, even with modest-power laser systems. As an example, consider a cube of 0.1 m edge dimension exposed to visible (0.5 µm) light. The result is 80,000,000 m² (80 km²), which is rather huge. Thus, the presence of a retro-reflector makes a target behave as if it were many orders of magnitude larger than it actually is. This also explains the phenomenon of optical augmentation. An active system observing another optical system will have the ability to exploit the optical gain from the observed system and will receive a bright return. Of course, this works best when the geometry of the illuminated system has a retro-reflection property. The human eye is just such a device. Illuminated by an invisible infrared laser, a subject’s location can be revealed clandestinely. If the cube corner is illuminated by a moving laser, there is a limit to how big the cube corner can be and still be detected.1 If the retro is too big, its return beam is too small to illuminate the moving receiver. For the signal to be seen at the illuminator, the following relationship must hold:
D_r ≤ 0.61 λ c / V_t

where
D_r = diameter of the retro (a circular retro assumed)
λ = laser wavelength
c = speed of light in the media
V_t = transverse speed of transmitter and receiver

This must be one of the few situations in which small optics are preferred. The reader is cautioned, however, to do a complete study of the proposed engagement, because the small retro will catch less of the initial beam than a larger one, requiring adjustment in the illuminator power. Finally, an unfortunate term has crept into the literature concerning cube corners. The expression corner cube is much more common but is misleading. A retro-reflector can be formed from the corner of a cube over restricted angles. The term corner cube does not describe the geometry of any retro-reflector currently in use, although it is frequently used.
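A small sketch (ours) of both results, assuming a circular retro aperture:

```python
import math

def retro_cross_section_m2(diameter_m, wavelength_m):
    """Laser radar cross section of a cube-corner retro, ~ area^2 / (5 lambda^2)."""
    area = math.pi * diameter_m ** 2 / 4.0
    return area ** 2 / (5.0 * wavelength_m ** 2)

def max_retro_diameter_m(wavelength_m, transverse_speed_m_s, c=3.0e8):
    """Largest retro whose return beam still covers a moving illuminator."""
    return 0.61 * wavelength_m * c / transverse_speed_m_s

print(f"{retro_cross_section_m2(0.1, 0.5e-6):.1e} m^2")   # tens of millions of m^2
print(f"{max_retro_diameter_m(0.5e-6, 100.0):.2f} m")      # for 100 m/s transverse speed
```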
Reference 1. C. Cooke, J. Cernius, and A. LaRocca, “Ranging, Communications, and Simulation Systems,” Chap. 23, The Infrared Handbook, W. Wolfe and G, Zissis, Eds., ERIM, Ann Arbor, MI, pp. 23–29, 1978.
GAUSSIAN BEAM RADIUS RELATIONSHIPS
Hobbs1 gives

1. r_1/e² = 1/(2NA) or λ/(πNA)
2. r_99 = 1.517 r_1/e²
3. r_3dB = 0.693 r_1/e²

where
r_1/e² = radius at which the intensity has fallen to 1/e² of its value at the center of the beam waist
NA = numerical aperture = 1/(2 f/#)
λ = wavelength
r_99 = 99 percent of the power is included in a circle with this radius
r_3dB = 3-dB power density radius
Discussion Laser spots emitted by low-numerical-aperture optics tend to be Gaussian. The first equation is a simplification of λ/(πNA). The factor of 2 comes from π/λ, which is 2 for a wavelength of 1.557 µm. Gaussian beams tend to be tighter than imaging spots with Airy disk patterns. Remember that the radius is in the same units as the wavelength, so if you use nanometers for the wavelength, the radius will be in nanometers as well.
Hobbs points out, “The Gaussian beam is a paraxial animal. It is hard to make a good one of high NA. The extreme smoothness of the Gaussian beam makes it exquisitely sensitive to vignetting (which of course becomes inevitable as sinθ approaches 1), and the slowly varying envelope approximation itself breaks down as the numerical aperture increases.” Just to prove that nobody agrees on anything, we quote from Ref. 2, in which the authors show that the diameter of a focused Gaussian spot is

d = [(Fλ/(πd_0))² + (1.69 Fλ/d_0)²]^(1/2)
where
F = focal length of the lens
d_0 = diameter of the beam as it encounters the lens

Some manipulation of this equation shows a result of d = 1.72 λf/#, so r = 0.86 λf/#. We can compare this result with those presented in the rule as follows:

2 r_1/e² = d = 2λ/(πNA) = 1.273 λf/#
2 r_99 = d = 1.517 (2 r_1/e²) ≈ 1.93 λf/#
2 r_3dB = d = 0.88 λf/#

Finally, it is often useful to have a good Gaussian approximation to the diffraction spot.3 This can be found by matching the two curves at the 1/e point of the Gaussian. In that case, the appropriate value of sigma of the Gaussian is

0.431 λf/# for a circular aperture
0.358 λf/# for a square aperture

where f/# = ratio of focal length to aperture diameter. This final set of equations can be used in the computation of encircled energy or other characteristics of the diffracted field. This approximation allows results that are correct to within about 10 percent. For example, in quadrant cell star trackers, the blur spot falls on four detectors. To compute the photon flux on each detector as a function of the position of the center of the blur, a series of integrals over the spatial extent of the detectors must be performed. This is not convenient using the exact distribution of the radiation, which includes Bessel functions. This rule provides results that are good enough in most situations. When the value of sigma suggested above is used to approximate a diffraction spot, one gets

P(r) = [1/(2πσ²)] e^(–r²/2σ²)
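The relationships above reduce to a few lines of arithmetic; the sketch below (ours) reports the radii for a given wavelength and f/#.

```python
import math

def gaussian_spot_radii(wavelength, f_number):
    """1/e^2, 99-percent, and 3-dB radii from the rule above (same units as
    the wavelength)."""
    na = 1.0 / (2.0 * f_number)
    r_1e2 = wavelength / (math.pi * na)
    return {"1/e2": r_1e2, "99%": 1.517 * r_1e2, "3dB": 0.693 * r_1e2}

def gaussian_sigma_for_diffraction(wavelength, f_number, circular=True):
    """Sigma of the Gaussian matched to a diffraction spot at the 1/e point."""
    return (0.431 if circular else 0.358) * wavelength * f_number

print(gaussian_spot_radii(1.55, 5.0))             # wavelength in um -> radii in um
print(gaussian_sigma_for_diffraction(1.55, 5.0))
```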
References 1. P. Hobbs, Building Electro-Optical Systems: Making It All Work, John Wiley & Sons, New York, pp. 12–13, 2000.
2. J. Yin et al., “Observation and Discrimination of the Mode Patterns in a Micron-Size Hollow Optical Fiber and Its Synthetic Measurements: Far-Field Micro-imaging Technique,” Optical Engineering, 37(8), pp. 2277–2283, August 1998. 3. G. Cao and X. Yu, “Accuracy Analysis of a Hartmann-Shack Wavefront Sensor Operated with a Faint Object,” Optical Engineering, 33(7), p. 233, 1994.
INCREASED REQUIREMENT FOR RANGEFINDER SNR TO OVERCOME ATMOSPHERIC EFFECTS
In weak atmospheric turbulence, the required SNR of a rangefinder using laser pulses must be increased to overcome the added scintillation. The increase is about

E_SNR = exp[√2 σ_I erfc⁻¹(2P_d – 1) + (1/2)σ_I²]

where
E_SNR = required increase to the SNR (required SNR in turbulence = E_SNR multiplied by the required SNR in calm air)
P_d = probability of detection requirement
erfc⁻¹ = inverse of the complementary error function, which is defined in any number of books on advanced engineering, mathematics, or statistics
σ_I = standard deviation of the log of the intensity
Discussion
Propagation through atmospheric turbulence broadens a laser beam. In addition, it causes the center of the beam to meander. This combination of broadening and beam wander causes the energy to be distributed over a larger angular area than when the atmospheric effect is not present. This means that less energy is put on the target, thus reducing the signal reflected to the sensor. Therefore, the SNR will be smaller in turbulent conditions than in calm conditions. To achieve a fixed level of probability of detection and probability of false alarm, the SNR must be increased. This rule results from an analysis of the ability of an active EO system to generate the necessary SNR, taking into account the fluctuating intensity at the target and the receiver resulting from turbulence along the path. It provides a simple explanation of the effect of turbulence on laser systems. System engineers will find it useful in assessing the impact of atmospheric effects and estimating system performance. This rule does not include the attenuation effects that also occur in the atmosphere as a result of particulate scattering and absorption. Those effects are rather easily included in the calculation. Attenuation properties of the atmosphere are discussed in the rule, “Atmospheric Attenuation or Beer’s Law” (p. 47). The following examples show how this rule can easily be used to determine the impact of the atmosphere on the performance of any type of active system. For wavelengths associated with doubled YAG laser light (532 nm), near the ground C_n² is about 10^–14 at night and 1.7 × 10^–14 during the day. For light turbulence,

σ_I = 2√0.31 (2π/λ)^(7/12) L^(11/12) C_n

where
L = distance over which the observations are made
λ = wavelength of light
Lasers
173
Thus, for a night condition, –3 11 ⁄ 12
σI = 1.5 × 10 L
for C n = 10
–7
Using the above equation, we find that for a path length of 600 m, the signal intensity has a variation with a standard deviation of about 53 percent. To achieve a probability of detection of 0.99, we complete the remainder of the terms in the equation, –1
erfc [ 2 × ( 0.99 – 1 ) ] ≈ 0.165 Using the equation for this rule, we find that the enhancement in SNR must be 1.3. That is, the radiometrics of the system must be considered to ensure that the combination of laser power and receiver sensitivity leads to a value of SNR 30 percent larger than would be needed to conduct the same experiment in turbulence-free air. The prior calculations were for nighttime conditions. During the day, C 2n is about 1.7 × 10–14, and the enhancement requirement jumps to 48 percent. This required increase is reduced to 15 percent for night operations if the wavelength of the laser is 1.06 µm. This is because σI depends on the inverse of the wavelength.
Reference 1. R. Byren, “Laser Rangefinders,” in Vol. 6, Active Electro-Optical Systems, C. Fox, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 103, 1993.
LASER BEAM DIVERGENCE A laser beam’s full divergence angle is approximately the wavelength divided by the diameter of the transmitter aperture, or λ θ ≈ --d where θ = transmitter beam width (full angle) in radians λ = wavelength d = aperture diameter in the same units as the wavelength
Discussion This is a basic result drawn from diffraction theory and does not include any of the additional aberrations that occur in real systems. However, this rule is widely used to estimate the size of a laser beam that has propagated through a vacuum and is also frequently used as a first estimate even in atmospheric applications. The rule works fine in environments in which scattering is small as compared with absorption, because, in those cases, the beam shape is not affected. This rule provides quick estimations of minimum beam divergence. A more accurate presentation is provided below. In the far field, the full beam width of a Gaussian beam can be approximated by
174
Chapter Nine
2λ θ = --------πωo where ωο = Gaussian beam waist radius in meters This definition of beam width is based on the location of the 1/e points in the beam. This is one of several common ways that beam spread is defined for Gaussian laser beams. Siegman describes several, including the more conservative 99 percent criterion in which the size of the beam is defined as the area that includes 99 percent of the energy in the beam. The reader will find that this definition of beam size is consistent with the predictions of etendue as defined in the chapter on radiometry.3 The etendue for such a system is AΩ = λ2 , where A is the area of the beam at the waist, and Ω is the solid angle of the beam. As pointed out above, rules of this type must be interpreted within the context of the definition of what constitutes the beam. Furthermore, the rule applies only in the far field, 2 which is defined as area at which the range is greater than ( πwo ) ⁄ λ . This last distance is called the Rayleigh range and is equal to the distance from the waist at which the diverging beam is the same size as the waist. The beam width of a Gaussian beam can also be defined as the full width across the beam measured to the e–2 irradiance levels. Often, one will encounter the beam width defined at the full width half maximum (FWHM) points. To convert a Gaussian beam profile specified at FWHM to the equivalent 1/e2 points, multiply it by ≈1.17.
References 1. A. Siegman, Lasers, University Science Books, Mill Valley, CA, p. 56, 1986. 2. G. Kamerman, “Laser Radar,” in Vol. 6, Active Electro-Optical Systems, C. Fox, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 15 and 16, 1993. 3. A. Siegman, Lasers, University Science Books, Mill Valley, CA, p. 671, 1986.
LASER BEAM QUALITY Beam quality is defined as 1 2 1 BQ = exp --- ( 2πWFE ) = -----2 S where BQ = (unitless) the wavefront error WFE expressed in waves S = Strehl ratio (unitless) WFE = RMS wavefront error in waves for errors less than about 1/5 of a wave
Discussion The best beam focusing and collimation that can be obtained is derived from the diffraction theory for plane and Gaussian beams encountering sharp-edged apertures, as described in virtually every optics and laser book. In those analyses, it is assumed that the wavefront is ideal and that there are no tilt or higher-order aberrations in the phase front. The concept of beam quality has been developed to deal simply with the additional impact of nonuniform phase fronts in those beams. In a wide variety of applications, this definition is used to characterize the spreading that will be encountered in focused or parallel beams, beyond that associated with diffraction.
Lasers
175
There are many other definitions of beam quality, so the reader is cautioned to understand what is meant by “BQ” in a particular application. These characterizations of a laser beam are effective measures when the beam is nearly diffraction limited. A typical guide is that Strehl ratio and beam quality can be related to the RMS wavefront error (regardless of the composition of the aberrations) if the errors are less than λ/5 (some references use λ/2π as the standard). For highly aberrated beams, it may be difficult to establish the beam quality. For example, using the definition that relates beam quality to beam size, a highly aberrated beam will have an ill-defined diameter that varies with azimuthal angle, thus limiting the usefulness of the definition. On the other hand, stretching the rule to about λ/20 seems to work well, as described in Ref. 1. This rule provides a simple parameter that defines the beam spread of a laser beam. For example, the dimension of the spot of a beam will be expressed as ( λ ⁄ D )BQ rather than the ideal λ ⁄ D . This means that the spot will cover an area that is proportional to BQ2. 2 Therefore, we find that the energy density in the beam will depend on 1 ⁄ ( BQ ) . This can be an effective definition if the BQ is close to unity. For very poor beam quality, such as might result from turbulence and other atmospheric effects, the entire concept of a well defined beam becomes useless, and this definition fails to characterize the beam. BQ can also be defined in terms of the power inside a circle at the target.2 BQ =
Pideal --------------Pactual
where the powers are compared at a common radius from the center of the target. The effect of beam quality is included in the typical diffraction spreading of a beam by 2BQλ θD = ⎛ --------------⎞ ⎝ πD ⎠ where BQ = beam quality at the aperture θD = full angle beam spreading associated with diffraction Clearly, when BQ is unity, the diffraction angle is the same as described in another rule in this chapter, 2λ θD = ⎛ -------⎞ ⎝ πD⎠ Thus, we see that BQ is included as a linear term in estimating the beam spread of a laser.
References 1. M. Katzman, Ed., Laser Satellite Communications, Prentice-Hall, New York, p. 182, 1987. 2. G. Golnik, “Directed Energy Systems,” in Vol. 3, Emerging Systems and Technologies, S. Robinson, Ed., of The Infrared and Electro Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 411, 472, 1993.
LASER BEAM SCINTILLATION Variance in the irradiance from a beam caused by turbulence can be estimated as follows for the special case of a horizontal beam path:
176
Chapter Nine
2
2
σ = 4σt 2
2 7 ⁄ 6 11 ⁄ 6
where σt = 0.31C n k
L
2
2
C n = index of refraction structure constant discussed in the rule called “ C n ” 2π k = -----λ L = propagation path length in the turbulent medium λ = wavelength Combining the equations above, we get 2
2 7 ⁄ 6 11 ⁄ 6
σ = 1.24C n k
L
The standard deviation variation in irradiance is the square root of the term on the right.
Discussion This type of simplifying analysis of beam propagation in the atmosphere has been spearheaded by both the military, which is interested in laser beam propagation, and the astronomical community, which is very much concerned with the disturbance that the atmosphere imposes on light collected by terrestrial telescopes. Of course, the latter group has little use for analysis of horizontal propagation of light, but the underlying theory that results in the equations above derives from a more general theory that applies to all cases. A basic assumption used in the development of the results for the horizontal-beam case 2 is that the value of C 2n is constant over the path. This is generally not the case, as C n is the manifestation of temperature variations in the atmosphere. However, this simplifying assumption is often used for horizontal beams and has been qualitatively confirmed in many experiments. When C 2n varies along the path, a more complicated formalism must be used. The equations above apply to plane waves. Spherical waves can be characterized by the same analysis except that the first equation uses 0.124 as the multiplier rather than 0.31. In addition, there is a limit to the range of atmospheric conditions over which the rule ap2 plies. The best estimate is that the expressions above can be used if σt is not bigger than about 0.3. Use this rule to make rapid assessments of the performance of laser beam and other light transmission systems in the presence of atmospheric effects. The reference also provides the additional details necessary to deal with beam paths that are not horizontal. To 2 do so requires that C n be known or estimated as a function of altitude. Use other rules in 2 this chapter and in Chap. 3 to estimate C n as a function of altitude. We also note a related result from Ref. 2. It shows that an estimate for the edge motion for horizontal path observation is 2 2 8⁄3 3⁄5
θ = [ ( 3 ⁄ 8 )2.91k C n L
]
Reference 2 points out that this measure (which is exactly equal to the isoplanatic angle 2 for an atmosphere with a uniform C n ) is not quite right, as it is a measure of the effect of all aberrations induced by the atmosphere, whereas edge motion derives almost entirely from tilt. For most cases, however, this equation is a good place to start in estimating the angle of arrival effects.
Lasers
177
2
For example, if we use a typical C n value of 10–14, a path length of 375 m, and a wavelength of 0.5 µm, we get an irradiance variance of about 0.12, which is equivalent to a standard deviation of about 35 percent variation in the intensity at the receiver.
Reference 1. J. Accetta, “Infrared Search and Track Systems,” in Vol. 5, Passive Electro-Optical Systems, S. Campana, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 288, 1993. 2. M. Belen’kii, J. Stewart, and P. Gillespie, “Turbulence-Induced Edge Image Waviness: Theory and Experiment,” Applied Optics, 40(9), p. 1321, March 20, 2001.
LASER BEAM SPREAD The angular spread, θ, (half angle of a cone) of a beam projected along an atmospheric path is1,2 1 1 θ = --------- + ---------2 2 2 2 k a k ρ
(1)
2π where k = -----λ a = beam radius when projected The parameter ρ is the transverse coherence distance and is related to the effect that the turbulence of the atmosphere has on propagation of light. For situations in which the atmosphere is uniform in properties over the path length, we find that the coherence distance is2 3 ---
⎛ π2 ⎞ 5 -⎟ ρ = ⎜ -------------⎝ k 2 C 2n L⎠ where
(2)
L = path length 2 Cn
2
= refractive index structure constant, as further defined in the “ C n Estimates rule” (p. 52)
For situations in which the atmosphere is not uniform, additional computations, shown in the references, must be performed.
Discussion Here, we have made some assumptions to avoid introducing a complicated calculation. The first term is the beam spread caused by diffraction. The second term is the additional spreading associated with turbulence effects. This rule comes from a combination of the analytic description of laser beam propagation in a vacuum (the diffraction component) along with a simplified assumption about the way that other beam spreading effects, such as the atmosphere, add to the theoretical beam spread.
178
Chapter Nine 2
These estimates of beam size are as good as the quality of the estimates of C n , as the description of beam spreading derived from the theory of laser resonators is a mature science. It should be noted that various authors use two different descriptions for ρ. One is for plane waves, and the other is for spherical waves. The above expression for ρ is the typical form used for laser beams propagating in the atmosphere. The plane wave case is typically used only for starlight propagating in the atmosphere. It is also true that Fried’s parameter, discussed elsewhere in this chapter, has the identical form as ρ, but it differs by a roughly a factor of 2. Reference 2 reports that, for short propagation distances, a horizontal laser beam spreads to a waist size wb, expressed as ⎛ w2 + 2.86C 2 k 1 ⁄ 3 L8 ⁄ 3 w1 ⁄ 3 ⎞ n o ⎠ ⎝ o
1⁄2
(3)
2
and over longer distances, L » πwo ⁄ λ . 2
2 2 3 –1 ⁄ 3 4L wb = ---------- + 3.58C n L wo 2 2 k wo
(4)
In these equations, wo refers to the beam size as it exits the laser. Laser beams are used for so many applications that the study of their beam spread in turbulence is of considerable use. Clearly, the simplest results are obtained when one can accurately assume that C 2n is constant. This is rarely the case, but the assumption is adequate for many applications. Considerable attention has been paid to this problem by a number of researchers. Recently, the emphasis has been in three areas: military applications, communications based on modulated laser beams, and astronomical telescope systems for which atmospheric corrections are made. In the latter case, a laser beam is used to create a synthetic star to provide information on corrections that must occur to remove the effects of the atmospheric turbulence. The reader is alerted to another version of this rule. In Ref. 3, we find that the beam spread depends not only on the system parameters but also on the number of correlation lengths across the aperture D. In what follows, rdl is the diffraction-limited beam spread radius (which ignores the effects of the atmosphere) and is equal to λ/πD. 2 ⎛ D⎞ Beam spread = ⎜ 1 + 0.182 -----2-⎟ ⎝ ρ ⎠
where
1⁄2
r dl ,
D ---- < 3 ρ
(5)
D = beam diameter ρ = coherence scale rdl = diffraction-limited value of the beam radius at the receiver plane
This expression is valid for D ⁄ ρ < 3. For D ⁄ ρ from 3 to 7.5, the expression is D 5⁄3 D Beam spread = 1 + -----2- – 1.18⎛ ----⎞ ⎝ ρ⎠ ρ 2
1⁄2
r dl ,
D 3 < ---- < 7.5 ρ
(6)
A simplification can also be used, as follows: D 1.24 Beam spread = 0.423⎛ -----⎞ r dl , ⎝ ρ0⎠
D 3 < ----- < 7.5 ρ0
(7)
Lasers
179
Equation (5) is the same as Eq. (1) except for the coefficient of the second term in the radical, which changes from 1 in Eq. (1) to 0.182 in Eq. (5). Equations (6) and (7) show entirely different forms when compared with Eq. (1). They apply to cases in which the turbulence is more profound. Keep in mind that a smaller coherence length is found when the turbulence is greater. Therefore, when the coherence drops, the regime of Eqs. (6) and (7) must be used. Note that the results shown in the last three equations derive from curve fitting of results obtained from numerical calculations.
Reference 1. R. Tyson and P. Ulrich, “Adaptive Optics,” in Volume 8, Emerging Systems and Technologies, S. Robinson, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 180, 1993. 2. R. Tyson, Principles of Adaptive Optics, Academic Press, San Diego, CA, p. 32, 1991. 3. Y. Yenice and B. Evans, “Adaptive Beam-Size Control Scheme for Ground-to-Satellite Optical Communications,” Optical Engineering, 38(11), pp. 1889–1895, November 1999.
LASER BEAM SPREAD COMPARED WITH DIFFRACTION A Gaussian spherical wave spreads considerably less than a plane wave diffracted by a circular aperture.
Discussion A plane wave passing through an opaque screen has an angular diameter (d) caused by diffraction of 2.44 (λ/d) and contains 84 percent of the beam power. A Gaussian spherical wave contains 86 percent of its power in an angular diameter of 2 (λ/d), where d is defined as πwo, and wo is the diameter of the waist (or smallest) size of the beam in the beamforming optics. In the examples above, the light is propagated through a circular aperture. It is worth noting that, if the aperture is equal to πwo, 99 percent of the power in the beam is transmitted through the aperture. The interested reader will want to read the rules in this chapter that pertain to diffraction and the “Etendue or Optical Invariant” rule (p. 286). This result should not be overlooked in any application where laser beams are to be propagated. That is, one should not assume that the typical diffraction formula λ 2.44 ---D applies for lasers. The specific results given above derive from a definition of the “size” of the beam. Because a Gaussian beam has an extent that is not well defined, some latitude must be accepted in the power numbers that are selected. For example, Siegman1 points out examples in which the results vary, depending on how the beam radius is defined. Smith2 points out that the beam spread far from the beam waist is 1.27λ 4λ α = ----------------- = --------------------π( 2w0 ) diameter for beams defined by its 1/e2 points.
References 1. A. Siegman, Lasers, University Science Books, Mill Valley, CA, p. 672, 1986.
180
Chapter Nine
2. W. J. Smith, Modern Optical Engineering, McGraw-Hill, New York, p. 166, 2000. 3. H. Weichel, Laser System Design, SPIE Course Notes, SPIE Press, Bellingham, WA, p. 72, 1988.
LASER BEAM WANDER VARIANCE 1. The variance (σ2) in the position of a beam propagating in the atmosphere is 2 –1 ⁄ 6 17 ⁄ 6
1.83 C n λ
L
where L = length of a horizontal path 2
C n = atmospheric structure constant λ = wavelength in consistent units 2. The square root of the variance is the standard deviation of the beam wander.
Discussion A whole generation of atmospheric scientists have worked on the problem of laser beam propagation in the atmosphere. Ultimately, all of the work derives from a seminal analysis performed by the Russians Rytov and Kolmogorov. Fried has also made important contributions to the theory. Military and astronomical scientists have extended the theory and have made considerable progress in demonstrating agreement between theory and experiment. The theory is too complex to repeat here. Fortunately, the effect on propagation can be expressed with relatively simple algebraic expressions such as the one shown above. As with any rule related to the atmosphere, the details of the conditions really determine the propagation that will be observed. This result assumes that the value of C 2n along the path is constant and is of such a value that the turbulence effect falls into the category of “weak.” This means that the variance in the beam intensity is less than about 0.54. Otherwise, the assumptions inherent in Kolmogorov’s adaptation of Rytov’s work no longer apply, and the results are flawed. Use of this rule defines the size that a receiver must possess to encounter the bulk of a beam used for communications, tracking, or other pointing-sensitive applications. The mathematics behind this analysis, first done by Tatarski, are beyond the scope of this book. Suffice it to say that the result shown above is a rather substantial simplification of the real analysis that must be performed. For example, Wolfe and Zissis provide a more complete analysis and show how the beam wander is translated into motion of the centroid of the beam in the focal plane of a receiver. The nighttime value of C 2n is about 10–14 m2/3. For a path length of 5000 m and a wavelength of 0.5 µm, σ ≈ 78 mm.
References 1. H. Weichel, Laser System Design, SPIE Course Notes, SPIE Press, Bellingham, WA, 1988. 2. W. Wolfe and G. Zissis, Eds., The Infrared Handbook, ERIM, Ann Arbor, MI, pp. 6–37, 1978.
LASER BRIGHTNESS The brightness of a single-mode laser can be closely estimated by dividing the power-area product by the wavelength squared.1
Lasers
181
PA B ≈ ------2λ where B = brightness of the beam (watts/steradian) P = power of the laser (watts) A = area of the radiating aperture λ = wavelength
Discussion This rule derives directly from defining the on-axis irradiance of a laser as the brightness divided by the range squared. Brightness is defined as the ratio of the power output to the solid angle into which the beam is projected, so Power of the beam Brightness = --------------------------------------------------------Solid angle of the beam The solid angle of the beam is the area of the beam at the target divided by the distance to the target squared. area Solid angle = --------------2range λ The area of the beam is approximately the square of the product of the beam angle, ≈ ---- , D and the range λ ⎞ ⎛ ---R ⎝D ⎠
2
Therefore, brightness is 2
PD PA --------- = ------22 λ λ The irradiance that is created also depends on the optical quality of the beam. This is sometimes more fully expressed as2 PA irradiance = --------------------2 2 2 λ BQ R where BQ represents the beam quality measured as a factor that is equal to unity for diffraction-limited performance and a number exceeding unity for all other cases. Using this formulation, we find that brightness is defined as PA --------------2 2 λ BQ This rule is related to the antenna theorem,3 which states that AΩ ≈ λ2. This can be illustrated simply by noting that, for a diffraction-limited beam, the solid angle obtained (Ω) is
182
Chapter Nine
1.22λ 2 π⎛ -------------⎞ ⎝ D ⎠ 2
and the area of the aperture is ( πD ) ⁄ 4 , so the product of these two terms is 3.67λ
2
As an example of laser brightness, consider a HeNe laser operating at 0.6328 µm and a line width of 1000 Hz, producing 1 mW, which radiates as if it were a blackbody of temperature 1010 K if it emits through a 1-mm aperture. Such a laser projects a beam with a solid angle of 1.88 × 10–6 sr. The bandwidth is 1.335 × 10–12 µm. The area from which the beam is projected is 7.8 × 10–7 m2. Therefore, the spectral radiance is 5.1 × 1020 w/sr/µm/m2. One can use Planck’s formula to determine the equivalent temperature of a blackbody required to achieve this output. The appropriate equation is 2
2c h L = --------------------------------5 hc/λkT – 1) λ (e where k = Boltzmann’s constant h = Planck’s constant c = speed of light This formula shows that the temperature is hc 1 T = ------ ----------------------------λk 2 2c h ln 1 + ---------5 λ L
References 1. G. Fowles, Introduction to Modern Optics, Dover Publications, New York, p. 223, 1975. 2. G. Golnik, “Directed Energy Systems,” in Vol. 8, Emerging Systems and Technologies, S. Robinson, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 451, 1993. 3. A. Siegman, Lasers, University Science Books, Mill Valley, CA, p. 672, 1986.
LED VS. LASER RELIABILITY Mean time between failure (MTBF) is approximately 106 to 107 hr for LEDs operating at 25°C. Conversely, commercially available lasers have an MTBF of about 105 hr at 22°C.
Discussion Advancements in materials technology will continue to improve these values but, for now, these rules apply. Also, be aware that the operating temperature of various lasers has a dramatic effect on the life of these components. Additionally, the MTBF of high-technology, high-power lasers is usually several orders of magnitude lower.
Lasers
183
For optical communication, commercially available lasers have failures from defects in the active region, facet damage, and nonradiative recombination in the active region. A great deal of work is being invested in extending laser reliability. A light-emitting diode (LED) is a PN junction semiconductor diode that emits nearly monochromatic (single-color) light when operated in a forward-biased direction. The first usable LEDs were developed in the 1960s by combining gallium, arsenic, and phosphorus (GaAsP) to obtain a 655-nm red light source. These devices produced about 1 to 10 millicandela at 20 mA. The most common materials in second-generation LEDs were GaP green and red, GaAsP orange, and high-efficiency red and GaAsP (yellow). By the 1980s, a new material, gallium aluminum arsenide (GaAlAs) was introduced. These devices have a life expectancy better than 10 times greater than that of standard LEDs, which derives from increased efficiency and multilayer, heterojunction type structures. GaAlAs produces light at a 660-nm wavelength. Some GaAlAs LEDs may decrease in output by 50 percent after only 50,000 to 70,000 hr of operation if operated in high-temperature and/or highhumidity environments. Perhaps the best way to compare the reliability of the various types of laser diodes is the activation energy (AE) associated with failure. As explained in the “Arrhenius Equation” rule in the Chap. 11 (p. 217), AE is a measure of how sensitive a device is to potential failure. The higher the value of AE, the better the reliability. The reader should keep in mind that the value of AE is used in an exponential equation, so the reliability is dramatically higher as AE increases. The Table 9.1 compares values of AE (in electron-volts) and the relative reliability.1 The relative lifetime assumes that equal amounts of drive current and operating temperature will be used in all cases. In general, conditions will vary, but this assumption does allow comparison of the inherent reliability of the devices. TABLE 9.1 Activation Energy Comparison
Diode type
Approx. activation energy (eV)
Relative life
AIGaAs/GaAs lasers
0.7
4.3 × 10–4
AlGaAs LEDs
0.5
1.9 × 10–7
InGaAsP/InP (longer wavelength)
0.16
3.6 × 10–13
InGaAsP/InP buried heterostructure
0.9
1.0
GaAlAs double heterostructure LED
0.56
1.9 × 10–06
Clearly, several of the diode types need to be operated at lower temperature and current density to obtain useful lifetimes. Finally, in Table 9.2, we compare the properties of LEDs and laser diodes.1,3
Reference 1. M. Ott, Capabilities and Reliability of LEDs and Laser Diodes, available at http:// nepp.nasa.gov/photonics/pdf/sources1.pdf, 2002. 2. N. Lewis and M. Miller, “Fiber Optic Systems,” in Vol. 6, Active Electro-Optical Systems, C. Fox, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 258–259, 1993. 3. http://www.gamma.ru/technica/articles-1/leds_lasers.htm, 2003.
184
Chapter Nine
TABLE 9.2 Comparison of Typical Parameters of Interest for LEDs and Laser Diodes Attribute Feasible wavelength3
LEDs 0.40 to 1.5 µm
Laser diodes 0.63 to 1.5 µm
Best choice of technology as a function of desired operating wavelength 665 nm
GaAsP
GaAlAs
800 to 930 nm
Ga1–xAlxAs*
Ga1–xAlxAs*
1300, 150 nm
InGaAsP
InGaAsP
Monochromaticity
monochromaticity
nonmonochromatic
Radiative recombination
spontaneous emission
stimulated emission
Coherence
Incoherent
coherent
Pulse duration3
100 µs
70 ns
Polarization direction
random
Spectral width
linear 2
∆λ ≈ 1.45λ kT , with λ in
broad
µm, kT in eV, k = Boltzmann’s constant, T = junction temp Spectral width, GaAlAs
Tens of nm
<1.5 nm
Spectral width, InGaAsP
Surface emitting, 100 nm Edge emitting, 60–80nm
0.1 to 10 nm
Significant parameters
BW vs. power BW increases at the expense of power
Threshold current, index guided: 10 to 30 mA Gain guided: 60 to 150 mA
Reliability lifetimes
105 to 108 hr
105 hr
Temperature effects
Increases wavelength by 0.6 nm/°C
Wavelength varies by 0.25 nm/°C, threshold current rises by 0.5 mA/°C
Rise time†
1 to 100 ns
<1 to 10 ns
Output power
10 to 50 (high power) µW
1 to 1000 mW
Maximum power3
100 mW
100 W
Modulation
3 to 350 MHz
>350 MHz
Single-item price3
U.S. $0.1 to $1
U.S. $3 to $100
Radiation power that can be coupled in a 200-µm light-guiding fiber3
0.5%
50%
Maximal efficiency of stock produced devices3
3%
10%
Average divergence angle
20° to 50°
~ 20° to 30°
Maximal illumination intensity from a single radiation source in a continuous-wave (CW) mode3
0.1 W/cm2
200 W/cm2
*x
is between 0 and 1 in Ga1–xAlxAs general, the bandwidth-to-rise time relationship is calculated as BW = 0.35/rise time
†In
Lasers
185
LIDAR PERFORMANCE A laser ranger, also called a ladar or lidar (usually defined as laser radar) has a signal-tonoise ratio that varies as 1/R2, where R is the distance from source to target. This applies when the beam is smaller than the target at the target range. When the beam is bigger than the target, the return varies with distance as 1/R4.
Discussion The geometry of the problem shows that when the beam is smaller than the target, all of the radiation from the laser hits the target. The reflected light is scattered into a hemisphere, a part of which includes the receiver. In this case, the amount of light received is proportional to 1/R2. In the case of a beam larger than the target, some of the light does not participate in creating the return signal. This means that the amount of energy imposed on the target by the laser goes as 1/R2. That energy then scatters toward the sensor with the aforementioned 1/R2 factor. Therefore, in this case, the signal decreases as 1/R4. It is also true that if the system is intended to image the object, the comments above apply to each pixel. That is, if the beam illuminates an area larger than an imaging pixel, the signal will drop as the fourth power of the distance. However, the reader should note that the number of pixels participating in forming the image increases as R2, so the overall effect is that the part of the beam that intersects the target creates an image that grows as range squared. Moreover, the size of each increases as R2, so the overall effect of an change in target range is 1/R2. Atmospheric effects and the surface properties of the target add additional effects to the expected performance of a lidar. For example, a target with a specular surface can actually make the target nearly invisible, because the reflected radiation may have very little component in the direction of the receiver. Thus, a mirror can be nearly impossible to detect using active sensors if it is tilted away from the laser source. In addition, highly absorptive surfaces will further suppress the amount of reflected energy that is detected. Thus, flat, highly absorptive surfaces may cause the range sensitivity of a lidar to be far worse than 1/R4. Again, the 1/R2 applies only when the footprint of the projected beam completely falls on the target. Figure 9.1 shows how the beam divergence affects the performance of a ranging system. The small circle represents a laser beam of small divergence that encounters the target. In
FIGURE 9.1 Comparison of illumination beams that are both larger and smaller than the target.
186
Chapter Nine
this case, none of the light from the laser is lost. The larger beam does not use all of the available light. A substantial part of the light is lost and cannot contribute to the signal. It should be noted that, in assessing the relative performance of the two cases, we assume that both have the same amount of laser power available.
ON-AXIS INTENSITY OF A BEAM For a beam with no aberrations, the on-axis intensity, in watts per area, is1 PA ----------2 2 R λ where R = range P = beam power in watts A = transmitting telescope area in square meters λ = wavelength in meters
Discussion If aberrations exist and are characterized by beam quality BQ, defined elsewhere in this chapter (see p. 174), we get PA --------------------2 2 2 BQ R λ This form is a direct result of the definition of beam quality as a constant that multiplies the beam spread associated with diffraction. As a result of the multiplication, the beam is bigger in each dimension by a factor of BQ, so the area over which the beam is spread is proportional to 1/BQ2. This rule is derived from basic laser theory and applies in general. It is limited by the assumption that the beam is propagating without significant atmospheric or other path effects. The discussion below illustrates how such factors complicate things. In many applications, the size of the detector is smaller than the beam at the destination. As a result, the on-axis intensity represents the maximum power that can be delivered into such a detector. As described in more detail below, the above formula is for the ideal case. The presence of aberrations in the beam expander and/or laser, along with atmospheric influences, will reduce the power that can be delivered. The far-field intensity for a circular aperture with reductions caused by diffraction, transmission loss, and jitter is2,3 2
I o TK exp ( –σ ) I ff = -----------------------------------------2 1 + ( 1.57σ jit D/λ ) where Io = intensity at the aperture T = product of the transmissions of the m optical components in the telescope K = an aperture shape factor, described in Ref. 4, that is found to be very nearly unity in most cases σ = k∆Φ k = propagation constant = ( 2π ) ⁄ λ
Lasers
∆Φ = σjit = D= λ=
187
wavefront error two-axis RMS jitter aperture diameter wavelength
This leads to a definition of brightness of a laser with jitter and wavefront error,3 2
2
πD PTK exp ( –σ ) Brightness = ----------------------------------------------------------2
4λ 1 + ( 1.57σ jit D/λ )
2
where P = laser power We also note that, when the wavefront error, σ, is zero and the jitter term is zero, we get the following: APTK PA Brightness = --------------= ------22 λ λ Therefore, the on-axis intensity is equal to Brightness -----------------------2 Range
References 1. E. Friedman, “On-Axis Irradiance for Obscured Rectangular Apertures,” Applied Optics, 31(1), pp. 14–18, January 1, 1992. 2. K. Gilbert et al., “Aerodynamic Effects,” in Vol. 2, Atmospheric Propagation of Radiation, F. Smith, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 256, 1993. 3. R. Tyson, Principles of Adaptive Optics, Academic Press, San Diego, CA, p. 16, 1991. 4. D. Holmes and P. Avizonis, “Approximate Optical System Model,” Applied Optics, 15(4), p. 1075, April 1976.
PEAK INTENSITY OF A BEAM WITH INTERVENING ATMOSPHERE Peak intensity (W/m2) of a beam going a distance L (m) is –εL
Pe -----------------------------2 2 2 πL ( σL + σB ) where
L = distance σB = effect of blooming, radians ε = atmospheric extinction in units that are the inverse of the distance units P = beam power at the transmitter in watts 2 2 2 σL = combined effect of linear beam spread functions and is equal to σD + σT + σJ
188
Chapter Nine
σD = combined effect of diffraction and beam quality in radians σT = effect of turbulence in radians σJ = effect of jitter in radians
Discussion The beam intensity at some distance, L, is the result of the combined effect of beam spreading and atmospheric attenuation. The latter is contained in the exponential term in the numerator. It contains both the absorption, which removes energy from the beam, and scattering, which redirects the energy but removes it from the beam. The denominator simply describes the area over which the beam will be spread at the distance L. It relies on several terms to describe the size of the beam, as described above. Of course, the description of the beam shape is a simplification, particularly with respect to atmospheric effects. The range of limitation really applies to the parts of the rule relating to atmospheric effects. The estimation of the impact of the atmosphere applies as long as the turbulence falls into the “light” category, i.e., the regime in which profound scintillation does not occur. A number of rules in this chapter deal with how to compute the conditions that apply for light turbulence. The diffraction effect is 2BQλ 2 σo = ⎛ --------------⎞ ⎝ πD ⎠
2
where BQ = beam quality at the aperture Note that when BQ = 1, we get the diffraction effect, which cannot be avoided. When D ⁄ r o < 3 , which applies for short paths, small apertures, or very light turbulence, σD 2 D 2 2 σT = 0.182⎛ --------⎞ ⎛ ----⎞ ⎝ BQ⎠ ⎝ r o⎠ where r0 = Fried’s parameter, discussed in Chap. 3, “Atmospherics,” and elsewhere in this chapter (The term r0 varies from about 10 cm for a vertical path through the entire atmosphere from sea level to several meters for short horizontal paths.) When D ⁄ r o > 3 , 2
⎛ SD ⎞ D 2 D 5⁄3 2 σT = ⎜ ----------⎟ ⎛ ----⎞ – 1.18⎛ ----⎞ ⎝ ⎠ ⎝ r o⎠ ⎝ BQ ⎠ r o Slightly different forms of these equations appear in the “Laser Beam Spread” rule (p. 177). In that rule, the expressions define the linear size of the beam at some distance from the source. Here, we have shown the angular size of the spreading beam. Also, note that the first equation reverts to PA --------------------2 2 2 BQ λ L when there are no atmospheric effects, which is consistent with other rules that describe the case in which no atmosphere is present.
Lasers
189
Reference 1. R. Tyson and P. Ulrich, “Adaptive Optics,” in Vol. 8, Emerging Systems and Technologies, S. Robinson, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 198, 1993.
POINTING OF A BEAM OF LIGHT The standard deviation in pointing error (σ) for a laser illuminator is defined as θd 1 σ = ----- --------------------------------2 –2 ln ( 1 – P ) h In this equation, Ph, is the probability that a beam of half-angular radius θd will be pointed to within its own radius (θd/2).
Discussion Note that the definition of the beam angular radius is a choice to be made by the user of this equation. For example, it could be the half-power point of a Gaussian laser beam or the 1/e point. The equation applies in either case, but the choice must take into account that the definition of beam radius determines the actual amount of power imposed on the target. Figure 9.2 illustrates the example. Note that the equation applies regardless of the distribution of energy in the beam. For example, if the beam has a Gaussian distribution of energy, θd could describe the 1/e points of the beam. Now for the derivation of the equation. The probability that the beam is within a solid angle Ω is ⎛ θ ⎞ -⎟ exp ⎜ – ---------⎝ 2σ2 ⎠ P( Ω ) = ----------------------------2 2πσ
FIGURE 9.2 this rule.
Geometry used in deriving the equations for
190
Chapter Nine
The probability that the beam is pointed at the target within the angle θ is ⎛ θ ⎞ exp ⎜ – --------2-⎟ ⎝ 2σ ⎠ dΩ P( θ ) = P( Ω ) ------- = θ --------------------------2 dθ σ If we want to compute the probability that the beam is pointed to within one-half of the beam diameter, we integrate P(θ) over that interval as follows. θδ /2
P=
∫ 0
⎛ θ ⎞ exp ⎜ – --------2-⎟ ⎝ 2σ ⎠ - dθ θ --------------------------2 σ
Of course, other measures of merit can be chosen as well, such as 1/10 of the beam diameter. For the case of θd/2, some manipulation results in 2
P = 1–e
–θd /8σ
2
Solving for σ, we get the equation displayed at the beginning of this rule. The rule, as stated, applies to the illumination of a point target by a Gaussian beam. Larger targets are, of course, easier to hit. The size of the target is added to θd/2. The analysis also assumes that there is no bias in the pointing of the beam. If one performs calculations at the half-power point or 1/e points of a laser beam’s intensity, one is building in a margin. Usually, the power will be 50 to 70 percent higher. If multiple “hits” or multiple observations are allowed, then a comfortable margin is built in, as it is unlikely that random errors will result in several observations at the minimum points. Suppose we have a beam that has a divergence, measured in half-cone angle, of 10 mrad. What pointing is necessary to ensure that the beam will encounter a point target with a probability of 0.99? This works out to be 1.67 × 10–3 radians. This result is consistent with the general conclusion that for a high probability of pointing to within the beam radius, the pointing must be about 1/5 of the radius. Of course, the equation can be manipulated to calculate the probability of hit if the beam dimension is given.
PULSE STRETCHING IN SCATTERING ENVIRONMENTS The increase in transmitted laser pulse duration ∆τ caused by scattering in the atmosphere can be estimated as 1.5 L ⎛ 0.3 ⎞ ⎛ 2 -⎟ 1 + 2.25aτθrms⎞ – 1 – 1 ∆τ = --- ⎜ ----------------⎠ c ⎝ aτθ2rms⎠ ⎝
where L = propagation distance c = speed of light a = single scatter albedo ≈ 1
Lasers
191
τ = product of the scatter cross section per unit volume and the propagation distance (The value of τ ranges from 12 to 268 in the experiments illustrated in the paper, and it is unitless.) θrms = rms scatter angle ≈ 30° for water (expressed in radians in the equation)
Discussion This result comes from an analysis, using some simplifying assumptions, of the multiple scattering that occurs in the atmosphere. In addition, the formulation uses some results from the classical theory of electron scattering of electromagnetic waves. It is likely that the rule breaks down for optical depths (τ) in excess of about 300, so it will not work well for dense fogs and clouds. Those involved in laser communications in the atmosphere will find this rule helpful, because pulse stretching limits the data rate of the channel. This is evident from the consideration of situations in which the last photons to arrive from a first pulse are still arriving (having gone through many multiple scattering paths) as the first light arrives from a second pulse. Clearly, this would confound the receiver and prevent it from properly interpreting the data. Stotts1 shows that this formulation compares well with both experiment and more complex simulations using Monte Carlo methods. In view of the simplicity of the rule, this is a most attractive place to start. Systems that require more accuracy can use the Monte Carlo methods referenced in Stotts’ paper. The results in the reference show that pulse stretching on the order of microseconds results from situations in which fog or clouds are present. This will have profound impact on the ability of a pulsed communication system.
Reference 1. L. Stotts, “Closed Form Expression for Optical Pulse Broadening in Multiple-Scattering Media,” Applied Optics, 17(4), p. 504, February 15, 1978.
THERMAL FOCUSING IN ROD LASERS Thermal focusing is related to rod geometric and material parameters via the equation αr 0 ( n0 – 1 ) 1 η 1 dn 2 --- = ------- -------- ------ + n0 αC r ,φ + ------------------------f KA 2n0 dT L where
–1
PPFN
f= PPFN = η= n0 = dn/dT = K= A= α= C=
focal length of the thermally induced lens average pump power fraction of the electrical power absorbed by the rod refractive index at ambient temperature temperature coefficient of the refractive index thermal conductivity rod cross section thermal expansion coefficient elasto-optical coefficients in the radial (index r) and tangential (index φ) directions r0 = ambient temperature radius of the rod and L is its length1,2
Discussion In this rule, we are referring to lasers that employ a rod-shaped amplifying medium. Examples would be Nd:YAG, Ti:sapphire, and others. A focusing effect in a laser rod can
192
Chapter Nine
mean changes in the performance of a system in which it is employed. This rule provides insight into the fact that focusing will occur in the laser itself, and this will change the focus point of the entire system because, in most cases, the transmitter optics will be expecting to project a beam that is afocal. Clearly, the user of the rule has some work to do to dig up the required parameters for a particular application. The reader is reminded that the focal length of a plano-convex lens is related to the index of refraction by the following equation: 1 1 --- = ( n – 1 ) --R f where f = focal length of an optic n = its index of refraction R = radius of curvature of the convex surface The radius induced in front of the laser results from the heating (and expansion) of the medium through which the beam is passing. From this we can see that changes in the index will change the optical behavior of the material. Using simple calculus, we find that R∆n –∆f = ----------------2(n – 1) this shows that a positive change in index shortens the focal length of the lens. This is to be expected, as the focal length of the medium prior to the heating is infinite. Of course, the equation in the rule also allows for a negative change in the index, should that occur. The first term on the right-hand side of the equation in the rule derives from the radial change in index of refraction of the rod material as heat is absorbed by the rod. This represents the major effect among the three terms. Because the index varies with radius, a spherical wavefront is induced, which is equivalent to placing a lens at the end of the rod. The second term results from stress associated with elasto-optical effects in the rod. Two terms must be considered, because the change in index is different for each polarization of the light in the rod. This represents about 20 percent of the total focusing effect. The third term represents the curvature of the end face of the rod. This leads to a focal length change. For example, if only the last term is considered, we get AL K f = ------------------------- ------ar 0 ( n0 – 1 ) ηP For Nd:YAG, this amounts to about 6 percent of the effect. Note that the focal length resulting from this term is infinite (flat wavefront) if P is 0 (meaning no power is injected into the rod), α is zero (meaning that temperature changes do not affect the length of the rod), or η is zero (meaning that the rod is completely transparent at the pump wavelength). For a material with high thermal conductivity, the focal length remains high (remember that no effect means that f is infinity), and a low thermal expansion coefficient has the same effect. Similarly, if the power per unit volume is low, the focal length is large, because the second fraction would have a near-zero denominator. Reference 3 provides the following numbers for key parameters for Nd:YAG: α
K no Cr Cφ
7.5 × 10–6/K 0.14 W/cmK 1.82 0.017 –0.0025
Lasers
193
Reference 4 presents a similar set of terms in calculating the change in optical path length (hence phase) as a result of thermally induced effects. The optical path length is Λ = n0 L + ∆nthermal L + ∆nstress L + n0 ∆Lthermal where
n0 = material index L = length ∆nthermal = change in index as a result of change in temperature ∆nstress = change in index as a result of change in stress ∆Lthermal = change in length of optic
The first term is the traditional optical path length (index times distance through the optic). The second and third terms in the equation can be written more explicitly. The second term is rewritten in terms of the change in the index of refraction induced by the temperature change in the optical material. The third term is described by a nonlinear effect in which the change in temperature creates stress within the material. The stress, in turn, changes the index. In many materials, the third term (stress) is small compared with the thermal effects. dn ∆Λthermal = ------L∆T dT 3
n0 ∆Λstress = – ----- ρ12 αL∆T 2 where ρ12 = photoelastic coefficient of the optical material α = thermal expansion coefficient The last term is written as ∆Λ exp ansion ≈ 2αno ω∆T where ω = the 1/e2 radius of the incident laser beam (This term relates to the change in the size of the optics as a result of its change in temperature.) Finally, we include a description of the time-dependent behavior of the index resulting from heating by the laser light.5 The time dependence of the index of refraction (n) in the thermal lens induced by laser light passing through a material is approximated as 2 ⎛ dn ηP 2t ⎞ 2( r/ω ) ∆n( r,t ) = ------ ------------ ln ⎜ 1 + --------⎟ – ----------------------dT 4πJκ ⎝ t c ⎠ 1 + ( t c /2t )
where tc is the characteristic thermal time constant given (in seconds) by 2
ω t c = ------4D where the following definitions apply. Example values are given for ethanol in Table 9.3.5
194
Chapter Nine
TABLE 9.3 Definitions Abbreviation
Parameter definition and units
r
radius from center of beam
ρ
density (g/cm3)
ω
beam waist in millimeters
D
thermal diffusivity; κ ⁄ ( ρc p ) in mm2/second
κ
thermal conductivity in cal/cm s K
cp
specific heat in cal/gram/K
dn/dT J
K–1 constant of energy conversion in J/calorie
Value for ethanol
0.79
0.0939 4.23 × 10–4 0.57 4 × 10–4 4.184
The authors of Ref. 5 emphasize that some approximations are made to derive this equation. The following list summarizes those assumptions: 1. The pump beam is Gaussian and in TEM00 mode, and the beam size remains constant in the sample. 2. The sample is homogeneous, and its optical absorption satisfies Beer’s law; that is, both fluorescence and nonlinear effects are neglected. 3. There are no convection effects. 4. r/ω << 1 5. The value of dn/dT remains constant over the range of temperature variation in the sample.
References 1. M. Tilleman, S. Jackel, and I. Moshe, “High-Power, High-Fracture-Strength, Eye-Safe Er:Glass Laser,” Optical Engineering, 37(9), pp. 2512–2520, September 1998. 2. http://www.intellite.com/Dissertation20-ch3wavefront.pdf, 2003. 3. W. Koechner, Solid-State Laser Engineering, 2nd ed., Springer Verlag, New York, p. 416, 1988. 4. J. Mansell et al., “Evaluating the Effect of Transmissive Optic Thermal Lensing on Laser Beam Quality With a Shack-Hartmann Wave-Front Sensor,” Applied Optics, 40(3), pp. 366–374, January 2001. 5. R.A. Escalona, Z. Rosi, and C. Rosi, “Space and Time Characterization of a Thermal Lens Using an Interferometric Technique,” Optical Engineering, 38(9), p. 1594, September 1999. 6. www.sintecoptronics.com/ref/YAGlaser.pdf, 2003.
Chapter
10 Material Properties
The optical designer is constantly challenged to make a design accommodate the thermal, mechanical, and vibrational environment in which it must operate. A wide range of tools make this possible, but the design process is still an art. Ironically, as optical technology has advanced, the demands on the performance of the mechanical structure and mechanisms involved have become more and more difficult. It is now common to find requirements for space optical structures’ stability to be well below a micrometer. Without such precise management, the exquisite wavefront control demanded by today’s systems could not be met. Even ground telescopes must exhibit outstanding mechanical system performance. In both cases, the modern designer is blessed and cursed by the availability of sensing and actuation systems—blessed because they provide the last level of control of large optical systems and make their revolutionary performance possible, and cursed because the range of technologies now required to design such systems is beyond the capacity for any one person to understand. The modern, large optical system requires thoughtful input from a variety of skilled engineers, including structural, controls, materials, computer, mechanical, and optical engineers, and those associated with the specific operating environment. A space observatory must, of course, be designed by a team that includes all of those mentioned above as well as experts in spacecraft, attitude control, power, thermal, and other areas. The ground telescope is only slightly less complex, as its design team must include those familiar with geotechnical issues, aerodynamics, and civil engineering. Large optics on airplanes and balloons are in the works as well. NASA’s Stratospheric Observatory for Infrared Astronomy (SOFIA) is an international project that must address many complex mechanical and optical issues. The new advancement that has made it possible to gracefully integrate the work of these various skills is the integrated model. Such models integrate the effects of optics, structures, controls, and disturbances to create a time-based simulation that shows how the system reacts to its environment. Deep within the overall model is the optical model that is derived from a ray-tracing system. The ray-tracing system is able to fully describe the impacts of decenters, tilts, and despaces of optical elements that might be induced by mechanical properties of the underlying structure. From the ray-tracing results, a sensitivity matrix is derived that allows for computationally efficient prediction of the effects of these motions. Using the sensitivity matrix allows a full-featured simulation to produce many
195
Copyright © 2004 by The McGraw-Hill Companies, Inc. Click here for terms of use.
196
Chapter Ten
small time steps in the history of the system, thereby showing the designer how the design responds to disturbances. To augment this optical capability, new tools are becoming available that allow very detailed analyses of structures with a high density of nodes. Using this type of simulation, a designer can generate a time-based prediction of the performance of a design. That is, at each moment (usually separated by small fractions of a second), one can see how the sensors and controls embedded within the optics and structure detect and react to the disturbances that would otherwise prevent high-performance operation. A central tool for such sensing is the wavefront sensor. Through its use, the state of the optical system can be detected, and signals can be sent to the appropriate structural or optical actuation systems. This chapter contains some of the simpler rules for dealing efficiently with predictions of how a system might perform. It is an eclectic array; for example, the first rule in the chapter deals with the general properties of the Cauchy equation, which is often used to estimate the index of refraction of transparent materials. On the other hand, we have a rule that deals with the shapes obtained when a fluid is spun to form a nearly parabolic surface. The formula provided in the rule is not just for fun; the University of British Columbia is among several groups building a telescope based on a primary mirror formed by spinning a tub of mercury. The mirror will have impressive size (about 6 m). This effort reprises (much better, of course) attempts conducted as early as 1872 (by Skey) to create high-performance metal mirrors by spinning a plate of mercury. In principle, this approach is the same as that used by Angel and others to make giant mirrors by spinning molten glass to create a near-net-shape paraboloid. In any case, the rules in this chapter cover an array of topics. This is, of course, a field that is the subject of entire books, so we must be content here to provide a few samples of useful shortcuts and to supply some references that the reader might find useful. For additional information, the technical literature (particularly Optical Engineering and Applied Optics magazines) offers numerous articles on the intersection of optics and mechanics. This is an art addressed most effectively in the workplace and only by a few real experts. The skilled practitioner is able to integrate skills in mechanisms, materials selection, machining, testing, and management of operational environment to ensure a successful design. In comparison with the number of books on optics, there are only a few on the field of electromechanical design. Significant among these are Yoder’s Opto-Mechanical Systems Design1 and Vukobratovich’s Introduction to Optomechanical Design,2 but both of these books were last updated more than a decade ago. Hopefully, each of these authors will create new editions that include the latest in materials and techniques for the use of the community. A more recent (2002) addition to the library is Integrated Optomechanical Analysis, by Doyle et al.3 An example of the importance of mechanical issues in optics is illustrative here. One of the authors (Friedman) was invited to witness the testing of a small telescope designed for a military application. The test was conducted in Florida. The vendor of the telescope (located in the northeastern U.S.) had already successfully tested the telescope in its own labs. 
A key encircled energy test (the ability of the telescope to form a sharp focus) was easily passed in the initial testing. Testing in Florida went badly, in spite of using an exact duplicate of the test done in New England. Continued testing showed that every test in New England went well, but the tests in Florida always failed. It was soon discovered that the temperature in the New England lab was about 2° cooler than the lab in Florida. This seemed like a minor difference, particularly because the telescope was going to be used in the field where temperature excursions in the degree range of 20s could be expected. Eventually, a bright young optical designer asked about the history of the aluminum billet from which the primary mirror had been made. After some explorations, it was found that the aluminum was something that was found lying around the optical shop and that the
blank was not heat treated before or after machining. The difference in temperature between the two laboratories was enough to stimulate stress in the material of the primary mirror. This caused a significant amount of astigmatism in the primary mirror. A second mirror, heat treated before and after diamond turning, worked just fine. Every experienced optical designer has a story like this. The problems evolve from incompatible materials, inadequate preparation of materials, use of the optics in an environment for which they were not designed, aging of materials, parts shocked in shipping, and on and on. The list is long, so you must employ all of the possible tools to be sure that your design will work as promised. Three tables in Appendix A relate to this chapter: Table A.19, Materials (Fundamental Mechanical Properties), on p. 381; Table A.20, Materials (Derived Mechanical Properties), on p. 382; and Table A.21, Materials (Derived Thermal Parameters), on p. 383. These tables summarize the mechanical properties of materials commonly considered for optical mirrors. Some of the materials (e.g., silicon, Pyrex®, and SiC) have other photonic uses as well. In addition, many of the materials that are described are candidates for the structure that must hold the telescope together. A large number of refractive materials are described in many sources. We emphasize the mirror materials here, as they are receiving a great deal of attention for astronomy, space optics for defense and remote sensing, and all types of applications that require low-mass solutions. The intrinsic properties of the materials are density, Young’s modulus, coefficient of thermal expansion, and thermal conductivity. Density drives mirror mass and areal density. These properties define how big a mirror can be made, delivered to the observation site (including the very expensive problem of putting telescopes into orbit), and controlled. Young’s modulus plays a critical role in determining the mirror stiffness, the fundamental frequency of a mirror substrate, and its interaction with sources of vibration (slewing, pointing, reaction wheels, gyros); higher values are preferred. The coefficient of thermal expansion controls how the material changes shape as temperature varies; a low value is preferred. High thermal conductivity is important, because it allows the mirror to reach a steady state faster; higher values are better. In addition to the inherent properties of the materials, we include tables of derived properties. For example, specific stiffness compares stiffness and mass. This property is an expression of structural efficiency. Materials that are not stiff and are heavy do not have a high fundamental frequency. The steady-state heat transfer coefficient is an expression of the effect of heat on the substrate. Mirrors with high heat conductivity and low thermal expansion will equilibrate quickly and, in the presence of thermal gradients, will distort very little; a low value is preferred.
References 1. P. R. Yoder, Jr., Opto-Mechanical Systems Design, 2nd ed., Marcel Dekker, New York, p. 310, 1986. 2. D. Vukobratovich, Introduction to Optomechanical Design, SPIE Press, Bellingham, WA, 1993. 3. K. B. Doyle, V. L. Genberg, and G. J. Michels, Integrated Optomechanical Analysis, SPIE Press, Bellingham, WA, 2002.
CAUCHY EQUATION
The Cauchy equation provides a simple estimate of the variation of index of refraction as a function of wavelength. The Cauchy equation is generally expressed as

n(λ) = a + b/λ²

Discussion
Some optics books provide the appropriate values of a and b for typical optical materials. For example, aluminum nitride has the values shown in Table 10.1 (Ref. 1).

TABLE 10.1
Polarization      a (for wavelength in µm)    b (for wavelength in µm)
Ordinary          2.035                       0.015
Extraordinary     2.078                       0.018
Another, generally more accurate, approach for estimating the infrared index is given by the Herzberger formula,²

n = A + BL + CL² + Dλ² + Eλ⁴ + Fλ⁶

where L = 1/(λ² – 0.028). The values of the various constants A through F depend on the material in question. An alternative approach for expressing the Herzberger formula is found in Ref. 6. It is

n(λ) = c₁ + c₂λ² + c₃λ⁴ + c₄L + c₅L² + c₆L³

and uses the same definition of L. An additional approach is provided by the Sellmeier formula; it applies to the index of refraction of materials from ≈365 to ≈2300 nm. The Sellmeier formula is used by some optics vendors to characterize the glasses they sell. They merely provide the constants and expect the consumer to do the computations.

n² – 1 = K₁λ²/(λ² – L₁) + K₂λ²/(λ² – L₂) + K₃λ²/(λ² – L₃)

The following formulas illustrate the use of the Sellmeier formulation to describe both the ordinary (o) and extraordinary (e) indices of calcite.³

n_o²(λ) = 1 + 0.8559λ²/[λ² – (0.0588)²] + 0.8391λ²/[λ² – (0.141)²] + 0.0009λ²/[λ² – (0.197)²] + 0.6845λ²/[λ² – (7.005)²]   (1)

n_e²(λ) = 1 + 1.0856λ²/[λ² – (0.07897)²] + 0.0988λ²/[λ² – (0.142)²] + 0.317λ²/[λ² – (11.468)²]   (2)
where λ = wavelength of the source in micrometers

The Conrady formula uses the following form:⁶

n(λ) = n₀ + c₁/λ + c₂/λ^3.5

Values for silicon for the Conrady formula are given by⁷

C₁ = 0.3292191
C₂ = –1677.394
Another rule in this chapter provides another approach for estimating the index and attenuation coefficient of silicon. Several other rules in this chapter provide additional algorithms for computing the index of refraction of other optical materials. In addition, Refs. 4 and 5 provide dispersion models for mercury halides and cesium lithium borate, respectively.
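To make these dispersion fits easy to compare, the short Python sketch below evaluates the Cauchy fit for aluminum nitride (Table 10.1) and the Sellmeier fit for the ordinary index of calcite, Eq. (1). The function names are ours; the coefficients are simply those quoted above.

import math

def cauchy_index(wavelength_um, a, b):
    # Cauchy equation: n = a + b / lambda^2, with lambda in micrometers
    return a + b / wavelength_um**2

def sellmeier_index(wavelength_um, terms):
    # terms is a list of (K, L) pairs: n^2 - 1 = sum of K*lambda^2 / (lambda^2 - L^2)
    lam2 = wavelength_um**2
    n2 = 1.0 + sum(k * lam2 / (lam2 - L**2) for k, L in terms)
    return math.sqrt(n2)

# Ordinary-ray aluminum nitride (Table 10.1) at 0.633 um
print(cauchy_index(0.633, a=2.035, b=0.015))      # about 2.07

# Ordinary index of calcite, Eq. (1), at 0.589 um
calcite_o = [(0.8559, 0.0588), (0.8391, 0.141), (0.0009, 0.197), (0.6845, 7.005)]
print(sellmeier_index(0.589, calcite_o))          # about 1.66, the familiar value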
References 1. D. Blanc, A. Cachard, and J-C Pommier, “All-Optical Probing of Material Structure Second-Harmonic Generation: Piezoelectric Aluminum Nitride Thin Films,” Optical Engineering, 36(4), April 1997. 2. D. Ren and J. Allington-Smith, “Apochromatic Lenses for Near-Infrared Astronomical Instruments,” Optical Engineering, 38(3), pp. 537–542, March 1999. 3. M. North-Morris, J. VanDelden, and J. Wyant, “Phase-Shifting Birefringent Scatterplate Interferometer,” Applied Optics, 41(4), February 1, 2002. 4. H. Schmitzer et al., “Phase-Matched Third-Harmonic Generation in Mercury-I-Chloride,” Applied Optics, 41(3), pp. 470–474, January 20, 2002. 5. J. Zhang et al., “Optical Parametric Properties of Ultraviolet-Pumped Cesium Lithium Borated Crystals,” Applied Optics, 41(3), pp. 475–482, January 20, 2002. 6. http://glassbank.inmo.ru/eng/help.php. 7. E. V. Loewenstein, D. R. Smith, and R. L. Morgan, “Optical Constants of Far Infrared Materials 2: Crystalline Solids,” Applied Optics, 12(2), p. 398, February 1973.
DIAMETER-TO-THICKNESS (ASPECT) RATIO FOR MIRRORS A rule of thumb is to make a glass mirror’s diameter six times its thickness. This is sometimes called the aspect ratio of the design.
Discussion
Large ground telescopes built before the recent advancements in active optics usually followed this general rule. Even the Hubble Space Telescope is close to having this set of dimensions; it has an aspect ratio of about 8 (Ref. 1). Of course, this ratio was defined by the desire to have enough stiffness in the mirror to avoid significant changes in its shape as the telescope tracks objects across the sky. Various ingenious methods were invented to offload the mirror weight, but still the aspect ratio has remained about the same since the large mirror-based telescopes of the early twentieth century. To some degree, the concept of aspect ratio has been lost, as all large mirrors are cast or machined to have large cavities in the back. These allow more rapid thermalization of the material and provide nearly the same stiffness as a full blank, with somewhat lower mass.
Roberts² points out that the aspect ratio for aluminum can range from 8 to 10. A value of 12 is appropriate for beryllium and advanced composites. The latest generation of giant mirrors for ground astronomy (which are equipped with actuators for constant control of the mirror shape) has a very high aspect ratio, with a typical number for a segmented system being about 100 (Ref. 3). This number is achieved by having quite conventional aspect ratios for the individual segments of the mirror. Monolithic large mirrors, such as the European Southern Observatory's Very Large Telescope, have an aspect ratio of about 45 (Ref. 4). In both the segmented and monolithic cases, this high performance can be achieved because of the actuation technology that ensures that the proper figure is always obtained. To do so by invoking only the properties of the materials involved would result in very small aspect ratios. In systems that do not use active supports and adaptive optics, the thickness of the mirror must be sufficient to support the mirror's shape. For circular disks freely supported at the edge, the center deflection is

δ = 12Pa⁴(5 + ν)(1 – ν²)/[64Et³(1 + ν)] = 3Pa⁴(5 + ν)(1 – ν)/(16Et³) = [3ρga⁴/(16Et²)](1 – ν)(5 + ν)

where P = pressure on disk (This is also computed as mg/πa² for the self-deflection of the disk.)
      t = thickness
      ν = Poisson's ratio
      E = Young's modulus
      a = radius of disk
      ρ = density
      g = acceleration of gravity

The equation shows that the deflection depends on the ratio of the fourth power of the radius to the second power of the thickness. Consider a 2.4-m diameter Pyrex mirror blank (a = 1.2 m). Physical properties of Pyrex are shown in Table 10.2. When suspended loosely by its edges and with an aspect ratio of 8 (t = 0.3 m), the center will sag about 6 µm as a result of the force of gravity.

TABLE 10.2
Material    ρ (kg/m³)    E (GPa)    ν
Pyrex®      2230         63         0.2
If the edges are constrained, a different equation pertains, and the center deflection is

δ = [ρgπa²t/(πa²)] × (a⁴/64) × [12(1 – ν²)/(Et³)] = 3ρga⁴(1 – ν²)/(16Et²)

where ρ = density in kilograms per cubic meter (kg/m³)
      ν = Poisson's ratio
      E = Young's modulus in gigapascals (GPa) (a pascal = 1 newton/m²)
In this case, the center deflection is about 1.5 µm. For applications of more than 1 g (such as in a missile seeker), or less than 1 g, the ratio can be adjusted accordingly. Small mirrors can often be much thinner—1/30th or less.
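As a quick numerical check of the two deflection expressions above, the following Python sketch (our own illustration, using the Pyrex values from Table 10.2) reproduces the roughly 6-µm and 1.5-µm sags quoted for the 2.4-m blank.

def center_sag_free(a, t, rho, E, nu, g=9.81):
    # Center deflection of a uniform disk freely supported at its edge (self-weight)
    return 3 * rho * g * a**4 * (1 - nu) * (5 + nu) / (16 * E * t**2)

def center_sag_clamped(a, t, rho, E, nu, g=9.81):
    # Center deflection of the same disk with its edge constrained
    return 3 * rho * g * a**4 * (1 - nu**2) / (16 * E * t**2)

# 2.4-m-diameter Pyrex blank with an aspect ratio of 8 (a = 1.2 m, t = 0.3 m)
a, t, rho, E, nu = 1.2, 0.3, 2230.0, 63e9, 0.2
print(center_sag_free(a, t, rho, E, nu) * 1e6, "um")     # about 6 um
print(center_sag_clamped(a, t, rho, E, nu) * 1e6, "um")  # about 1.4 um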
References
1. www.kodak.com/US/en/government/ias/heritage/hubble.shtml, 2003.
2. Private communications with Tom Roberts, 1995.
3. http://scikits.com/KFacts.html, 2003.
4. http://www.eso.org/outreach/info-events/ut1fl/whitebook/wb20.html, 2003.
DIP COATING
It is occasionally useful to know how thick an applied coating will be when one dips a hydrophilic substrate into a material. The Landau-Levich equation, below, applies.

t = 0.946(µU)^(2/3)/[σ_LV^(1/6)(ρg)^(1/2)] = 0.643(3µU/σ_LV)^(2/3)R

where µ = viscosity
      ρ = density
      σ_LV = surface tension
      g = acceleration of gravity
      U = speed of withdrawal
      R = radius of curvature of the meniscus region, equal to √[σ_LV/(2ρg)]
Discussion This rule describes the process of drawing a substrate vertically from a pool of coating material. It is assumed that the fluid is incompressible.
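The powers of ten in this expression are easy to misplace, so a short Python sketch is helpful. The example values below (roughly those of a viscous glycerol-water mixture drawn at 1 mm/s) are illustrative assumptions, not data from the references.

def landau_levich_thickness(mu, U, sigma_lv, rho, g=9.81):
    # Landau-Levich film thickness: t = 0.946 (mu*U)^(2/3) / (sigma_lv^(1/6) * (rho*g)^(1/2))
    return 0.946 * (mu * U)**(2.0 / 3.0) / (sigma_lv**(1.0 / 6.0) * (rho * g)**0.5)

# Illustrative SI inputs: viscosity 0.1 Pa*s, withdrawal speed 1 mm/s,
# surface tension 0.065 N/m, density 1200 kg/m^3
t = landau_levich_thickness(mu=0.1, U=1e-3, sigma_lv=0.065, rho=1200.0)
print(t * 1e6, "um")   # about 30 micrometers for these assumed inputs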
References 1. D. Hartmann et al., “Optimization and Theoretical Modeling of Polymer Microlens Arrays Fabricated with the Hydrophobic Effect,” Applied Optics, 40(16), p. 2736, June 1 2001. 2. L. Landau and B. Levich, “Dragging of a Liquid by a Moving Plate,” Acta Physicochim URSS, 17(1–2), pp. 42–54, 1942.
DOME COLLAPSE PRESSURE
An optical dome will collapse when the pressure equals¹

P_collapse = [0.8E/√(1 – ν²)][(R_o – R_i)/R_o]²

where P_collapse = pressure at which the dome will collapse
      E = Young's elastic modulus of the material of the dome
      ν = Poisson's ratio for the material
      R_o = outer radius
      R_i = inner radius

This applies if the pressure loading is uniform (such as with aerodynamic loading) on the projected area. For thin domes, the stress in the dome wall simplifies to

Stress = PR/(2h)

where P = magnitude of the uniform loading
      R = radius of the dome
      h = thickness of the dome
Discussion
Frequently, sensors require a dome or window through which to view. This is usually true for sensors placed on tactical air platforms, underwater, in missile seekers, and behind windows as sometimes employed in space for protection. A dome will buckle when the pressure matches the above equation. When properly designed and seated, domes can survive extreme pressures. One seeker's dome routinely survives 11,000 g and 26 MPa. The fact that a dome shape is very strong can be derived from the following argument. First, the Poisson's ratio of most materials is much less than unity, so the radical in the denominator can be safely ignored. Next, for a typical dome (e.g., 50 mm in diameter and 3 mm thick), the ratio of radii is about 10⁻³. This means that the buckling pressure is proportional to Young's modulus times a number that is about 10⁻³. For many optical materials, Young's modulus is quite high. For Pyrex®, for example, it is 63 × 10⁹ newtons/m² or about 10⁷ psi. Thus, the collapse pressure is about 10³ psi. In the ocean, this occurs at a depth of about 2304 ft. One of the authors of this book (Friedman) can remember calibrating a sonar array in the Atlantic Ocean by dropping light bulbs into the water, each weighted down by a brick. They were so alike that the implosion depth could be used to determine the relative positions of the many hydrophones being used in the array. This rule was developed empirically from extensive model testing and is a simplification of more complex equations. A more typical value for the constant in the equation (derived from structures texts) is 1.73 rather than 0.8. The equation we have provided is on the conservative side. Reference 2 points out that, for flat windows, one must be careful to add a safety factor. Rules exist for the design of flat optical windows that use this measure but, ". . . published figures for apparent elastic limit, flexural strength, or rupture modulus may be used, but it should be realized that these three terms relate to different methods of test. . . . A conservative safety factor should always be applied to the minimum calculated thickness therefore."²
The reference suggests the following formula for a circular window, avoiding plastic deformation, the minimum design thickness being indicated by

t_min = K√(pD²/S)

where K = 1.06 for an unclamped window and 0.866 for a clamped window
      D = unsupported diameter
      S = apparent elastic limit (This is usually defined as the stress beyond which permanent deformation is induced.)
      p = pressure differential

We take this opportunity to illustrate how the pressure scale using pascals is derived. The pressure of one atmosphere is measured in different units as enumerated below:

pounds per square inch (lb/in²)    14.7 psi
inches (in) of mercury             29.9213
millimeters (mm) of mercury        760
millibars (mbar)                   1013.240
Converted to pascals, we get 101.324 kPa, because 1 atm equals 760 mm of mercury, which has a density of 13.595 g/cc at 0°C. Therefore, the pressure exerted by this column is

0.76 m × 13,595 kg·m⁻³ × 9.80665 m·s⁻² = 101,324 Pa (1 Pa = 1 N/m² = 1 kg·m⁻¹·s⁻²) = 101.324 kPa
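A minimal Python sketch of the collapse-pressure estimate follows; the dome dimensions and the material constants in the example call are illustrative assumptions, not values taken from the references.

import math

def dome_collapse_pressure(E, nu, r_outer, r_inner):
    # P_collapse = 0.8 E / sqrt(1 - nu^2) * ((Ro - Ri)/Ro)^2
    return 0.8 * E / math.sqrt(1.0 - nu**2) * ((r_outer - r_inner) / r_outer) ** 2

# Illustrative 50-mm-radius dome, 3 mm thick, E = 70 GPa, nu = 0.2
P = dome_collapse_pressure(E=70e9, nu=0.2, r_outer=0.050, r_inner=0.047)
print(P / 1e6, "MPa")   # about 200 MPa for these assumed values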
References 1. D. Vukobratovich, “Optomechanical System Design,” in Vol. 4, Electro Optical Systems Design, Analysis and Testing, M. Dudzik, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 127, 1993. 2. http://www.crystran.co.uk/optics.htm, 2003.
FIGURE CHANGE OF METAL MIRRORS Metal mirrors change size and figure. The figure can be expected to change at a rate of about one wave per year. The higher the melting temperature, the better the stability.
Discussion
For IR and laser applications, mirrors frequently are made from aluminum, beryllium, molybdenum alloys, or copper. Metal mirrors have many advantages and some disadvantages as compared with ceramic and glass mirrors. Among the disadvantages are the potential for corrosion, difficulty in achieving a low-scatter surface, bimetallic corrosion considerations, and long-term dimensional instability. Metal mirrors frequently change figure, especially when cycled in temperature. In general, metal mirrors can be assumed to change figure at a rate of about 1 wave per year (using the typical wavelength of 632 nm as a wavelength standard). Pursuant to the above rule, one should allow for a change in figure of 600 nm per year to be conservative. Nevertheless, several fielded systems have noted a much smaller change after a year or so. Additionally, there seems to be an unproved correlation between the melting (or transition) temperature of a metal and its stability. Andrade's beta law states that the change in size of a metal is proportional to the time raised to a power, or

ε(t) = βt^m
where ε(t) = creep strain
      β = a constant dependent on the material, stress, and temperature
      t = time
      m = another constant (usually between 0.25 and 0.4) with a typical value of 0.33

One thing to keep in mind is that creep in mirrors intended to have a static figure is a bad thing, whereas the advent of adaptive optics has allowed more tolerance of these types of instabilities. The rate of change is very low, and a modern telescope that is equipped with wavefront sensing and control for other reasons (such as correcting for atmospheric distortions) will eliminate the effect of these changes.
References 1. D. Vukobratovich, “Optomechanical System Design,” Vol. 4, Electro Optical Systems Design, Analysis and Testing, M. Dudzik, Ed., of The Infrared and Electro-optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 165–166, 1993. 2. E. Benn and W. Walker, “Effect of Microstructure on the Dimensional Stability of Cast Aluminum Substrates,” Applied Optics, 12(5), pp. 976–978, May 1973. 3. L. Noethe et al., “Optical Wavefront Analysis of Thermally Cycled 500 nm Metallic Mirrors,” Proceedings of the IAU Colloquium No. 79: Very Large Telescopes, April 9–12, 1984. 4. F. Holden, A Review of Dimensional Instability in Metals, NTIS AD602379, 1964. 5. C. Marshall, R. Maringer, and F. Cepollina, “Dimensional Stability and Micromechanical Properties of Materials for Use in an Orbiting Astronomical Observatory,” AIAA paper # 72-325, 1972.
MASS IS PROPORTIONAL TO ELEMENT SIZE CUBED
The difference in mass between two optical elements (or telescope assemblies) of different sizes is usually proportional to the ratio of their diameters raised to a power between 2 and 3. Mathematically,

M₁ = M₂(D₁/D₂)ⁿ

where M₁ = unknown mass of optic or telescope
      M₂ = known mass of similar optic or telescope
      D₁ = diameter of unknown-mass optic or telescope aperture
      D₂ = diameter of known-mass optic or telescope aperture
      n = a constant between 2 and 3, usually 2.7
Discussion
This rule is based on empirical observations of the present state of the art. An optical element of the same material and figure will generally need to be thicker as its diameter is scaled (to maintain a given surface), so the volume follows the same rule. To use this rule, the telescope's diameters should be within a factor of 3 of each other, the optics must be of the same type and material (e.g., two on-axis reflective Cassegrains made of aluminum), the optics should be of the same prescription (or close), mechanical and environmental specifications must be similar, and off-axis stray light rejection specifications should be comparable. This rule is commonly valid for optics from 1 cm to several meters in diameter. Just keep in mind that the rule applies only if the mirror diameters are within about a factor of 3 of one another. Knowing this rule can help with quick system trade-offs comparing the mass impact of changing the optics size and estimating whether a given optic requires advanced lightweighting techniques (hence lots of bucks) to meet mass goals. The mass of a given mirror or lens depends on its material, density, volume, optical prescription, required strength, and the lightweighting techniques applied. An estimate of the mass of an unknown optical element can be made based on the known mass of a similar element. Telescope masses usually track the mass of the optical elements linearly, so this rule can also approximately hold for entire telescope assemblies. Generally, the diameter-to-thickness ratio for normal optical elements is 6:1 to 10:1.
Vukobratovich suggests that the exponent n is equal to 2.92. He states that a state-of-the-art lightweight mirror mass can be estimated by W = 53D^2.67, where W is the mass in kilograms and D is the diameter in meters. For example, assume that one wishes to know the weight impact of increasing the aperture diameter from 10 cm to 30 cm. Let's say a 10-cm diameter silicon carbide (SiC) mirror of a special lightweight design that weighs 20 grams is used for scaling. Therefore, the desired 30-cm mirror would weigh approximately (20)(30)^2.7 divided by (10)^2.7, or 388 g. Thus, one can estimate the weight of the larger mirror to be about 400 g.
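The scaling itself is a one-liner; the short Python sketch below simply reproduces the 10-cm-to-30-cm example. The function name and the default exponent are ours.

def scaled_mass(known_mass, known_diameter, new_diameter, n=2.7):
    # M1 = M2 * (D1/D2)^n, with n typically between 2 and 3
    return known_mass * (new_diameter / known_diameter) ** n

# 20-g, 10-cm lightweight SiC mirror scaled to 30 cm
print(scaled_mass(20.0, 10.0, 30.0))   # about 388 g

# Vukobratovich's state-of-the-art estimate, W = 53*D^2.67 (D in meters, W in kg)
print(53.0 * 0.30 ** 2.67)             # mass of a 30-cm mirror of ordinary lightweight design, in kg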
References 1. J. Miller, Principles of Infrared Technology, Kluwer New York, pp. 88–91, 1994. 2. D. Vukobratovich, “Optical Mechanical Systems Design,” Vol. 4, Electro Optical Systems Design, Analysis and Testing, M. Dudzik, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 159–160, 1993.
MECHANICAL STABILITY RULES
■ To avoid dimensional instability, a material should be stressed at no more than one-half its endurance limit.
■ Stress should be kept at less than or equal to a material's microyield stress or precision elastic limit. (Both of these are the amount of stress required to produce a 1-ppm permanent strain in a material.)
■ Glass and ceramics are dimensionally unstable to about 1 part in 10⁷ per year at 300 K.
■ Although lightweight and stiff, common epoxies and graphite structures have dimensional instability that varies with humidity.
Discussion
The above rules are just the very tip of the iceberg of issues related to the mechanical stability of materials used to create optics and the structures used with them. Among the most critical issues to be addressed in the design of an optical system is the coefficient of thermal expansion (CTE). A great deal of difficulty can be avoided by matching the CTE of the components in the system, thereby making it athermal. This condition ensures that the inevitable thermal changes to be encountered by the system will have a minimal impact on performance. Among the modern materials used for optical applications, graphite-reinforced plastic (GRP) has gained great favor. It does not solve all problems, however. For example, because the material is designed for a particular application, it is expensive to use. In addition, the designs usually do not have the same CTE in all directions. Moreover, the best designs have nearly zero CTE at only one temperature, which limits the applicability of the design to a variety of environments. GRP is very popular in spacecraft designs where temperature and temperature gradients can be predicted and managed. At the same time, the designer must be aware of the gradual aging of materials that might result in changes in size of critical components. For example, a GRP optical bench would likely get smaller after deployment in space as trapped water creeps out. Not only would the change in structural size have an impact on the optical performance, but the water (and other evolutes) can condense on cold optical surfaces. On the other hand, GRP has a propensity to uptake materials associated with the cleaning process. This is quantified by the coefficient of moisture expansion.
Of course, GRP is not the only material that the optical designer encounters. Each has its own challenges but, on the whole, the art involved in designing optical and optomechanical materials continues to improve. Among the exciting recent advancements is silicon carbide, a material with an isotropic and low CTE, as well as high thermal conductivity, which can be a key factor in managing the effects of thermal gradients. Similarly, beryllium is becoming more popular as both an optical and structural material, although environmental and safety issues have limited the number of firms willing and able to work with it. Alloys of aluminum and beryllium have many desirable properties and are much less hazardous to process. Finally, we should recognize that advancements in other areas are opening up the range of candidate materials for critical optical components. For example, the evolution of both metallurgy and adaptive optics now makes it possible to reconsider aluminum as a mirror substrate. Although used early in the recent history of large ground optics, these mirrors fell from favor because of the constant "creep" of the aluminum material. The newly minted ability of actuators to control the shape of mirrors, and the complementary capability to manage the wavefront, allow this material to be reconsidered. It has some distinct advantages over typical glass and ceramic optical materials. The metal mirror is more durable and has much higher thermal conductivity. The latter advantage is important in designing systems in which thermal gradients and their effects are important. The European Southern Observatory has been active in developing the concept of aluminum mirrors for large ground observatories.
Reference 1. Daniel Vukobratovich, SPIE Course Notes, SPIE Press, Bellingham, WA, 2000.
MIRROR SUPPORT CRITERIA
The minimum number of mirror support points needed to control self-weight deflections of an optical element as a function of the permissible peak-to-valley deformation is¹

N = (1.5r²/t)√[ρg/(Eδ)]

where N = minimum number of support points
      r = mirror radius
      E = mirror modulus of elasticity (Young's modulus)
      δ = allowable peak-to-valley distortion (linear measure)
      ρ = mirror material density
      t = mirror thickness
      g = acceleration of gravity
Any consistent set of units may be used.
Discussion It is always best to support a mirror at as many support points as possible. However, design constraints frequently lead to simple and few supports. Hall2 derived the above relationship to estimate the number of support points needed to prevent self-weight deflections larger than a specific allowable peak-to-valley deflection. Yoder1 reports that this has proved satisfactory for mirrors ranging from 1 m in diameter and 10 cm thick to 2.6 m in diameter and 30 cm thick. The above rule can be very useful in determining the size and design of a test fixture for mirror support, the size and design of a mirror mount, and the number of supports for a
polishing fixture (Don’t forget to include the extra weight of the polishing tools and auxiliary weights.) As an example, consider a Pyrex® mirror of 1 m in diameter with an allowable peak-tovalley distortion of 1 µm and mirror thickness of 100 mm. Properties of Pyrex were shown previously in Table 10.2. 2
N = ( 1.5r /t )( ρg/Eδ )
1⁄2
2
( 1.5 )( 0.5 ) 2230 × 9.8 = -------------------------- -------------------------------------= 2.2 9 –6 0.1 ( 63 × 10 )( 10 )
We calculate that three supports should be able to provide proper mounting for this mirror. Note that changing the requirement for sag to about λ/25 (about 1/25th µm) will increase the number of mounts from 3 to 15. Bely³ provides a description of the optimal location for a set of mirror mounts when three mounting points are used. In this case, the largest deflection anywhere on the mirror is found to be

δ = βqa⁴/(Et³)

The weight per unit area is q (which is also equal to ρtg). This equation applies when the mirror is a disk of uniform thickness and is smaller than about 30 cm. As can be seen using the equations above, a mirror much bigger than 30 cm will have an unacceptable sag at its center, regardless of the number of supports that are provided. The minimum value of β is found to be about 0.3 when the supports are located at about 2/3 of the radius of the disk. Any book on stress and strain shows that a mirror constrained around its entire edge has a β value of 3/16, which is about one-half of what occurs for the three-mount support system. Finally, using the equation in the rule for the case of three mounting points produces a value for β of 2.25/9, which is close to the value of 0.3 provided by Bely.
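A short Python sketch of the support-point estimate (our own illustration, repeating the 1-m Pyrex example) follows; note that the count should be rounded up to the next whole support.

import math

def support_points(radius, thickness, rho, E, delta, g=9.8):
    # N = (1.5 r^2 / t) * sqrt(rho g / (E delta)), in consistent SI units
    return 1.5 * radius**2 / thickness * math.sqrt(rho * g / (E * delta))

# 1-m-diameter, 100-mm-thick Pyrex mirror, 1-um allowable peak-to-valley sag
N = support_points(radius=0.5, thickness=0.1, rho=2230.0, E=63e9, delta=1e-6)
print(N, math.ceil(N))     # about 2.2, so three supports

# Tightening the sag requirement raises the count roughly as 1/sqrt(delta)
N_tight = support_points(radius=0.5, thickness=0.1, rho=2230.0, E=63e9, delta=0.04e-6)
print(math.ceil(N_tight))  # of order 10 to 15 supports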
References 1. P. R. Yoder, Jr., Opto-Mechanical Systems Design, 2nd ed., Marcel Dekker, New York, p. 310, 1986. 2. H. Hall, “Problems in Adapting Small Mirror Fabrication Techniques to Large Mirrors,” in Proceedings of the Optical Telescope Workshop, NASA Report SP-233, p. 149, 1970. 3. P. Bely, Ed., The Design and Construction of Large Optical Telescopes, Springer, New York, p. 219, 2003.
NATURAL FREQUENCY OF A DEFORMABLE MIRROR
The theory of vibrations of plates clamped at the edge defines the first bending mode (also called the fundamental or natural frequency) as

f = (3.196²/a²)√[D/(ρh)] = (10.214h/a²)√{E/[12(1 – ν²)ρ]}

This result derives from the fact that the first frequency of a plate is derived from the roots of the Bessel function, J₁(x). The first root is at x = 3.196. D is the typical definition used in structures discussions, Eh³/[12(1 – ν²)], a = radius of the mirror, h = mirror thickness, E = Young's modulus, ν = Poisson's ratio, and ρ is the density.
Discussion
Reference 1 points out that this result can be applied to a continuous deformable mirror such that the fundamental frequency of the part of the mirror actuated by a single actuator is

f = [10.214h/(2πR²)]√{E/[12(1 – ν²)ρ]}

where R = spacing between actuators

That is, each area of the deformable mirror associated with a single actuator can be treated as an independent plate with a radius of 2R for the purpose of determining its fundamental frequency. Because corrections can be required up to hundreds of hertz, the natural frequency of a deformable membrane (DM) should be no smaller than a few kilohertz. In almost all cases, the control loop bandwidth will determine the maximum correction rate, but the mirror's mechanical design, as captured in the discussion above, should not be ignored. The advent of adaptive optics demands that all who deal with the atmosphere (whether it involves looking at the stars, imaging through the atmosphere, or performing optical communications) understand the limitations for such corrections. One element of the problem is to determine the bandwidth at which the AO system must function (see the Greenwood frequency discussion on p. 51) and to be sure that the mechanical properties of the deformable mirror meet those requirements. This rule allows one to quickly determine whether the design of the mirror surface of the DM is consistent with the bandwidth requirements imposed by the atmosphere. Typically, the Greenwood frequency will be around 200 Hz, so a natural frequency for the DM in the range of 2000 Hz ensures that the response of the mirror will not limit system performance. As an example, we assume that the membrane is made of aluminum 1 mm thick. The Young's elastic modulus of aluminum is 70 × 10⁹ kg/(m·s²). The density of aluminum is 2.7 × 10⁻³ kg/mm³, and its Poisson's ratio is 0.33. If the spacing between actuators is 5 mm, we get a natural frequency of 3200 Hz.
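For design trades it is convenient to script the facesheet-resonance estimate. The Python sketch below is our own illustration of the formula; the inputs in the example call (a 0.2-mm glass facesheet with 7-mm actuator spacing and generic handbook-style material constants) are assumptions the reader should replace with project values, and the result scales as h/R².

import math

def facesheet_frequency(h, R, E, rho, nu):
    # f = 10.214 h / (2 pi R^2) * sqrt(E / (12 (1 - nu^2) rho)), consistent SI units
    return 10.214 * h / (2.0 * math.pi * R**2) * math.sqrt(E / (12.0 * (1.0 - nu**2) * rho))

# Assumed example: 0.2-mm glass facesheet, 7-mm actuator spacing
f = facesheet_frequency(h=0.2e-3, R=7e-3, E=70e9, rho=2500.0, nu=0.2)
print(f)   # roughly 10 kHz for these assumptions, comfortably above a ~200-Hz Greenwood frequency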
Reference 1. R. Tyson, Principles of Adaptive Optics, Academic Press, New York, p. 187, 1991.
PRESSURE ON A PLANE WINDOW
To survive a pressure difference, a plane window should have the following minimum thickness:¹

t = d/{2√[8σ_f/(3∆P(x + ν)SF)]}

where
t = required minimum thickness of the window d = diameter of the window
σf = fracture stress of the window material ∆P = axial pressure differential on the window x = a constant depending on the mounting for the window (Vukobratovich suggests 3 for a simply supported window and 1 for a clamped window.) ν = Poisson’s ratio for the window material SF = safety factor
Discussion
A window's thickness is usually driven by a desire that it survive aerodynamic or hydrostatic pressure as well as minor impacts (e.g., bugs, micrometeorites, and fish), each of which causes a pressure differential load across its surface. When a pressure is applied to a window that is supported at its edges, the center bows inward, forming a weak negative meniscus lens. If the pressure is strong enough, the bowing will be great enough to break the window. This rule is based on pressure dynamics and is backed up by measurements. It is based on the mechanical engineering assertion that the tensile stress should not exceed the fracture stress of the material divided by a safety factor. It applies to flat (unpowered) circular windows only, with mountings at the edge. Other window shapes require modification of the equation. This rule assumes brittle materials, as most windows are. This assumes a static pressure differential loading. When stressed by such a pressure differential, the window may not break immediately, but it becomes sensitive to any momentary increase in pressure (e.g., a dragonfly hitting the surface) and may then break. A safety factor should be applied. Often, large safety factors are used (e.g., 10 or 20). The equation allows a good estimate for the thickness based on a constant pressure loading (e.g., aerodynamic loads and underwater loads). However, for high-optical-quality instruments, the distortion from the bending of the window (into a meniscus form) may be the driving constraint. The following equation estimates the expected optical path difference for a window undergoing uniform loading from a difference in pressure. Typically, the allowable optical path difference (OPD) depends on the wavelength and MTF of the system and is usually a few "waves."

OPD = 9 × 10⁻³(n – 1)∆P²d⁶/(E²h⁵)
where OPD = maximum allowable optical path difference
      n = index of refraction of the window material
      ∆P = axial pressure differential across the window
      d = window diameter
      E = elastic (Young's) modulus of the material
      h = window thickness

The actual deflection of the center of the window that occurs when it is exposed to a pressure differential is expressed below.² For circular disks freely supported at the edge, the center deflection is

δ = [3ρga⁴/(16Et²)](1 – ν)(5 + ν)

where P = pressure on disk (This is also computed as mg/πa² for the self-deflection of the disk.)
      t = thickness
      ν = Poisson's ratio
      E = Young's modulus
      a = radius of window
      ρ = density
      g = acceleration of gravity

The equation shows that the deflection depends on the ratio of the fourth power of the radius to the second power of the thickness. If the edges are constrained, a different equation pertains, and the center deflection is

δ = 3ρga⁴/(16Et²)

Another rule in this chapter provides more details on deflections of circular plates. Reference 2 also provides an estimate for the deflection at the center of a rectangular plate made of a material with a Poisson's ratio of about 0.3,

w_max = w(L_x/2, L_y/2) = c₁P[Min(L_x, L_y)]⁴/(Et³)
where c₁ is as computed from Table 10.3. The terminology Min(L_x, L_y) means that the smaller of the dimensions of the plate should be chosen, then raised to the fourth power. Note that the values in the table depend on the maximum dimension, whereas the equation uses the minimum dimension.

TABLE 10.3
Maximum of L_x/L_y or L_y/L_x    c₁        Shape
1                                0.0138    Square
1.2                              0.0188    Rectangle
1.4                              0.0226    Rectangle
1.6                              0.0251    Rectangle
1.8                              0.0267    Rectangle
2                                0.0277    Rectangle
∞                                0.1547    Slit
It is interesting to compare the performance of a square plate that is similar in dimension to the circular cases mentioned above. Assume that we have a square plate of dimension 2a. With the following algebra, one can show that the central deflection is about 18 percent more than the deflection that occurs for a circular plate of the same maximum dimension:

1.18 × 3ρga⁴/(16Et²)
This is not surprising because, in the circular case, the center is a meters from any edge, whereas, in a square of the same dimension, the corners are more than a meters from the center. The above equations apply only if the deflection is less than one-half the thickness of the window.
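The thickness and OPD relations above are easy to script. The Python sketch below is our own illustration; the example inputs (a 100-mm fused-silica-like window holding roughly one atmosphere with a safety factor of 4) are assumptions rather than values from the references.

import math

def min_window_thickness(d, sigma_f, dP, x, nu, SF):
    # t = d / (2 sqrt(8 sigma_f / (3 dP (x + nu) SF))); x = 3 simply supported, 1 clamped
    return d / (2.0 * math.sqrt(8.0 * sigma_f / (3.0 * dP * (x + nu) * SF)))

def window_opd(n, dP, d, E, h):
    # OPD = 9e-3 (n - 1) dP^2 d^6 / (E^2 h^5)
    return 9e-3 * (n - 1.0) * dP**2 * d**6 / (E**2 * h**5)

# Assumed example: 100-mm window, 50-MPa design strength, 1-atm differential, simply supported
d, sigma_f, dP, x, nu, SF = 0.100, 50e6, 101e3, 3, 0.17, 4
t = min_window_thickness(d, sigma_f, dP, x, nu, SF)
print(t * 1e3, "mm")                                     # a few millimeters
print(window_opd(n=1.46, dP=dP, d=d, E=72e9, h=t), "m")  # compare against the system OPD budget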
References 1. D. Vukobratovich, “Optomechanical System Design,” Vol. 4, Electro Optical Systems Design, Analysis and Testing, M. Dudzik, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 123 to 125, 1993. 2. “Guidelines for Estimating Thickness Required for Beryllium X-Ray Windows,” BrushWellman, 1982. 3. http://www.efunda.com/formulae/solid_mechanics/plates/casestudy_list.cfm#cpS, 2003.
PROPERTIES OF FUSED SILICA
Fused silica is a widely used optical material. Its index of refraction can be estimated by¹

n² – 1 = 0.6961663λ²/[λ² – (0.0684043)²] + 0.4079426λ²/[λ² – (0.1162414)²] + 0.8974794λ²/[λ² – (9.896161)²]
where wavelength is given in micrometers for the range from 180 to 2100 nm.
Discussion
Other important properties of fused silica are found in a table in Appendix A and in Ref. 2. In addition, a reader who has an interest in finding the index of refraction of optical materials should visit the web site www.luxpop.com. It provides a wealth of information on all common optical materials and includes an index of refraction calculator that includes both wavelength and temperature. It also provides references to allow additional research. Of particular interest is silicon. Silicon is a very important IR optical material. Its index of refraction, n, and absorption coefficient, k, can be approximated as a function of wavelength as³

n = n₀ + A₁exp{–[(λ – λ₀)/t₁]} + A₂exp{–[(λ – λ₀)/t₂]}
k = A₁₁exp{–[(λ – λ₀₀)/t₁₁]} + A₂₂exp{–[(λ – λ₀₀)/t₂₂]}

where n₀ = 3.46537
      A₁ = 1.230
      A₂ = 1.071
      t₁ = 46.38
      t₂ = 258.8
      λ₀ = 392.0322
      A₁₁ = 0.1219
      A₂₂ = 0.2262
      t₁₁ = 17.28
      t₂₂ = 403.491
      λ₀₀ = 403.591
In the above formula, wavelengths are expressed in nanometers. The value of k is applicable only for wavelengths above 900 nm, where Si becomes transparent. Silicon has a refractive index close to 3.50, which is quite high. A less complex expression for the index of refraction of silicon at 293 K is⁴

n² = ε + A/λ² + Bλ₁²/(λ² – λ₁²)

where λ₁ = 1.1071 µm
      ε = 11.6858
      A = 0.939816
      B = 8.10461 × 10⁻³

Finally, the reader should consult the "Cauchy Equation" rule (p. 198) for more details on creating estimates of the index of refraction.
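A small Python sketch (ours) makes it easy to evaluate the fused-silica fit and the simpler silicon fit side by side; the wavelengths in the example calls are arbitrary choices within each formula's stated range.

import math

def fused_silica_index(lam_um):
    # Three-term Sellmeier fit for fused silica, lambda in micrometers (180 to 2100 nm)
    l2 = lam_um**2
    n2 = 1.0 + 0.6961663 * l2 / (l2 - 0.0684043**2) \
             + 0.4079426 * l2 / (l2 - 0.1162414**2) \
             + 0.8974794 * l2 / (l2 - 9.896161**2)
    return math.sqrt(n2)

def silicon_index(lam_um):
    # "Less complex" silicon fit at 293 K: n^2 = eps + A/lambda^2 + B*lam1^2/(lambda^2 - lam1^2)
    lam1 = 1.1071
    n2 = 11.6858 + 0.939816 / lam_um**2 + 8.10461e-3 * lam1**2 / (lam_um**2 - lam1**2)
    return math.sqrt(n2)

print(fused_silica_index(0.5876))   # about 1.458 at the helium d line
print(silicon_index(3.0))           # about 3.43 in the midwave IR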
References 1. I. Malitson, “Interspecimen Comparison of the Refractive Index of Fused Silica,” Journal of the OSA, 55(10), pp. 1205–1209, October 1965. 2. “Properties Of Optical Materials,” Oriel Corp., 2002. 3. S. Kumar et al., “Near-Infrared Bandpass Filters from Si/SiO 2 Multilayer Coatings,” Optical Engineering, 38(2), pp. 368–380, February 1999. 4. http://www.cyber.rdg.ac.uk/ISP/infrared/technical_data/infrared_materials/si_dispersion.htm, 2003.
SPIN-CAST MIRRORS
The focal length (f) of a parabolic mirror formed by spinning a fluid is described by Refs. 1 and 2 as

f = (T/2.838)²

where T = the period of revolution (seconds per revolution)
Discussion
This result derives from the following analysis. The focal length is governed by the equation

f = g/(2ω²)

where f = focal length
      g = acceleration of gravity
      ω = rotation rate in radians per second

The period of rotation T and the focal length are then related by

T = 2π/√(g/2f) = 2.838√f
It is also useful to know the area of the parabolic surface so that the right amount of material can be spread over the optic. This is given by¹

SA = (8/3)πf²{[1 + r²/(4f²)]^(3/2) – 1}
where SA = surface area
      f = focal length
      r = radius of the mirror

Use of this technique to form a large mirror in an astronomical telescope dates back more than 100 years. Newton first described the use of a rotating liquid to form the paraboloid primary of a telescope, and it was later contemplated by Brewster, Foucault, Buchan, and Perkins. Several descriptive papers and attempts were made in the nineteenth century, and the first successful imagery was probably obtained by Henry Skey (1836–1914) at Dunedin Observatory in Otago, New Zealand. Skey used both electric and water-driven motors to rotate a 35-cm vat of mercury to form a paraboloid primary mirror, and he could alter focal length by changing angular velocity. Robert Wood (1868–1955) published three papers in 1909 (in Astrophysical Journal and Scientific American) describing a rotating liquid mirror that he used in experiments.⁴ Recently, as listed below, a number of other applications have been found for this method of temporarily forming a concave shape. In addition, spinning the molten material that will become a mirror leads to some large production advantages. The University of Arizona's Optical Science Center employed this technique to form "near-net-shape" large, monolithic parabolic surfaces in molten glass. By spinning molten glass during cooling, a parabola is approximated. This reduces the grinding time, especially for large optics. The following list illustrates the number of locations using this technique:³
■ NASA Orbital Debris Observatory—3.0 m
■ Liquid Mirror Telescopes at the University of British Columbia (UBC)—2.7 m (currently building a 5.1-m unit)
■ LMs at Université Laval—2.5 m (currently building a 3.6-m unit)
■ HIPAS LIDAR near Fairbanks (UCLA)—2.7 m
■ Purple Crow LIDAR at the University of Western Ontario—2.7 m
■ Liquid Mirrors at Centre Spatial de Liège—1.4 m
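Returning to the focal-length and surface-area relations above, the following Python sketch (ours) converts a rotation period into a focal length and evaluates the paraboloid's area; the 8.5-s period and 3-m radius in the example are illustrative assumptions for a large liquid mirror.

import math

def spin_focal_length(period_s, g=9.80665):
    # f = g / (2 w^2) with w = 2 pi / T, equivalent to f = (T/2.838)^2
    omega = 2.0 * math.pi / period_s
    return g / (2.0 * omega**2)

def paraboloid_area(focal_length, radius):
    # SA = (8/3) pi f^2 [ (1 + r^2/(4 f^2))^(3/2) - 1 ]
    return 8.0 / 3.0 * math.pi * focal_length**2 * ((1.0 + radius**2 / (4.0 * focal_length**2))**1.5 - 1.0)

f = spin_focal_length(8.5)          # about 9 m of focal length for an 8.5-s spin
print(f, paraboloid_area(f, 3.0))   # the area is only slightly larger than the flat pi*r^2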
References 1. R. Richardson and P. Griffiths, “Generation of Front-Surface Low-Mass Epoxy-Composite Mirrors by Spin-Casting,” Optical Engineering, 40(2), pp. 252–258, February 2001. 2. A. Meinel and M. Meinel, “Inflatable Membrane Mirrors for Optical Passband Imagery,” Optical Engineering, 39(2), pp. 541–550, February 2000. 3. J. Magnuson, D. Watson, and R. States, “Space Based Liquid Ring Mirror Telescope (SBLRMT),” presented at the Ultra Lightweight Space Optics Challenge Workshop, sponsored by JPL, March 24–25, 1999. 4. http://home.europa.com/~telscope/binotele.htm, 2003.
Chapter 11
Miscellaneous
There are always a few “orphans” in any large group. That certainly applies here. This collection of rules just did not fit into any of the other chapters of this book in a natural way, so they ended up here. These rules are not to be ignored, however. Some of the most universal rules in the book appear here. For example, this is where we address the issues of estimating the time it takes light to go from place to place, how to deal with statistical issues, Moore’s law (and Murphy’s law), the definition of solid angle, methods of estimating temperature during a field test, photolithographic yield, and so forth. Many of these rules can be used on field trials to approximate the environmental conditions, such as the Crickets as Thermometers, Distance to Horizon, and Speed of Light rules.
AMDAHL'S AND GUSTAFSON'S LAWS FOR PROCESSING SPEEDUP
Amdahl's law for estimating the improvement in processing speed by employing parallel processors is

S_f = n/[1 + (n – 1)α]

where S_f = speedup factor
      n = number of processors
      α = fraction of the code that is sequential in nature
Discussion
Complex image processing problems can require massive processing time. Frequently, the suggested solution is to apply more than one processor and operate them in parallel. The measure of performance is called speedup. Speedup is defined as

(Time required for one processor to complete the task)/(Time required when n processors complete the task)

The more complete form of speedup is defined for the general case of a task with m operations to be executed with p parallel processors. Here, r₁ is the processing rate for each processor, and q is the fraction of the operations that can be conducted in parallel, leaving 1 – q to be conducted serially.

Speedup = t₁/t_p = [(m/r₁) × 10⁶]/{[mq × 10⁶/(pr₁)] + [m(1 – q) × 10⁶/r₁]} = 1/[q/p + (1 – q)]
If q = 1, then all operations can be done in parallel, and the speedup is p. The reader will easily compute that if q = 0.5, a 32-node computer can complete the task in about half the time of a fully serial solution. In that regime, the number of nodes, p, has little effect on the speedup, whereas more complete parallelization can lead to substantial gains. Amdahl's law is based on a "fixed-load" assumption in which the processing load is fixed regardless of the number of processors utilized. This is applicable to remote sensing, security, and intelligence images in which real-time application is not required and one has the same number of pixels and operations to go through for each image. Conversely, Gustafson's law assumes a fixed time for the processing, regardless of the number of processors applied, and thus allows one to estimate the ability to solve a larger problem or load. Gustafson's law can be written as follows:

S_f = n – α(n – 1)

Gustafson's law applies to image processing of video or real-time signals, where the time is necessarily fixed, and adding more processors allows one to do more processing in that time. This law is often stated as
(Size of problem solved with n processors in unit time)/(Size of processing load handled by one processor in unit time)
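Both laws are trivial to script, and doing so makes the contrast obvious; the short Python sketch below is our own illustration.

def amdahl_speedup(n, alpha):
    # Fixed workload: S = n / (1 + (n - 1) * alpha), alpha = sequential fraction
    return n / (1.0 + (n - 1.0) * alpha)

def gustafson_speedup(n, alpha):
    # Fixed wall-clock time: S = n - alpha * (n - 1)
    return n - alpha * (n - 1.0)

for n in (4, 32, 256):
    # With half of the code sequential, Amdahl saturates near 2 while Gustafson keeps growing
    print(n, round(amdahl_speedup(n, 0.5), 2), round(gustafson_speedup(n, 0.5), 1))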
References 1. D. Sinha and E. Dougherty, “Hardware Architecture for Image Processing,” in Electronic Imaging Technology, E. Dougherty, Ed., SPIE Press, Bellingham, WA, pp. 409–410, 1999. 2. E. Pitman, “High Performance Computing,” http://www.math.buffalo.edu/~pitman/courses/ cor501/HPC1/HPC1.htm, 2003.
ARRHENIUS EQUATION
It is quite common for accelerated testing to make use of the Arrhenius equation, which takes advantage of the effects of temperature. A general form of the equation is¹,²

MTTF = AJ⁻²e^(E_q/kT)

where MTTF = mean time to failure
      A = a constant that is specific to the item being tested (found by calibration with a large sample of items)
      E_q = characterizes the energy of a process (more on this later)
      J = current density in A/m²
      k = Boltzmann's constant
      T = temperature in kelvins
Discussion
By adjusting T or J, the system can be exposed, in a short test, to the equivalent of a lifetime much longer than the same interval at normal operating temperature. This allows accelerated testing of a system to estimate its inherent reliability. Examples of uses of the Arrhenius equation abound on the World Wide Web. A quick review will show that many of the mechanisms important to electronics used in photonic applications have an activation energy (E_q) that ranges from about 0.2 to about 1 electron volt (eV). Armed with this information, we can investigate the acceleration in life that results from testing at elevated temperature. To use the equation above for estimating the accelerated life of components, you must use Boltzmann's constant in units of eV/K. This constant is about 1.38 × 10⁻²³ joule/K or 8.6 × 10⁻⁵ eV/K. Figure 11.1 shows that, for the range of activation energies quoted above, the time required to test the reliability of the part can be greatly reduced. In this case, we assume that the conditions of use (current density, for example) are the same in the test as in normal use. For a failure mechanism with an activation energy of 1.1 eV, testing at 380 K accelerates the failure rate by about a factor of 1 million. A part with an average expected life of 1 million hours can be driven to failure in an average of just 1 hour of testing. Figure 11.1 also illustrates the dramatic difference in average life as a function of the activation energy of the failure mechanism. Finally, the figure shows that the reduction in life induced by testing at elevated temperature is larger for larger activation energies. For example, testing at 450 K reduces the life of a part only by a factor of approximately 100 (compared with operation at 270 K) if the activation energy is 0.3 eV, but it is more like a factor of 1 billion for failure modes with an activation energy of 1.1 eV. This is a valuable property, as the higher activation energy case generally has a very large MTTF. Without accelerated testing, determining the life of these parts would take very long time intervals and simultaneous testing of many parts.
FIGURE 11.1 This curve shows that the aging rate of an electronic part can be controlled over many orders of magnitude through manipulation of the operating temperature, with the final effect depending on the activation energy of failure modes.
The reader will benefit from reading existing material2 on how to interpret the failure rates using the techniques of reliability analysis.
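A few lines of Python (ours) reproduce the kind of acceleration factors plotted in Fig. 11.1; the temperatures and activation energies in the loop are simply the values discussed above.

import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann's constant in eV/K

def acceleration_factor(E_a, T_use, T_test):
    # Ratio of MTTF at the use temperature to MTTF at the (hotter) test temperature,
    # holding current density J fixed: exp[(E_a/k)(1/T_use - 1/T_test)]
    return math.exp(E_a / K_BOLTZMANN_EV * (1.0 / T_use - 1.0 / T_test))

for E_a in (0.3, 0.7, 1.1):
    print(E_a, "%.1e" % acceleration_factor(E_a, T_use=270.0, T_test=450.0))
# Low activation energies give modest acceleration; 1.1 eV gives many orders of magnitude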
References 1. P. M. C. De Dobbelaere et al., “Intrinsic Reliability Properties of Polymer Based SolidState Optical Switches,” 1998 National Fiber Optic Engineers Conference. 2. Electronic Industries Alliance, JEDEC Standard JESD74, Early Life Failure Rate Calculation Procedure for Electronic Components, April 2000.
COST OF A PHOTON
A photon from a bulb costs about 10⁻²⁵ dollars; a diode laser photon runs about 10⁻²² dollars.
Discussion The above is based on the contemporary costs to buy and operate a light bulb and semiconductor laser. For example, a 100-W household light bulb costs about $1 and consumes about 100 kW·hr of electricity in its approximately 1000-hr lifetime. It has a tungsten filament that is raised to a temperature between 2000 and 2900 K. If we assume that the tungsten
filament's temperature is 2600 K and that it has 1 cm² of area (with an emissivity of 0.45, which is typical for hot W), it will emit about 2.2 × 10¹⁹ photons per second in the visible bandpass. Infrared and UV photons that are emitted from the filament do not count, as they do not make it through the glass but, instead, rattle around inside and heat up the bulb structure. One thousand hours is 3.6 million seconds, so the bulb will emit about 7.8 × 10²⁵ usable photons in its lifetime at a cost of about $6 to $9 (including electricity costs). Thus, the cost of a bulb-generated photon is about 10⁻²⁵ dollars. Keep in mind that a UV photon is "expensive" in the sense that it contains far more energy than one visible photon. Therefore, if we could get UV photons out of the bulb, we would spend the same money to get fewer items, by a factor of about three. On the other hand, infrared photons are "cheap." In fact, they cost nothing, if you are willing to use the ones emitted by the walls of the lab and ignore the cost of heating the walls. Additionally, a 10-mW diode laser can be purchased for $1.30. It operates for approximately 1 million hours before it fails, but it emits all of its specified light in its given bandpass. At 1.5 µm, a 1-mW laser emits 7.5 × 10¹⁶ photons per second and 7.5 × 10²² photons over its lifetime. Here we note that the cost of operating the diode compares favorably with the cost of operating the bulb. The diode lasts 1000 times longer, but in its life it consumes only 1 kW·h, as opposed to the 100 kW·h used by the bulb. Of course, the applications are different. It's difficult to read by the light of a 10-mW diode. A telecom transmitter laser diode has other costs added to it, such as certification, warranty, and modulation mechanisms. Generally, these produce more than 10 mW and cost much more than $10. The cost of a telecom photon is more like 10⁻²⁰ dollars.
Reference 1. P. Hobbs, Building Electro-Optical Systems: Making It All Work, John Wiley & Sons, New York, pp. 64, 68, 2000.
CRICKETS AS THERMOMETERS Tree crickets provide a convenient way to derive the temperature. Count the number of chirps in 15 sec, add 37, and you have the temperature in Fahrenheit.
Discussion
This rule and its variants derive from Dolbear,¹ who first proposed the relationship in 1857, 1896, or 1897, depending on the reference you consult. The reason for this behavior of crickets seems to derive from the fact that crickets are cold blooded. In warmer temperatures, their metabolism speeds up. Male crickets seem to constantly emit sounds to attract mates or scare away competitors. It is likely that other insects have the same reaction to temperature, but we don't know about it, because they don't emit regular sounds. Some authors suggest that the best results come from the snowy tree cricket. This insect is related to katydids. Others suggest alternative algorithms that include
■ Count the clicks for 14 sec, then add 40.
■ Count for 24 sec, then add 35.
■ Count for 15 sec, then add 40.
■ Count for 60 sec, then subtract 40, divide by 4, and add 50.
Reference 1. A. E. Dolbear, “The Cricket as a Thermometer,” American Naturalist, Vol. 31, pp. 970–971, 1897.
DISTANCE TO HORIZON In statute miles, the distance to the horizon is approximately the square root of your altitude in feet.
Discussion
The nearly correct formula for this effect is as follows:

D = √(2Rh)

where D = distance to the horizon
      R = radius of the Earth
      h = height of the eye of the observer

Obviously, all of the units must match. As an example, consider the case in which all of the units are kilometers. R would then be 6378 km. Then,

D = √(12,756h)

Because h is in kilometers, we can convert to feet by multiplying by 3280, leaving 3.88 inside the radical. This means that the value of D is

D ≈ 2√h

when D is in km and h is in feet. Now consider the case in which D is expressed in miles and h is in feet. In this case,

D = √(2Rh) = √(7927h) = 1.2√h

so, to a good approximation, the distance in miles is equal to the square root of the height of the observer's eyes in feet. Multiplying by 1.2 helps the accuracy. In addition to the geometry, the index of refraction depends on the density, and the density of the atmosphere depends on the altitude. An observer at an altitude may see slightly beyond the geometric horizon. There are two notes of interest resulting from the differential refraction and dispersion of the atmosphere. The angular extent of the Sun is about 0.5°, and the correction of the horizon for refraction can exceed 0.57°. This means that, in some cases, the Sun is still visible after it has geometrically set! Based on the fact that the Sun crosses the sky at about 15°/hr, this result means that the Sun may have been set for as long as 2 min geometrically before it appears to go below the horizon. This effect can be greater for atmospheres on other planets, such as Venus. The dispersion of the refractive angle causes the refraction correction to be different for different colors. The setting Sun dips over the horizon one color at a time, with red being first and blue last.
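A two-line Python check of the rule (ours), with the observer height chosen arbitrarily:

import math

def horizon_miles(height_feet):
    # D = sqrt(2 R h) with R = 3963 miles and h converted from feet; close to 1.2*sqrt(h)
    return math.sqrt(2.0 * 3963.0 * height_feet / 5280.0)

print(horizon_miles(100.0))   # about 12 miles from a 100-ft mast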
LEARNING CURVES Each time a succeeding unit is made, it takes less time to build than did its predecessor. The time can be estimated by a regular decrease every time the number of units made is increased by a factor of two. Mathematically,
Cₙ = C₁N^(log PLR/0.3)

where Cₙ = time it takes to make the nth unit in production
      C₁ = time it takes to make the first unit
      N = production number of the unit
      PLR = percent learning rate in decimal notation
Discussion This rule was initially based on empirical studies of production lines. It has been applied successfully to all kinds of activities. Usually, manufacturing time decreases by a fixed percentage every time production doubles. Also, most variable costs (e.g., raw materials) also follow this rule. The 0.3 in the equation is log 2, because PLR is defined as the improvement experienced when production doubles. If the first unit of a production sensor takes 10 hr to test, and the second takes 9 hr, the PLR is 90 percent, and 0.9 should be used as the PLR. The biggest caution is to ensure that you are applying the correct PLR. The above equation assumes that the learning rate is constant; in fact, they usually are not. Typically, there is a lower (better) PLR in the beginning as manufacturing procedures are refined and obvious and easy corrections to the line are applied. As more and more units are made, the rate usually goes higher (fewer gains). A very mature line may actually experience the opposite effect, wherein it takes longer to make the next unit because of tooling wear and old equipment that keeps breaking. Some electro-optical programs have experienced a negative learning curve! Although the development of learning curves was based on, and calibrated by, examining touch labor, these curves can be applied to any variable cost, such as material, and sometimes even to some traditionally fixed costs such as sustaining engineering or management. Therefore, they can be used to estimate total cost and total price directly by using the first unit price in the above equation. Learning curves are based on the phenomenon that every time you double the numbers of units produced, you will experience a predictable decrease in the time that it takes to produce each one. The learning curves provide a powerful tool to estimate the reduction in costs or time for a production run. Surprisingly, they can be amazingly accurate. The trick is to know what PLR to use. Typically, infrared and complicated multispectral systems experience learning curves in the 90 percent range. Visible cameras and simpler systems and components usually have learning curves in the 80 percent range, and simple mechanical assemblies may even get into the 70 percent range. As an example, let us assume that you develop a product, and it cost $1 million to produce the first prototype after several million dollars of nonrecurring research, development, and engineering (for a production system, these “one-time” costs are to be excluded). Let us also assume that this electro-optical product follows typical learning curves for sensors of about 90 percent and that you are conservative and expect the next unit (the first that you charge the customer) to cost as much as the prototype. A quote for five of these is immediately requested. Table 11.1 shows the presumable cost of producing the first five units. Because your company exists to make money, and it has overhead and marketing expenses, you double the figure and respond with an $8.6 million quote. Now let us say the president of your company calls you in to inquire about the commercial prospects of your hardware. He wants to know what the 1000th unit would cost to produce. You get aggressive with the learning curve for such a large number (assuming the company will invest in automated production facilities) and apply an 80 percent PLR and come up with the following projection:
Cn = C1 N^(log PLR/0.3)

C1000 = ($1,000,000)(1000)^(log 0.8/0.3)

and come up with an estimate for the average unit cost of a mere $110,000.

TABLE 11.1
Production unit number    Cost per unit    Total cost
1                         $1,000,000       $1,000,000
2                         $  900,000       $1,900,000
3                         $  846,206       $2,746,206
4                         $  810,000       $3,556,206
5                         $  782,987       $4,339,193
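The arithmetic is easy to script. A short Python sketch follows as an illustration (it is not from the original text; the function name and loop are arbitrary). It reproduces Table 11.1 for a 90 percent learning curve and gives a 1000th-unit cost close to the $110,000 figure quoted above when an 80 percent learning curve is assumed.

```python
import math

def unit_cost(c1, n, plr):
    """Cost (or time) of the nth unit, given first-unit cost c1 and a
    percent learning rate plr expressed as a decimal (0.9 for 90 percent)."""
    return c1 * n ** (math.log10(plr) / 0.3)   # 0.3 is approximately log10(2)

c1 = 1_000_000.0
running_total = 0.0
for n in range(1, 6):
    cost = unit_cost(c1, n, 0.90)
    running_total += cost
    print(f"unit {n}: ${cost:,.0f}   cumulative ${running_total:,.0f}")

# A more aggressive 80 percent learning curve applied to the 1000th unit
print(f"unit 1000 at 80 percent PLR: ${unit_cost(c1, 1000, 0.80):,.0f}")
```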
MOORE’S LAW
1. The number of transistors on a chip doubles every 18 months.
2. Rule 1 is often paraphrased to read, “The power of a microprocessor (or memory on a chip) doubles every 18 months.”
3. Rule 2 is often paraphrased to read, “Processing power per unit cost doubles every 18 months.”
Discussion This is perhaps the most ubiquitous rule of thumb in this tome. It contains all the aspects that the authors of this text like about any rule of thumb: it is simple, useful, and easy to remember and calculate with a hand calculator in a meeting; it provides insight and intuition; and it is surprisingly accurate. In 1965, Gordon Moore (of Intel) extrapolated from the capability of early chips that the number of transistors on a chip would approximately double each year for the next ten years. He revised this law in 1975 to the doubling of the number of transistors every 18 months. According to the Intel web site,1 “In 26 years, the number of transistors on a chip has increased more than 3,200 times, from 2,300 on the 4004 in 1971 to 7.5 million on the Pentium® II processor.” As an interesting aside, Moore also estimates that more than 100 quadrillion transistors have been manufactured as of 2002.2 Even though the driving forces in this rule include micro- and macroeconomics, the development of capital equipment, chip technology, and even sociology, Moore’s simple extrapolation has proven to be quite accurate. The final paraphrase implies a simple mathematical equation that can relate cost or price drop every 18 months to a given (constant) processing power as

Cy = Ci/e^(0.463y)

where Cy = cost for a given amount of processing at year y
Ci = initial cost (at year zero)
e = mathematical constant 2.7183 . . .
y = difference in years between i and y (initial and projected)
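The last paraphrase is simple enough to check with a few lines of code. The following Python sketch is only an illustration (the function name is arbitrary); it evaluates the cost-decay expression above and confirms that the cost for a fixed amount of processing halves roughly every 18 months.

```python
import math

def relative_cost(years):
    """Fraction of the original cost needed for a fixed amount of
    processing after 'years' have elapsed, per Cy = Ci / e^(0.463 y)."""
    return 1.0 / math.exp(0.463 * years)

for y in (1.5, 3.0, 6.0, 10.0):
    print(f"after {y:4.1f} yr: {relative_cost(y):.3f} of the original cost")
```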
Recently, many individuals have cautioned that, because of the approaching quantum effects and diffractive effects with X-ray photolithography, Moore’s law may slow and even stop when the feature size approaches about 10 nm. “Physicists tell us that Moore’s law will end when the gates that control the flow of information inside a chip become as small as the wavelength of an electron (on the order of 10 nm in silicon), because transmitters then cease to transmit.”2 This feature size is bound to be reached on a commercial level sometime between 2010 and 2020.2 Additionally, Takahashi3 astutely indicates that the barrier to Moore’s law might well result not from miniaturization but from more mundane attributes such as testing, reliability, and the equipment failure rate when going to smaller-scale and newer techniques. He also points out that the delay in the development cycle can be large, and the risks get larger with each increase in density. He observed, “The risk of a six-month delay have grown from 1 to 3 percent a few years ago, with the 250-nm generation, to 15 to 30 percent for the 130-nm generation, according to the design tool maker Synopsys, based on a survey of its design teams.” However, these naysayers are selling human ingenuity short and potentially ignoring the insatiable human need for, and willingness to pay for, information and technology. First, new materials can extend Moore’s law beyond what is predicted, albeit with a massive (trillion dollar) investment in new foundries. Second, quantum effects aren’t necessarily all bad; they can potentially be tamed to extend the “effective feature size” even below the wavelength of an electron with proper quantum computing. Moreover, silicon hasn’t quite lost its edge, as three-dimensional or vertically integrated chips hold the promise of keeping Moore’s law alive for another 20 years or more. This technology is being actively pursued by several companies, including Irvine Sensors, Matrix Semiconductor, Raytheon, SUNY, DRS, RTI, Rockwell Scientific, and Ziptronix. Other techniques (e.g., IBM’s silicon on insulator), novel interconnects, and innovative micropackaging may also allow the effects of Moore’s law to continue for several decades. Moore’s law has held up surprisingly well for several decades, but we may be approaching a turning point. The future rate of growth in the number of transistors per chip is uncertain, but continued expansion is not impossible.
References
1. http://www.intel.com/intel/museum/25anniv/hof/moore.htm, 2003.
2. T. Lee, “Vertical Leap for Microchips,” Scientific American, pp. 52–59, January 2002.
3. D. Takahashi, “Obeying the Law,” Red Herring, pp. 46–47, April 2002.
4. J. Westland, “The Growing Importance of Intangibles: Valuation of Knowledge Assets,” BusinessWeek CFO Forum, 2001.
MURPHY’S LAW If anything can go wrong, it will.
Discussion This rule was developed by Mr. Murphy from painfully acquired experience and has become part of the popular American lexicon. Although a gross generalization, this rule seems to always work. Edward A. Murphy developed this simple concept that will probably be forever associated with his name. His actual job was as an engineer involved with rocket sled programs performed by the Army Air Force in the late 1940s. Most people have seen the distorted faces of the subjects riding in these sleds. In any case, a number of stories exist on how Murphy came to invent his law. One is that the sled was equipped with 16 accelerometers,
each of which could be attached in one of two orientations. In a famous case that could have inspired the rule, all of the accelerometers were installed backward! The original version of the rule was in the following form: “If there are two or more ways to do something, and one of these ways can result in a catastrophe, then someone will do it.” The more common form shown in the rule is sometimes called Finagle’s law of dynamic negatives. This label derives from the science fiction author, Larry Niven, who included both Finagle and Murphy in some of his books. For the purpose of this book and the preservation of a cultural icon, we choose to anoint the rule with Murphy’s name. Corollaries abound. One of our favorites is the Law of maximum pain, which says that, in any system where a range of unpleasant outcomes is possible, the one that will occur is the one that will cause you the most pain. Another is “cleanliness is next to impossible,” a concern for those who work in clean rooms for the aerospace and semiconductor industries. Peter Anthonissen has recently written a book (in Dutch) called “Murphy Was an Optimist,” which seems to have been derived from O’Toole’s commentary that is widely advertised on the ’net.
NOISE RESULTING FROM QUANTIZATION ERROR The effective noise that results from a quantization error is

Qn = LSB/√12

where Qn = standard deviation of the quantization noise
LSB = value of the least significant bit
Discussion Typically, the original analog output from a detector is transformed into a digital stream. This digitization has a finite resolution based on the desired total dynamic range and the number of digital bits employed. For some system designs and circumstances, this can be a serious contributor to noise. The maximum quantization error is one-half of the least significant bit. The LSB is approximately the full-scale range of the analog signal divided by 2^B, where B bits are encoded as the signal is converted to digital form. In formula form,

LSB = q = (V full-scale analog signal)/2^B ≅ (V full-scale analog signal)/(2^B – 1)

A statistical approach is also useful.

Quantization noise variance = σe² = ∫[–q/2, q/2] z² f(z) dz = (1/q) ∫[–q/2, q/2] z² dz = q²/12
In this equation, f(z) is the probability density function of the noise and is equal to 1/q. Note that this approach satisfies the rules of statistics in that
∫[–∞, ∞] f(z) dz = ∫[–q/2, q/2] (1/q) dz = 1
We also immediately see that, for these conditions, the rule is proved. If the least significant bit is small compared to the total noise, then the probability that the error will fall between –LSB/2 and +LSB/2 is roughly constant, and the RMS error reduces to the above relationship. The LSB value is small compared to the total dynamic range of the system. It is only one component of the total noise of the system. The total noise is found by taking the square root of the sum of the squares of the various terms. Keep in mind that the RMS quantization noise can never be smaller than about 29 percent of the LSB. The reader should review a similar rule in Chap. 6, “Detectors.”
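A quick numerical check is easy. The Python sketch below is illustrative only (the bit depth and full-scale voltage are assumed values); it quantizes uniformly distributed samples and shows that the RMS rounding error comes out near LSB/√12, i.e., about 29 percent of the LSB.

```python
import math
import random

full_scale = 1.0                 # assumed full-scale range, volts
bits = 12
lsb = full_scale / 2 ** bits

print("rule of thumb:", lsb / math.sqrt(12))

# Monte Carlo check: quantize random levels and measure the RMS error
random.seed(1)
errors = []
for _ in range(100_000):
    v = random.uniform(0.0, full_scale)
    quantized = round(v / lsb) * lsb
    errors.append(v - quantized)
rms = math.sqrt(sum(e * e for e in errors) / len(errors))
print("simulated RMS:", rms)
```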
NOISE ROOT SUM OF SQUARES When performing a root sum of squares, a rough answer can be had by just taking the largest value. If the separate components are nearly the same (e.g., they vary by no more than 20 percent), then multiply the largest value by 1.4 for the combined figure.
Discussion Independent noise sources can be added as the root sum of squares. Calculating the combined effect of several sources of error in a measurement can be done more easily than most people realize. First, we take note that uncertainty of independent noise sources is computed as the root of the sum of their squares, not by adding them. That is, the total variance in the noise of a system is equal to the sum of the variances of the contributing terms,

σ² = σ1² + σ2² + σ3² + σ4² + …

The random nature of most noise sources encountered in electro-optics allows this rule to hold. This rule is also derived from a common statistical analysis in which the errors in a system are separately analyzed by taking partial derivatives and eliminating the higher-order terms. Any text on error propagation or analysis of experimental data demonstrates this approach. Use caution here, because many other real “noise” sources are not random, such as microphonics, noise resulting from 60-cycle electronics, and other periodic sources that cannot be root sum squared. Moreover, the most widely considered noise term in many astronomical and low light applications is the Poisson noise that results from the inherent statistical properties of photons. For the photon case, the variance equals the mean rate of arrival of photons. Photon noise from blackbody radiation has a variance equal to the mean photon flux and a standard deviation equal to the square root of the mean photon arrival rate. Thus, the noise in a detector will appear as the square root of the sum of the variances,

σ² = σ1² + σ2² + σ3² + σphoton²

where the terms 1, 2, and 3 result from random noise sources. This rule is somewhat limited in some important cases. For example, it does not strictly apply to clutter or other noise sources that may not have a random property.
Care must be taken in developing an error budget in which all terms are assumed to add as the sum of squares. It is quite common for complex optical systems, particularly those with control systems, to accumulate errors in other than a sum of squares way. However, it is almost universally the case that the first analysis performed uses this rule to determine the problem areas. A complete accounting of all error sources and the way they accumulate is a complex endeavor requiring a complete description of the system, which usually doesn’t exist until late in the program. Here’s an example that proves the point. Suppose the system has four error terms of 5, 4, 2, and 1. Using the rule above, we would estimate the error as 1.4 × 5 = 7. Doing the root sum of squares, we get 6.8.
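The two quick estimates in this rule take one line each to compare against the full calculation. The Python sketch below is only an illustration; it reruns the four-term example from the paragraph above.

```python
import math

def rss(terms):
    """Root sum of squares of independent noise or error terms."""
    return math.sqrt(sum(t * t for t in terms))

terms = [5, 4, 2, 1]
print("largest term only:   ", max(terms))
print("1.4 x largest term:  ", 1.4 * max(terms))
print("root sum of squares: ", round(rss(terms), 2))
```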
PHOTOLITHOGRAPHY YIELD Chip yield (be it a detector, ASIC, semiconductor laser, or MEMS device) falls off inversely with one plus the product of chip area and defect density, raised to the power of the number of masking steps.
Discussion The yield of a photolithographic process is proportional to

Yield = 1/(1 + DA)^m    (1)

where Yield = expected fraction of successful die from a wafer
D = expected defect density per square centimeter per masking step (If you don’t know this value, assume one defect per square centimeter for silicon, and more for other materials.)
A = area of the chip in square centimeters
m = number of masking steps
Murphy developed the following model for the probability that a given die is good from a wafer:

Pg = [(1 – e^(–AD))/AD]²    (2)

where Pg = probability that a die is good after the processing of a wafer
Additionally, Seeds gives us

Pg = e^(–AD)    (3)
Calculating yield is a complicated process; foundries typically have complex models to do this, although sometimes just averaging the above three equations works pretty well. Yield is also sensitive to design layout, design methodology, type of circuit, production line quality, and age of the manufacturing equipment. However, the above equations provide a good approximation with the assumptions that the process is well controlled and developed, crystal defects are low, and breakage and human handling do not contribute. Often, this rule can be used to estimate complete device yield, as steps such as sawing the wafer and packaging usually produce a high yield.
One must use the right numbers for defect density. This estimation isn’t always trivial, as it can depend not only on the obvious (rule size, material, material purity) but also on subtle process attributes (individual machine, operators, the kind of night the operator had, and even room temperature and humidity). All of these models tend to give small probabilities of success when typical silicon defect densities are used (one to two per square centimeter). It is probably best to scale these yields with a constant based on experience in producing a similar device. For more exotic materials than silicon (as often used with laser detectors as well as MEMS devices), the defect density is usually higher. Technology is always getting better, so the form of these equations may change as processes improve, as implied by the absence of the number of masking steps in Eqs. (2) and (3). Some defects can cause a circuit to be inoperable but do not affect the interconnects. Generally, yield is better when we consider that some of the chip is just interconnects, and when we ignore the edges of the wafer (where circuits are not made, and crystal defects are larger). Often, the exotic materials used by EO engineers have wafer sizes much smaller than those of silicon (e.g., approximately 100 mm diameter for HgCdTe, InP, or InGaAs, as opposed to 200 to 350 mm for silicon); therefore, the wafer size effects on final product yield should be considered for these materials. The cost per square centimeter for Si is not likely to decrease much in the upcoming years. Future cost reductions for Si chips are more likely to come from decreased feature size, added functionality per chip, three-dimensional integration, and increased wafer size rather than from reduced wafer processing costs.
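The three yield models are easy to tabulate, and the averaging suggested above is a one-liner. The Python sketch below is only an illustration; the defect density, die area, and number of masking steps are assumed values, not from the references.

```python
import math

def yield_eq1(defects_per_cm2, area_cm2, mask_steps):
    """Eq. (1): yield with the defect term compounded over masking steps."""
    return 1.0 / (1.0 + defects_per_cm2 * area_cm2) ** mask_steps

def yield_murphy(defects_per_cm2, area_cm2):
    """Eq. (2): Murphy's model for the probability that a die is good."""
    ad = defects_per_cm2 * area_cm2
    return ((1.0 - math.exp(-ad)) / ad) ** 2

def yield_seeds(defects_per_cm2, area_cm2):
    """Eq. (3): Seeds' model."""
    return math.exp(-defects_per_cm2 * area_cm2)

D, A, m = 1.0, 0.5, 10    # assumed: 1 defect/cm^2, 0.5 cm^2 die, 10 mask steps
estimates = [yield_eq1(D, A, m), yield_murphy(D, A), yield_seeds(D, A)]
for label, y in zip(("Eq. (1)", "Eq. (2)", "Eq. (3)"), estimates):
    print(f"{label}: {y:.3f}")
print(f"average of the three: {sum(estimates) / 3:.3f}")
```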
References
1. R. Geiger, P. Allen, and N. Strader, VLSI, Design Techniques for Analog and Digital Circuits, McGraw-Hill, New York, pp. 19–27, 1990.
2. R. Seeds, “Yield and Cost Analysis of Bipolar LSI,” IEEE International Device Meeting, p. 12, 1967.
3. B. Murphy, “Cost Size Optima of Monolithic Integrated Circuits,” Proceedings of the IEEE, Vol. 52, pp. 1537–1545, December 1964.
4. J. Miller, Principles of Infrared Technology, Kluwer, New York, pp. 124–125, 1994.
5. R. Jaeger, Introduction to Microelectronic Fabrication, Addison-Wesley, Reading, MA, pp. 167–169, 1993.
SOLID ANGLES
1. The cone half angle corresponding to unit solid angle (1 sr) is ≈ π/2 – 1 radians, with an error of 0.2 percent, or about 32.7°.
2. For cones with small angles, the solid angle Ω is ≈ πθ², where θ is the half angle of the cone.
3. A 2 × 2-ft window 10 ft away subtends a solid angle of 0.04 sr.
4. The Sun subtends a solid angle of about 7 × 10⁻⁵ sr.
5. The ice cream at the top of a cone subtends a solid angle, when viewed from the vertex of the cone, of about 0.2 sr.
6. The continental United States subtends about 0.22 sr of the face of the globe. This is obtained directly from the fact that the land area of the country is about 9 × 10⁶ km², and the radius of the Earth is 6378 km.
Discussion Solid angles are two-dimensional, as opposed to normal Euclidean plane geometric angles. Solid angles are the two-dimensional projections of three-dimensional objects. Solid angles are generally calculated by dividing the area of the object by the distance to the object squared. Solid angles always enter into radiometric calculations. Solid angle is defined as Ω = 2π(1 – cosθ), where θ is the half angle of the cone that forms the solid angle. Stated another way, the solid angle of a cone is equal to the area of the cap of the cone divided by the radius from the vertex squared. The cone half angle corresponding to unit solid angle = arccos[1 – (1/2π)] ≈ π/2 – 1 radians, with an error of 0.2 percent. For small angles, Ω = πθ² (Ref. 2). Finally, always remember that, on those rare occasions when you have to know the number of square degrees in a sphere, you don’t get it by squaring 360. The reason is that a sphere has 4π sr. Each radian is about 57°, so the answer is about 57 × 57 × 4 × π, which is 360 × 360/π.
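The relationships above lend themselves to a short script. The Python sketch below is only an illustration; the Sun's angular radius of 0.266° is an assumed value used to check rule 4, and the window uses the simple area-over-distance-squared estimate.

```python
import math

def solid_angle_cone(half_angle_rad):
    """Exact solid angle of a cone: 2*pi*(1 - cos(theta))."""
    return 2.0 * math.pi * (1.0 - math.cos(half_angle_rad))

def solid_angle_small(half_angle_rad):
    """Small-angle approximation: pi * theta^2."""
    return math.pi * half_angle_rad ** 2

# Half angle for exactly 1 sr, and the pi/2 - 1 shortcut
theta_1sr = math.acos(1.0 - 1.0 / (2.0 * math.pi))
print("1 sr half angle:  ", round(math.degrees(theta_1sr), 2), "deg")
print("pi/2 - 1 shortcut:", round(math.degrees(math.pi / 2 - 1.0), 2), "deg")

# 2 x 2 ft window at 10 ft, flat-area estimate
print("window:", 2.0 * 2.0 / 10.0 ** 2, "sr")

# Sun, assuming a 0.266 deg angular radius
print("Sun:", solid_angle_small(math.radians(0.266)), "sr")
```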
Reference 1. Rules 3, 4, and 5 were adapted from J. Vincent, Fundamentals of Infrared Detector Operation & Testing, John Wiley & Sons, New York, p. 133, 1989. 2. C. Williams and O. Becklund, Optics: A Short Course for Engineers and Scientists, Wiley Interscience, New York, p. 29, 1972.
SPEED OF LIGHT The speed of light is approximately one foot per nanosecond.
Discussion The speed of light works out to be about one foot per nanosecond. This is useful to remember in a host of applications involving range gating, range finding, and lasers. It can also be used to quickly estimate the time of flight of a light pulse. If the target is a kilometer away (approximately 3300 ft), a rangefinder pulse takes about twice that many nanoseconds to travel to the target and return, or roughly 6600 ns.
Chapter 12
Ocean Optics
The interaction of light and water has been a topic of study for many centuries, and for good reason. It is well understood that light in the ocean stimulates the microscopic plant life that supports the food chain and ultimately defines the availability of food resources for man. The earliest interest in the subject related to the characteristics of vision when the observer is submerged or is viewing submerged objects. These problems were successfully addressed when the principle of refraction was understood. By the 1940s, Duntley had begun his pioneering work on the optical properties of clear lake waters. Preisendorfer assembled the existing theory in the mid 1970s and thoroughly summarized the state of the theoretical nature of the problem. The introduction of the laser also stimulated additional work on light propagation in the ocean. G. D. Hickman pioneered the use of pulsed lasers to measure water depth in coastal regions and define the environmental properties of ocean and coastal waters. One of the authors (Friedman) spent a number of years working with Hickman and his team in the characterization of surface pollution, using fluorescence and other techniques for detection of oil, algae, and environmental contaminants. Much of that work relied on traditional characterizations of the aquatic environment, including the absorption, scattering, and total attenuation coefficients. We also used a variety of instruments for characterizing water in both the natural and laboratory environments. One of the important instruments in our arsenal was the absorption meter developed by contractors and employees of NASA and the U.S. Navy. In addition, the advent of the environmental movement led to the realization that optical properties of water could be a sensitive indicator of its quality, the nature of the suspended sediments in it, and the presence of biological and hazardous materials. Methods for using transparency of water were developed more than 100 years ago and have become a standard part of the diagnostic arsenal of agencies, such as the United States Geological Survey, as they conduct their water quality surveys. The development of the laser also added greatly to interest in, and the success of, underwater instrumentation. The recognition that beam attenuation coefficient would be as important as diffuse attenuation coefficient in ocean optics caused NASA, among other government agencies, to begin research programs to improve interpretation of remote sensing data. The high brightness of lasers and the ability to select wavelengths that propagate well in the ocean led to attempts to develop imaging and target tracking systems. This required
developments in both the theory of radiation propagation in turbid media, as well as the creation of new optical systems able to cope with the low temperatures and high pressures associated with operation deep in the ocean. The result has been a great leap forward in underwater imaging science of all types. Television cameras are regularly used at thousands of feet of depth. In a related development, the properties of the ocean were evaluated to determine the impact of the transparency of the ocean on remotely sensed images taken from space platforms. Knowledge of ocean optics has become essential in the proper interpretation of images that include water scenes. This continues to be an active research area, with regular publications appearing in journals like Applied Optics. The U.S. Navy has had a long-standing interest in the use of lasers for communications with submarines. This has stimulated considerable work on the properties of the ocean, as well as the development of special lasers and receivers that match wavelengths well with the optical window of water. In addition, there has been considerable effort to understand how to take advantage of the spectral purity of lasers to perform underwater imaging that is not possible using conventional light sources. In all of these cases, both experimental and theoretical work has been performed to determine the limits of performance imposed by the natural water turbidity and the sensitivity of the system to the ratio of absorption to scattering in the medium. The latter effect has a potentially degrading impact on imaging capability, since light emanating from the object does not make a direct path to the viewer. The absorption part of the problem has a greater impact on the amount of radiation that can reach a particular distance but does not affect imaging. Similarly, the U.S. Navy has been interested for some time in remote sensing of the clarity of water. To some degree, this has been motivated by its interest in establishing the limits of the field of bathymetry (remote measurement of water depth). For obvious reasons, the U.S. Navy and Marine Corps would like to be able to quickly and automatically measure water depth in places where amphibious assaults might be mounted. Of course, in these regions, the water is necessarily shallow but is also frequently muddy. Because the ability to sense the water depth depends on illumination of it with a pulsed laser and detecting the return from the bottom, this technique demands that the combination of the water depth and the absorption coefficient not be too high. Navy researchers have been exploring this field since the 1970s. The reader interested in finding new information about this field should concentrate on reading the various SPIE compendiums of papers that focus on ocean optics. They have had a common title for years, Ocean Optics. The conference proceedings come out about every three years, with the most recent ones accommodated into a joint session on atmospheric and ocean optics. Most of these papers are presented at a fairly sophisticated level, requiring that the reader have some familiarity with the field. Occasionally, one finds an oceanography book that provides a good foundation. An example is J. Apel (Principles of Ocean Optics, Academic Press, 1987). This is a field of EO that gets relatively little attention, so the interested reader will have to do some digging to find new ideas and instrument descriptions. 
On the other hand, the Internet has made a wide range of arcane data available. For example, Johns Hopkins University’s Applied Physics Laboratory maintains a database of worldwide ocean optics information at http://wood.jhuapl.edu/.
ABSORPTION COEFFICIENT The optical properties of water include the scattering and absorption coefficients, which, when summed, give the total attenuation coefficient for collimated light. The absorption coefficient, a, can be estimated from the total diffuse attenuation coefficient, K, by1

a ≈ 3K/4
Discussion Empirical evidence, based on the propagation of beams and plane waves such as sunlight, has resulted in a series of approximations that let one ocean optics parameter be derived from other measurements. The absorption coefficient, a, is used to compute energy loss in light propagating in the ocean due to absorption by water and its suspended constituents. This is done using Beer’s law to compute the intensity I(z).

I/Io = e^(–az)
where z = path length
Io = initial intensity when no scattering is present
The reader should note that Beer’s law is accurate only for beams, because it includes both absorption and scatter terms. Water types and optical conditions in the ocean vary significantly worldwide. However, this rule can be useful in setting up the dynamic range of beam attenuation instruments based on diffuse attenuation measurements. It is frequently difficult to obtain the absorption coefficient for a body of water, because to do so requires a special instrument such as the one developed by Friedman et al.2 Instead, when some inaccuracy is allowed, this rule can be used. K is measured using a wide-field instrument that is suspended in the ocean and collects light from the hemisphere above it. By measuring the intensity as a function of depth, the value of K can be determined, as it, too, is a scale factor in Beer’s law for diffuse light. To the first order, the ratio of the diffuse attenuation coefficient, K, to the beam attenuation coefficient, α, is approximately 0.3 for clear ocean waters.3 It is somewhat smaller for coastal waters, in the range of 0.15 to 0.25. That is,

K/α ≈ 0.3 for clear waters

For coastal waters, it is suggested that K/α depends on the single scatter albedo by the following equation:3

K/α = [0.19(1 – ωo)]^(ωo/2)

where ωo = the single scatter albedo
Another approximation is as follows:

K/α = (4/3)(1 – s/α)

where s = total scattering coefficient, which, like α, has the units of meters⁻¹
Finally, another approximation is

K/α ≈ 0.25(1 – s/α)^(s/2α)
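The rule and Beer's law combine into a few lines of code. The Python sketch below is only an illustration; the diffuse attenuation coefficient used here is an assumed clear-water value, and only absorption (not scattering) is included in the transmission estimate.

```python
import math

def absorption_from_diffuse_attenuation(k_diffuse):
    """Rule of thumb: a is roughly 3K/4."""
    return 0.75 * k_diffuse

def beam_transmission(coefficient_per_m, path_m):
    """Beer's law, I/Io = exp(-a*z); scattering losses are neglected here."""
    return math.exp(-coefficient_per_m * path_m)

K = 0.08                                    # assumed diffuse attenuation, 1/m
a = absorption_from_diffuse_attenuation(K)
print(f"estimated absorption coefficient: {a:.3f} 1/m")
for z in (1, 10, 50):
    print(f"fraction remaining after {z:3d} m: {beam_transmission(a, z):.3f}")
```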
References
1. H. R. Gordon et al., “Introduction to Ocean Optics,” Ocean Optics VII, Vol. 489, SPIE Press, Bellingham, WA, p. 36, 1984.
2. E. Friedman, L. Poole, A. Cherdak, and W. Houghton, “Absorption Coefficient Instrument for Turbid Natural Waters,” Applied Optics, 19(10), pp. 1688–1693, May 15, 1980.
3. G. Guenther, “Wind and Nadir Angle Effects on Airborne Lidar Water Surface Returns,” Ocean Optics VIII, Vol. 637, SPIE Press, Bellingham, WA, 1986.
4. W. Wilson, “Spreading of Light Beams in Ocean Water,” Ocean Optics VI, Vol. 208, SPIE Press, Bellingham, WA, 1979.
ABSORPTION CAUSED BY CHLOROPHYLL The following rule can be used to estimate the additional absorption caused by chlorophyll, over and above that contributed by the water itself. This absorption is

0.0667 (chlorophyll concentration in micrograms/liter)^0.758 m⁻¹
Discussion It is well known that phytoplankton in the ocean contribute to optical absorption. The methods employed to measure the absorption are subject to some debate but, in the final analysis, the methods employed by Yentsch and Phinney,1 and the results they have obtained, will be useful to those who use remote sensing data to estimate concentrations of algal “blooms.” Some of the methods involve reflectance measurements of algal samples, whereas others use mechanical or chemical methods for extracting the pigment-bearing part of the cells and doing spectroscopy on the resultant solution. This rule results from measurements in the North Atlantic Ocean. Data were assembled from measurements at a variety of depths in the euphotic zone (the region in which light is available with enough intensity to support life that requires it). The wavelength that the equation applies to is 670 nm. Reference 1 describes the method for measuring the concentration of plankton absorption by using a filter to collect the plankton samples. The reference notes that other techniques have been suggested. This rule, and others like it, can be effective in estimating the penetration of light into the ocean, which can be used with remote sensing information to estimate primary production of small aquatic plants and unicellular algae. It should be pointed out that the data obtained include mixes of algal types and that blooms tend to contain high concentrations of a single species, thus meaning that the exact values of the coefficients presented in the rule might have to be modified on a species-by-species basis. For a more detailed, wavelength-dependent expression of the absorption coefficient, a, we consider the following:2

a(λ) = aw(λ) + 0.06 ac(λ) C^0.65 [1.0 + 0.2Y(λ)]

where
C = concentration of chlorophyll a
Y(λ) = e^(Γ(λ – λo))
Y is the component of yellow substance associated with the pigment chlorophyll a. Γ is taken to be –0.014 nm⁻¹, and λo = 440 nm. The absorption of pure seawater and chlorophyll are represented by aw and ac, respectively. The scattering coefficient for these materials is expressed as

b(λ) = bw(λ) + bc(λ)

where bw(λ) = wavelength-dependent scattering of pure water
bc(λ) = computed from the concentration of chlorophyll by 0.3C^0.62(λo/λ)
Finally, the diffuse attenuation coefficient of the ocean can be estimated as3

0.022 + 0.0794 (pigment)^0.871

where the word pigment is replaced by the concentration of pigment in milligrams per cubic meter. This rule was developed for a single but important wavelength of 490 nm. The rule applies for concentrations of pigment less than 1.5 mg/m³. As an example, pigment concentrations of 1.5 mg/m³ are found in many ocean conditions. This leads to a K value at 490 nm of 0.135 m⁻¹, which is considerably different from the K value of the water itself, 0.022 m⁻¹.
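Both fits are one-line functions. The Python sketch below is only an illustration; it evaluates the 670-nm chlorophyll absorption rule and the K(490) expression, reproducing the 0.135 m⁻¹ value quoted above for 1.5 mg/m³ of pigment.

```python
def chlorophyll_absorption_670nm(chl_ug_per_liter):
    """Added absorption at 670 nm (1/m) from the rule above."""
    return 0.0667 * chl_ug_per_liter ** 0.758

def diffuse_attenuation_490nm(pigment_mg_per_m3):
    """K(490) in 1/m, valid for pigment concentrations below ~1.5 mg/m^3."""
    return 0.022 + 0.0794 * pigment_mg_per_m3 ** 0.871

print("a_chl(670 nm) at 1 ug/L:", round(chlorophyll_absorption_670nm(1.0), 4), "1/m")
print("K(490 nm) at 1.5 mg/m^3:", round(diffuse_attenuation_490nm(1.5), 3), "1/m")
```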
References
1. C. S. Yentsch and D. A. Phinney, “Relationship between Cross-Sectional Absorption and Chlorophyll Content in Natural Populations of Marine Phytoplankton,” Ocean Optics IX, Vol. 925, SPIE Press, Bellingham, WA, p. 109, 1988.
2. O. Frette et al., “Optical Remote Sensing of Waters with Vertical Structure,” Applied Optics, 40(9), p. 1478, March 20, 2001.
3. D. Collins et al., “A Model of the Photosynthetically Available and Usable Irradiance in the Sea,” Ocean Optics IX, Vol. 925, SPIE Press, Bellingham, WA, 1988.
ABSORPTION OF ICE AT 532 NM The following formula approximates the absorption of ice at 532 nm:1

a = –(0.066 ± 0.013) + (0.458 ± 0.056) × 10⁻³ T(K)

where a = absorption coefficient in m⁻¹
T(K) = temperature in kelvins
Discussion The first term compares well with the absorption of the purest ocean waters of about 0.06 m–1, although pristine Antarctic glacial ice may be as much as an order of magnitude clearer. The data used in Fig. 12.1 is for more common ice that may have micrometer-size impurities that affect its clarity. The first term in the equation is independent of temperature. As can be seen in the figure, which we have included to illustrate how the absorption of water and ice varies over the visible and infrared wavelengths, ice has a minimum absorption that aligns well with the maximum transmission of ocean water. Its absorption for the range of wavelengths from the UV to 10 µm shows no other transmission windows.2,3 See Ref. 4 for the data that produced the figure in this rule. The reader will want to review the rule related to Beer’s law (p. 47) for a refresher on how to compute the transmission of a medium with a particular value of absorption (or scattering) coefficient.
FIGURE 12.1 Absorption of water and ice as a function of wavelength in nanometers.
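The temperature dependence is trivial to evaluate. The Python sketch below is only an illustration and uses the central values of the fitted coefficients (the quoted uncertainties are ignored); the sample temperatures are arbitrary.

```python
def ice_absorption_532nm(temp_kelvin):
    """Nominal absorption coefficient of ice at 532 nm, in 1/m."""
    return -0.066 + 0.458e-3 * temp_kelvin

for t in (210.0, 250.0, 270.0):
    print(f"T = {t:5.1f} K: a = {ice_absorption_532nm(t):.4f} 1/m")
```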
References
1. K. Woschnagg and P. Price, “Temperature Dependence of Absorption in Ice at 532 nm,” Applied Optics, 40(15), pp. 2496–2500, May 20, 2001.
2. S. G. Warren, “Optical Constants of Ice from the Ultraviolet to the Microwave,” Applied Optics, Vol. 23, pp. 1026–1225, 1984.
3. G. M. Hale and M. R. Querry, “Optical Constants of Water in the 200 nm to 200 µm Wavelength Region,” Applied Optics, Vol. 12, pp. 555–563, 1973.
4. http://omlc.ogi.edu/spectra/water/abs, 2003.
BATHYMETRY The form of the return pulse when a laser pulse is projected into the ocean from above is

P(L) = [A σ(L)/(nH + L)²] exp[–2 ∫[0, L] ε(l) dl]

where P(L) = power obtained from depth L
n = index of refraction of seawater
σ(L) = backscatter coefficient as a function of depth
ε(L) = seawater extinction coefficient, equal to σo + ξ
σo = scattering albedo of seawater; the scattering coefficient integrated over all angles
ξ = absorption coefficient of seawater
H = altitude of transmitter

A = (Pcτη/2n)[nζ(1 – ρ)D sinβ/tanα]²

ρ = external reflectivity of the ocean
τ = laser pulse duration
β = half angle field of view of the receiver
α = angular divergence of the laser beam
η = transmission of the optics in the receiver
ζ = ∫[0, H] εa(l) dl
εa(l) = atmospheric extinction coefficient as a function of altitude
D = aperture of the receiver
P = power of the laser pulse exiting from transmitter
c = speed of light
Discussion The absorption and scattering of the ocean play a role in determining the backscattered light that is detected when a pulsed laser is used to measure water depth. This process is known as bathymetry. This is done by firing a pulse of laser light toward the ocean surface from above (usually from an airplane or helicopter). The pulse first reflects from the surface and then, if the product of the water depth and absorption is small enough, from the bottom. The detected time interval between the two pulse returns (the receiver is usually co-located with the transmitter) allows computation of the water depth, because the speed of light in water is known (about three-quarters of a foot per nanosecond). Note that the depth resolution of the system can be no better than one-half the pulse width (in time) times the speed of light in water (c/n); ∆L = cτ/2n. One of the authors (Friedman) of this book has used this technique to measure tree height by measuring the time between the first pulse response (from the tops of the trees) and the second response (from the soil near the trunks of the trees). This rule assumes that the water surface is smooth.
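Converting a measured return-time interval into depth is the simplest part of the problem. The Python sketch below is only an illustration; the pulse width and interval are assumed numbers used to show the arithmetic.

```python
def depth_from_return_interval(delta_t_ns, n_water=1.33):
    """Water depth in meters from the time between surface and bottom returns.
    The pulse travels down and back, so the one-way distance is half."""
    c_m_per_ns = 0.2998                     # vacuum speed of light
    return 0.5 * (c_m_per_ns / n_water) * delta_t_ns

def depth_resolution(pulse_width_ns, n_water=1.33):
    """Best-case depth resolution, delta_L = c*tau/(2n)."""
    c_m_per_ns = 0.2998
    return 0.5 * (c_m_per_ns / n_water) * pulse_width_ns

print("88.7 ns between returns ->", round(depth_from_return_interval(88.7), 1), "m of water")
print("7 ns pulse width        ->", round(depth_resolution(7.0), 2), "m resolution")
```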
Reference 1. K. Lee et al., “Helicopter-Based Lidar System for Monitoring the Upper Ocean and Terrain Surface,” Applied Optics, 41(3), pp. 401–406, January 20, 2002.
f-STOP UNDER WATER A good rule of thumb for underwater photography is that the f-stop should be increased one full increment (i.e., the effective aperture increased by a factor of 2) for every 36 ft of increased distance from the subject.
Discussion This rule has been observed in real ocean conditions. It could be guessed by the following argument. The beam attenuation coefficient in very clear water (the type in which photography will be attempted) is about 0.05 m–1. Following Beer’s law, this means that any light emanating from the object to be photographed will be reduced in intensity by a factor of two in about 12 m, which is about 36 ft. Photography in any conditions, including in the ocean, can be affected by scattered light, unnoticed sources of light, conditions of the target, and other factors. Therefore, it is best in all cases to determine the most likely exposure and then “bracket” it with exposures one f-stop above and below the most likely value. The same applies here. In fact, because most underwater photographic opportunities are rare, it would be appropriate to shoot a series of exposures that vary from 2 stops below to 2 stops above the most likely value to be sure of obtaining a good image. As stated above, this rule applies for photography in clear water. This is almost always the condition under which photos will be obtained, as cloudy and turbid water will not only attenuate the propagation of light but will have very low contrast as a result of the extensive scattering in the medium. Moreover, most divers do not attempt photography in turbid water. Water is an exponential medium from the point of view of light transmission. This means that both directed beams of light and diffuse fields of light are attenuated exponentially with distance. The reader should note that Beer’s law describes the attenuation for beams of light. The attenuation for diffuse fields is also exponential, but the amount of attenuation is not described by Beer’s law, as scattered light can play an important role. This occurs because, in the diffuse-field case, the field of view of the receiver is large enough to allow the scattered radiation to be detected. Because of the much higher scattering and attenuation of water as compared with air, the methods of estimating the f-stop required for proper exposures in air do not apply in water.
References 1. L. Mertens, In-Water Photography, Wiley Interscience, New York, p. 29, 1970.
INDEX OF REFRACTION OF SEAWATER The index of refraction of seawater depends on its temperature and its salinity. It can be approximated by1

n = 1.33 + (3400 + n1T + n2T² + n3T³ + S(n4 + n5T + n6T² + n7T³)) × 10⁻⁶

for a wavelength of 589.3 nm, where the following coefficients are used:

n1 = –0.86667
n2 = –0.2350
n3 = 1.16667
n4 = 19.65
n5 = –0.1
n6 = 2.25 × 10⁻³
n7 = –2.5 × 10⁻⁵
T and S are the temperature (Celsius) and salinity (parts per thousand), respectively.
Discussion The index of refraction of water is a significant factor in interpreting imaging in the ocean and is important in properly interpreting remotely sensed data. This approximation compares favorably with the real index of refraction of water, as shown in Ref. 1. It provides an easy method for estimating the change in index as a function of temperature and salinity. The index of refraction is a key factor in determining the reflectivity of the ocean. Another approach, not dependent on temperature and salinity, is2

nwater(λ) = (1.76148 – 0.013414λ² + 0.0065438/(λ² – 0.0132526))^(1/2)
where wavelength is expressed in micrometers.
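Both fits package neatly as functions. The Python sketch below is only an illustration; it evaluates the two expressions above with the coefficient values listed in the rule, so any limitation of those fits carries through unchanged, and the sample temperature and salinity are assumed.

```python
def n_seawater_589nm(temp_c, salinity_ppt):
    """Index of refraction of seawater at 589.3 nm from the fit above."""
    t, s = temp_c, salinity_ppt
    n1, n2, n3 = -0.86667, -0.2350, 1.16667
    n4, n5, n6, n7 = 19.65, -0.1, 2.25e-3, -2.5e-5
    return 1.33 + (3400 + n1 * t + n2 * t**2 + n3 * t**3
                   + s * (n4 + n5 * t + n6 * t**2 + n7 * t**3)) * 1e-6

def n_water_vs_wavelength(wavelength_um):
    """Temperature/salinity-independent fit versus wavelength in micrometers."""
    lam2 = wavelength_um ** 2
    return (1.76148 - 0.013414 * lam2 + 0.0065438 / (lam2 - 0.0132526)) ** 0.5

print("n(20 C, 35 ppt): ", round(n_seawater_589nm(20.0, 35.0), 4))
print("n(0.5893 um) fit:", round(n_water_vs_wavelength(0.5893), 4))
```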
References 1. D. Collins, “Recent Progress in the Measurement of Temperature and Salinity by Optical Scattering,” Ocean Optics VII, Vol. 489, SPIE Press, Bellingham, WA, p. 247, 1984. 2. A. Jaaskelainen, “Estimation of the Refractive Index of Plastic Pigments by Wiener Bounds,” Optical Engineering, 39(11), pp. 2959–2963, November 2000.
OCEAN REFLECTANCE Given the diffuse backscatter coefficient, bb (meters⁻¹), and the absorption coefficient, a (meters⁻¹), of ocean water, we can estimate the reflectance, R, as1

0.33 bb/a
Discussion Remote sensing of the ocean is a valuable economic, military, and scientific technology. Proper interpretation of the results involves removing the effects of the intervening atmosphere, correcting for the effect of waves and clouds, and proper interpretation of the results of those corrections. A key part of the interpretation is the recognition of the presence of phytoplankton, which impose a spectral content on the signatures obtained in the imagery. The importance of these corrections and interpretations is a strong function of the spectral resolution of the images. For example, many of the detailed effects of plankton on the image spectral content will be lost in LANDSAT or SPOT multispectral data. However, hyperspectral imagery, in which up to hundreds of spectral bands may be recorded, will show the presence of the plankton, and more advanced interpretive methods will be needed. A more complete computation gives

R = (bb/a)/[1 + bb/a + √(1 + 2bb/a)]

The result of the surface reflectance is an upwelling radiance just above the surface of2

Upwelling radiance = ρLsky + (t/n²)Lu

where
ρ = surface specular reflectance coefficient
Lsky = radiance (watts per square meter) on the ocean surface due to skylight
t = transmittance of the air-ocean interface
n = index of refraction of the water
Lu = upwelling light field in the water in watts per square meter
These rules assume that the optical properties, absorption, and scattering are linear functions of the concentration of plankton. This assumption may be stretched to the limit as one considers the spectral properties of the various plankton types. In addition, the shorter version of the rule is easier to use but is not as accurate as the more complex form. This rule provides an easy and quick estimate of the upwelling light field just above the surface. This allows the designer of ocean optical instruments to define the likely requirement for dynamic range and provides an estimate of the change in the upwelling light as a function of ocean conditions. This also allows the designer to improve estimates of the light intensity beyond that which results from simply considering the ocean reflectance.
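The quick reflectance estimate and the upwelling-radiance expression are easily combined. The Python sketch below is only an illustration; the backscatter, absorption, sky radiance, and interface values are assumed numbers, not measurements from the references.

```python
def ocean_reflectance(bb, a):
    """Quick estimate of ocean reflectance: R ~ 0.33 * bb / a."""
    return 0.33 * bb / a

def upwelling_radiance_above_surface(rho, l_sky, t, n, l_up):
    """Upwelling radiance just above the surface: rho*Lsky + (t/n^2)*Lu."""
    return rho * l_sky + (t / n ** 2) * l_up

bb, a = 0.002, 0.06      # assumed clear-water values, 1/m
print("R estimate:", round(ocean_reflectance(bb, a), 4))
print("upwelling signal:", round(
    upwelling_radiance_above_surface(rho=0.02, l_sky=50.0, t=0.98, n=1.34, l_up=5.0), 2))
```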
References 1. J. C. Erdmann and J. M. Saint Clair, “Simulation of Radiometric Ocean Images Recorded from High-Altitude Platforms, Ocean Optics IX, Vol. 925, SPIE Press, Bellingham, WA, p. 36, 1988. 2. H. R. Gordon et al, “Introduction to Ocean Optics,” Ocean Optics VII, Vol. 489, SPIE Press, Bellingham, WA, p. 40, 1984.
UNDERWATER DETECTION For most observers with experience in attempting to find submerged objects, it is found that the object can be observed at a distance computed from the following expression:

5/(α – K cosθ)

where α = attenuation coefficient for collimated light (meters⁻¹)
K = diffuse attenuation coefficient (meters⁻¹)
θ = zenith angle measured from the swimmer (The notation used by Preisendorfer is a “swimmer centered direction convention.” That is, a downward view corresponds to a zenith angle of 180°. A horizontal view has a zenith angle of 90°, so the scaling range is 1/α.)
Discussion This is the result of empirical investigations and will vary somewhat with the abilities of the observer. However, Preisendorfer’s work on this subject is exhaustive and provides a wide range of examples related to visibility and biology in the marine environment. Many of the concepts are derived from the theory of radiation transport in turbid media and are beyond the scope of this book. A quick review of Preisendorfer’s book shows that he covers a wide variety of viewing and lighting conditions. This rule applies in general and can be used as a first approximation. As is always the case in the analysis of human vision, the exact conditions of any particular situation must be analyzed in detail. Underwater vision is of aesthetic and practical interest. Early interest in underwater optics did not have the advantage of the advancements in EO technology that have occurred during the last 40 years or so. Today, this area of geophysics has become a fairly mature science, with a full range of theoretical and experimental results. In exceptionally clear water near the surface, with an α of 0.1 m⁻¹, a swimmer looking horizontally can see about 50 m (5/0.1).
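The sighting-range expression is simple to evaluate for different viewing directions. The Python sketch below is only an illustration; only the 0.1 m⁻¹ attenuation value comes from the example above, and the diffuse attenuation coefficient is an assumed clear-water value.

```python
import math

def sighting_range_m(alpha, k_diffuse, zenith_deg):
    """Detection range ~ 5/(alpha - K*cos(theta)), swimmer-centered zenith angle."""
    return 5.0 / (alpha - k_diffuse * math.cos(math.radians(zenith_deg)))

alpha, K = 0.1, 0.03               # assumed very clear water, 1/m
for theta in (0.0, 90.0, 180.0):   # looking up, horizontally, straight down
    print(f"zenith {theta:5.1f} deg: about {sighting_range_m(alpha, K, theta):.0f} m")
```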
Reference 1. R. W. Preisendorfer, Hydrologic Optics, Vol. 1, U.S. Dept. of Commerce, Washington, DC, p. 194, 1976.
UNDERWATER GLOW A persistent source of light in the ocean is due to the action of biota. Resulting from luminous plankton, bioluminescence has the following properties: 1. At night, the bioluminescence follows the temperature, with the maximum lighting occurring in the mixed layer and with decreasing intensity below the thermocline. 2. There is significant diurnal variation in the light intensity. 3. The spectrum in surface waters ranges from 360 to 620 nm, with a peak around 480 nm.
Discussion Ocean optics include a number of applications in remote sensing for commercial and military purposes. The types of data presented in this rule tend to deal with the case in which observations are being made in the ocean, as the excitation of bioluminescence presumes that something like a submarine or other object is moving through the water. Designers of camera systems or other types of imaging methods need to take this additional source of radiation into account when computing background levels that might be encountered. That is, one can estimate the amount of sunlight present as a function of depth using the diffuse attenuation coefficient, K. The biological sources of radiation must be considered as well, because their presence will reduce the contrast observed in imaging of submerged objects, using either residual sunlight or artificial light sources. These rules are general and depend on the conditions under which the observations are made. The data have been developed in a series of measurements in the Pacific and Atlantic Oceans and the Barents and Mediterranean Seas. Data have been obtained from a variety of depths, ranging from the near-surface region (around 200 m depth) to depths in excess of 3600 m. In the latter case, the presence of a deep-diving submersible is expected to stimulate the light emission from the various organisms in the sea. Clearly, one can imagine that the actual light emission might vary, depending on the velocity, size, and turbulence generated by an object deep in the ocean. Therefore, the specific spectra and radiance levels that were observed may be very much the result of the details of how the experiment was performed. These types of general rules have the purpose of keeping the EO designer aware of the presence of biological sources of light. Observations made in the natural environment will have such lights as a part of the background that sensors will encounter. Turbulence from the movement of ships tends to mix the near-surface layers, thus enhancing the bioluminescence effect. Often, there will be a trail of bioluminescence glow
following a heavy ship for several miles. There are stories of Navy pilots losing their instruments and using the glow to find their way back to their carrier.
Reference 1. J. Losee et al., “Bioluminescence in the Marine Environment,” Ocean Optics VII, Vol. 489, SPIE Press, Bellingham, WA, p. 77, 1984.
WAVE SLOPE From Ref. 1, the mean square surface slope is defined by its variance (σ²) and can be approximated by

σ² = 0.003 + 5.12 × 10⁻³ W

where W = wind speed (m/sec)
Discussion The apparent surface reflectance of the ocean depends on the combined effects of material reflectance and the range of slopes of the surface. The rule provides a measure of the statistical properties of the waves induced by wind. Ocean reflectance is a critical factor in the performance of a number of remote sensing systems. Knowing the effective reflectance allows the system designer to better estimate the contribution that ocean reflectance will make to the radiation reaching the sensor. Reference 2 shows that surface wind speeds over the globe range up to about 10 m/sec except in storms, where it can be much higher. A truly flat surface exhibits a mix of specular and diffuse reflectance, with the latter resulting from the subsurface scattering that occurs. In the presence of wind, the surface takes on a new character and exhibits glint. Reference 1 also gives

σ² = (ln W + 1.2) × 10⁻²   for a wind speed, W, below 7 m/sec

σ² = 0.85 ln(W – 1.45) × 10⁻¹   for higher wind speeds
Using these approximations, the time-averaged radiance of the ocean can be estimated using methods that are defined in Ref. 2 but too complex to be included here.
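The slope-variance fit is a one-line function, which makes it easy to see how quickly glint grows with wind. The Python sketch below is only an illustration and evaluates the rule's main expression for a few arbitrary wind speeds.

```python
import math

def mean_square_slope(wind_mps):
    """Mean square surface slope (variance) versus wind speed, from the rule above."""
    return 0.003 + 5.12e-3 * wind_mps

for w in (2.0, 5.0, 10.0):
    var = mean_square_slope(w)
    print(f"wind {w:4.1f} m/s: variance {var:.4f}, rms slope {math.sqrt(var):.3f} rad")
```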
References 1. D. Fraedrich, “Spatial and Temporal Infrared Radiance Distributions of Solar Sea Glint,” Ocean Optics IX, Vol. 925, SPIE Press, Bellingham, WA, p. 392, 1988. 2. J. R. Apel, Principles of Ocean Physics, Orlando, FL, Academic Press, p. 201, 1987.
Chapter 13
Optics
Optics tends to be a discipline whose state of the art is advanced by the needs of users. Generally, developments in optics seem to have been linked to specific engineering applications. Optics of antiquity, until about 1700, existed mainly as an aid to vision. From about 1600 to World War II, the main impetus for optical development was to develop better instruments for astronomy, window glass, and other industrial applications. Navigation and vision aid also played a key role in this era, but most major developments were somehow geared to astronomy (e.g., the Foucault knife edge test, interferometers, new telescopes, and so on). Military needs dominated optics development from World War II to the 1990s. In the 1990s, the military faded as the driving force, to be replaced by communication and computing. It can be estimated that, in a matter of decades (say, 2020), the emphasis will again shift. It is conjecture to predict what the driving force will be, but it might be something like bionics or robotics. The science of optics began thousands of years ago. Archeological findings involving the Phoenicians suggest that powered lenses were made over 3000 years ago. Clearly, all of the ancient cultures studied light and its interaction with matter. Aristotle, Plato, Ptolemy, Euclid, Pythagoras, and Democritus all wrote extensively about vision and optics. Seneca (4 B.C. to A.D. 45) was the first to write about observing light divided into colors by a prism. To these early investigators, the world was full of rules of thumb and principles explained by the thought process alone. Sometimes this resulted in poorly made or poorly understood observations. Among the incorrect theories was that vision resulted from “ocular beams” emitted from the eye. This theory was finally rejected by al-Kindi (A.D. 801–873) and al-Haitham (A.D. 965–1039). One of the earliest paintings of a person wearing glasses is attributed to Crivelli’s painting of Hugues de St. Cher in 1352; however, spectacles and lenses were known to glassmakers for several centuries prior to this period. No one knows who invented spectacles. Likewise, much controversy surrounds the inventor of the first telescope, although it probably occurred around 1600 by Lippershey (who applied for a patent in 1608), Adriaanzoon, Jansen, or someone else. Refractive telescopes were already being sold as toys and navigation aids when Galileo and others turned them to the heavens for astronomy. Prophetically, Galileo remarked that the science of astronomy would improve with further observations from better telescopes. The microscope was invented about the same time, with almost as much controversy.
Theory followed inventions and the new observations that they provided. A few decades later, in the mid-1600s, Snell, Descartes, and Huygens worked on the law of refraction, which became known as Snell’s law (or Descartes’ law). The publication of Isaac Newton’s Opticks in 1704 was a milestone in the science and engineering of optics and started the argument about whether light was a stream of particles (corpuscles) or a wave. We now know that it behaves as both, although Newton’s concept of a light particle and the more familiar photon of today are quite different. A century later, Thomas Young developed his double-slit experiment indicating light was an “undulation of an elastic medium.” A little more than a century later, Einstein won his Nobel prize for analyzing the photoelectric effect that demonstrated the particle nature of light. The seventeenth century also saw the development of the reflective telescope promoted by Marin Mersenne. In the mid-1600s, James Gregory developed the Gregorian configuration, Guillaume Cassegrain the Cassegrain telescope concept, and it is generally accepted that Isaac Newton built the first useful reflective telescope in 1668. Incidentally, when referring to the Cassegrain design, Newton provided a glimpse into his jealousy by boldly stating that “the advantages of this device are none.” Today, the Cassegrain concept is widely used. Then, in 1800, William Herschel (1738–1822) reported the discovery of light beyond the visual spectrum and attempted to determine the radiant power as a function of wavelength. Shortly afterward, Fresnel, Arago, and Fraunhofer developed the diffraction theory, which added greatly to the discipline of optics and provided the basis for many of the following rules. The late 1800s were marked by Maxwell, Kirchhoff, and Michelson putting to rest the theory of the ether. The mathematical tools that enabled these developments were also created during the 1800s. This past century saw the development of manufacturing technology, including John Strong’s advancements in reflective coatings, the invention of the laser, and development of holography. The latter half of the 1900s was the age of the application of optical sciences to other fields such as electronics (photolithography, which makes the integrated circuit possible), medicine, nondestructive testing, spectroscopy (for chemical analysis), and a great deal more. The reader who is interested in more detail about optics has a surfeit of available texts. The authors would suggest Hecht’s Optics or Smith’s Modern Optical Engineering as a first read. For more detailed analytic discussions, one should seek Fowles’ Introduction to Modern Optics and the venerable Born and Wolf’s Principles of Optics. Additionally, there are many series and handbooks available on this diverse subject. For an understandable discussion of the quantum electrodynamic fundamentals of optics (e.g., to learn what really occurs in reflection, refraction, and the like), one should read Feynman’s QED. The academic journals that typically specialize in this discipline include Applied Optics, the Journal of the Optical Society of America, and Optical Engineering. For late-breaking news of the technology, one should consult Laser Focus, Photonics Spectra, Physics Today, and Sky and Telescope. Several professional organizations frequently hold seminars (and publish their associated proceedings), including SPIE, the Military Sensor Symposium, and AIP/OSA. There are several optics “users groups” on the Internet.
Also, do not forget the optics catalogs and users guides available from manufacturers. Several have excellent engineering discussions of pertinent principles and available technologies.
ABERRATION DEGRADING THE BLUR SPOT
The angular diameter of the blur spot (Db) as a function of a given aberration can be estimated by the following:
1. Spherical aberration: Db = (Longitudinal spherical aberration)(U/2f) = (Transverse spherical aberration)/2f
2. Coma: Db = Coma/f
3. Astigmatism: Db = (1/2)(Astigmatic focus difference)(2U/f)
4. Field curvature: Db = (Defocus)(2U/f)
where
U = slope angle between the marginal ray and the axis at the image
f = effective system focal length
Discussion The angular diameter (in radians) of the best focused spot may be estimated by the above equations. The angular blur, Db, is the angle subtended from the second nodal point of the system. The above may be very useful for determining if an aberration is the dominant resolution limiter and for determining aberration specifications or estimating the effect of a given aberration. There is usually good agreement between these estimates and measured values, but these are estimates only. Therefore, they must be viewed with caution for large apertures or fields. These equations also are not valid if there is a mixture of aberrations present. This is based on third-order aberration theory. When several aberrations seem to cause close to (within 50 percent) the same blur spot diameter, then a conservative estimate of the entire blur spot can be made by summing the blurs from individual aberrations, or they may be root-summed-squared. Reference 1 illustrates how the longitudinal spherical aberration of typical lenses is computed. Longitudinal spherical aberration is defined by the distance along the optical axis over which the best focus has been achieved. Similarly, the transverse spherical aberration is smallest at the point on the optical axis where the blur spot is smallest. The blur at this point is also called the circle of least confusion.
References 1. http://www.mellesgriot.com/products/optics/fo_4_2.htm, 2003. 2. W. Smith, “Optical Elements, Lenses and Mirrors,” The Infrared Handbook, W. Wolfe and G. Zissis, Eds., ERIM, Ann Arbor, MI, pp. 9–3, 1978. 3. P. Bely, Ed., The Design and Construction of Large Optical Telescopes, Springer, New York, p. 111, 2003.
ABERRATION SCALING The effect of aberrations can be scaled by field of view (FOV) and f/# raised to the power of 2 or 3.1
Discussion Aberrations depend on field angle and aperture size, so they can be roughly scaled by FOV and f/#. When someone states that the change in a field of view or f/# can be accommodated easily, beware. Those changes will increase aberrations on axis, off axis, or both. The acceptability of the increased aberrations at least needs to be assessed. This rule is related to others in this chapter. This rule can be stated exactly as follows:2
Spherical: (1/(f/#))³
Coma: (1/(f/#))² FOV
Astigmatism and field curvature: (1/(f/#)) FOV²
Distortion: FOV³
The above applies only to third-order aberrations. For residuals and higher-order aberrations, the exponents are larger.
References 1. Private communications with Tom Roberts, 1995. 2. P. Bely, Ed., The Design and Construction of Large Optical Telescopes, Springer, New York, p. 111, 2003.
ACOUSTO-OPTIC TUNABLE FILTER BANDPASS The bandpass of an acousto-optic tunable filter (AOTF) (in micrometers) can be approximated by
∆λ ≅ 0.8λ²/(L∆n)
where ∆λ = filter bandpass in µm
λ = center wavelength in µm
L = transducer width in the same units as λ
∆n = crystal’s birefringence at wavelength λ
The wavelength passed is
λ = ∆nV/f
where V = acoustic velocity in the crystal
f = applied acoustic pressure frequency
Discussion AOTFs have been used since the 1980s to tune the wavelengths for imaging and telecommunication applications, as they can be spectrally very narrow and have very fast response times. These filters are exotic crystals with an index of refraction that is a strong function of strain. Thus, a substantial change in the index of refraction can occur for a very small strain. An induced-pressure standing wave can cause a periodic density variation resulting in a “Bragg” effect, which simulates a grating. A piezo-actuator can be used to pump an acoustic standing wave (at an RF frequency) into the crystal, causing rapid and accurate transmission wavelength selection. Because these crystals are birefringent, care should be taken to ensure proper polarization coupling. Typically, these filters can be expected to produce a bandwidth less than 1 nm, a microsecond response, and a tuning range of hundreds of nanometers. Note that, at a wavelength of 1.55 µm, bandwidth is approximated by 2/L∆n
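For readers who want to exercise the rule numerically, the short Python sketch below evaluates the bandpass and the tuned wavelength; the transducer width, birefringence, acoustic velocity, and drive frequency used are illustrative assumptions, not values from the reference.
    # Illustrative sketch of the AOTF rules above (sample values are assumptions)
    def aotf_bandpass_um(center_wavelength_um, transducer_width_um, birefringence):
        # delta_lambda ~ 0.8 * lambda^2 / (L * delta_n), with everything in micrometers
        return 0.8 * center_wavelength_um**2 / (transducer_width_um * birefringence)

    def aotf_tuned_wavelength_um(birefringence, acoustic_velocity_um_per_s, acoustic_freq_hz):
        # lambda = delta_n * V / f
        return birefringence * acoustic_velocity_um_per_s / acoustic_freq_hz

    # Assumed TeO2-like example: delta_n ~ 0.15 and a 1-cm (10,000-um) transducer
    print(aotf_bandpass_um(1.55, 10_000.0, 0.15))   # ~0.0013 um, i.e., roughly 1 nm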
Reference 1. S. Kartalopoulos, Introduction to DWDM Technology, IEEE Press, Piscataway, NJ, p. 82, 2000.
BLUR VS. FIELD-DEPENDENT ABERRATIONS The following equations approximate the expected contributions to the blur diameter associated with field-dependent aberrations for various two-mirror telescope designs. The blur contribution from aberrations is in radians, and the off-axis angle θ is in degrees. 1. Dall–Kirkham at the plane of paraxial focus has a blur diameter (in radians) of ≈0.001θ. 2. Cassegrain at the plane of paraxial focus has a blur diameter (in radians) of ≈0.00062θ. 3. Cassegrain with curved focal surface at best focus has a blur diameter (in radians) of ≈0.00035θ. 4. Ritchey–Chretien at plane of paraxial focus has a blur diameter (in radians) of ≈0.0002θ.
Discussion Several on-axis telescope designs are named after their inventors. The difference is the aspheric curvature of the surface of the mirrors. A Dall–Kirkham has an aspheric primary and a spherical secondary; a Cassegrain has a parabolic primary and a hyperboloid secondary; a Ritchey–Chretien has a hyperboloid primary and a hyperboloid secondary. The analysis above assumes an f/3 optic with primary-to-secondary axial spacing of 33 percent of the primary radius and a back focal distance of 1.05 times the primary-to-secondary spacing. The real performance will depend on the optical design and some nonlinear features that are not represented in the equations. The above contribution to blur diameter represents the extent of a point object resulting from the effects off-axis optical aberration only, not diffraction effects. Diffraction effects may dominate and should be treated as explained in other rules. Astigmatism and field curvature often vary as the square of the field, which will cast doubt on the above approximations.
Reference 1. NASA-Goddard Space Flight Center, Advanced Scanners and Imaging Systems for Earth Observations, U.S. Government Printing Office, Washington, DC, pp. 102, 150, 1973.
CIRCULAR VARIABLE FILTERS Circular variable filters (CVFs) can be an important technology for providing continuously variable spectral selection. Reference 1 provides insight into how the bandwidth of the filter depends on system parameters. The transmission at a particular wavelength (λ) has a Gaussian form,
τF,m(λ) = [Am/(√(2π) σm)] exp[–(1/2)((λ – λm)/σm)²]
where the filter linewidth is characterized by the standard deviation σm and the transmission amplitude Am. The central wavelength (λm) may be expressed as a function of the disk rotation angle in degrees, and the remaining two parameters may be expressed as a function of the rotation angle or the central wavelength.
Discussion The reference provides particular data for the range of wavelengths from 2.5 to 12 µm. The parametrizations for selected bands are as follows:
■ 2.5 to 4.5 µm: λm = 1.7994 + 0.0245θm, σm = 0.00487 + 0.0043λm – 0.000299λm², Am = 0.0786 – 0.0828λm + 0.0303λm² – 0.00338λm³
■ 5 to 8 µm: λm = –1.7092 + 0.042937θm, σm = 0.0039392 + 0.0050875λm, Am = 10.131 – 5.3954λm + 1.1537λm² – 0.070073λm³
■ 8 to 12 µm: λm = –11.978 + 0.080114θm, σm = –0.021624 + 0.00034057θm, Am = 268.15 – 3.22θm + 0.012974θm² – 0.00001696θm³
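As a quick check of the parametrization, the Python sketch below evaluates the Gaussian filter transmission for the 2.5 to 4.5 µm segment using the coefficients listed above; the rotation angle and probe wavelength are assumed example values only.
    import math

    # Sketch of the CVF transmission model for the 2.5-4.5 um segment (coefficients from the text)
    def cvf_transmission(lam_um, theta_deg):
        lam_m = 1.7994 + 0.0245 * theta_deg                        # central wavelength (um)
        sigma = 0.00487 + 0.0043*lam_m - 0.000299*lam_m**2         # linewidth (um)
        A = 0.0786 - 0.0828*lam_m + 0.0303*lam_m**2 - 0.00338*lam_m**3
        return (A / (math.sqrt(2.0*math.pi)*sigma)) * math.exp(-0.5*((lam_um - lam_m)/sigma)**2)

    theta = 49.0                         # rotation that centers the filter near 3.0 um
    print(cvf_transmission(3.0, theta))  # peak transmission, roughly 0.3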
Reference 1. M. Mermelstein, K. Snail, and R. Priest, “Spectral and Radiometric Calibration of Midwave and Longwave Infrared Cameras,” Optical Engineering, 39(2), pp. 347–352, February 2000.
DEFOCUS FOR A TELESCOPE FOCUSED AT INFINITY An optical system focused at infinity will experience a defocus when the object that it is attempting to image is not at infinity. If the object is at a finite range, the angular blur of a telescope focused for infinity is β = D/R where β = resulting angular blur caused by misfocus from the object being closer than infinity D = clear aperture (aperture diameter) R = distance of object from the sensor’s aperture
Discussion This relationship is based on Newton’s equation and geometrical optics. Smith1 points out that the depth of field for a system with a clear circular aperture can be written as
r/[β(R ± r)] = R/D
where r = the distance of the object from the point of focus in object space (in other words, how far it is from the location where the system is focused)
Solving, we get
r = R²β/(D ± Rβ)
For the image side, the above relationship can be reduced to
r = F²β/D = Fβ(f/#)
where F = focal length
f/# = effective f/# of the system (see related rule on p. 250)
For the hyperfocal distance, (R + r) is infinity, and β is equal to D/R. Added angular blur from defocus seriously degrades system performance when it exceeds the in-focus angular blurring from other aberrations and detector instantaneous field of view. The above relationship allows one to estimate the amount of defocus from a system (focused at infinity) attempting to image an object not at infinity. Note that the depth of field toward the optical system is smaller than that away from the system. The above also defines the hyperfocal distance of a system, which is the distance at which the system must be focused so that it remains in focus from that distance to infinity.
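The short Python sketch below applies the depth-of-field relations; the aperture, focus distance, and allowed blur are assumed example values and show that the near-side depth of field is indeed smaller than the far-side depth.
    # Sketch of the depth-of-field relations above; aperture, range, and blur are assumed values.
    D = 0.05          # clear aperture, m
    R = 100.0         # focus distance, m
    beta = 50e-6      # allowed angular blur, rad

    r_far  = R**2 * beta / (D - R*beta)   # depth of field away from the sensor
    r_near = R**2 * beta / (D + R*beta)   # depth of field toward the sensor
    hyperfocal = D / beta                 # focus here and everything beyond stays acceptably sharp

    print(round(r_near, 1), round(r_far, 1), round(hyperfocal, 1))   # ~9.1 m, ~11.1 m, 1000 m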
References 1. W. Smith, Modern Optical Engineering, 3rd ed., McGraw Hill, New York, pp. 155–156, 2000.
DIFFRACTION IS PROPORTIONAL TO PERIMETER Clark et al.1 have shown that the far-field diffraction of a uniformly illuminated aperture of arbitrary cross section is proportional to the perimeter of the aperture.
Discussion A rather involved calculation using diffraction theory was employed to generate this result. This rule is only an approximation but works well enough for many applications. In addition, it applies only in the far field and only in the region near the beam axis. Specifically, the rule states that the half beam angle for which one-half of the power is included is
pλ/(π²A)
where p = perimeter of the aperture
λ = wavelength
A = aperture area
Generally, it is quite difficult to compute the far-field radiation pattern from an arbitrary aperture shape. This handy rule simplifies the process and provides answers of adequate accuracy for most purposes. For example, it can be used to show the incremental effect of additional secondary mirror struts or other structural elements that might be inserted into the aperture.
Reference 1. P. Clark, et al. “Asymptotic Approximation to the Encircled Energy Function for Arbitrary Aperture Shapes,” Applied Optics, 23(2), p. 353, January 15, 1984.
DIFFRACTION PRINCIPLES DERIVED FROM THE UNCERTAINTY PRINCIPLE Diffraction is related to the momentum uncertainty of the position and energy of a photon, and the basic relationship for diffraction can be easily calculated from the uncertainty principle applied to a photon.
Discussion Consider Fig. 13.1, which shows an opaque aperture with a hole through which light passes. We can analyze this situation with Heisenberg’s uncertainty principle. That is, ∆p∆d ≈ h where ∆p = uncertainty in momentum ∆d = uncertainty of the position of the photon in the y direction For this argument, we will say that the uncertainty of the location is never smaller than ∆d. Recall from freshman physics that the momentum of a photon is
FIGURE 13.1 Light passes through hole in opaque aperture.
p = h/λ
where λ = wavelength
h = Planck’s constant
p = momentum
For a photon to strike the screen at other than the origin means that it has some component of momentum in the y direction. Said another way, uncertainty of the photon momentum in the y direction is pθ. Rewriting Heisenberg’s uncertainty rule, we find that the product of the position and momentum uncertainty in the y direction is
∆d ∆py = d (h/λ) θ = h
which can, of course, be simplified to θ = λ/d, the familiar equation of diffraction of a rectangular opening. Unfortunately, this does not provide the numerical constant (e.g., 2.44 for the diameter of the first Airy disk for a circular aperture). However, that constant depends on the two-
dimensional shape of the aperture (and such considerations are not addressed by the above). This rule ties classic optics to quantum mechanics and underscores the importance of diffraction, and the foolishness of people who think they can beat it. Also, this provides a great way to terrify graduate students and job interviewees—ask them to derive the basic diffraction law from the uncertainty principle. If they can, then they either have great insight into the relationship of various forms of nature, were accosted by this problem before, or have a copy of this book. In any case, such a candidate would be worthy of the doctorate or job.
References 1. Private communication with Dr. J. Richard Kerr, 1995.
f/# FOR CIRCULAR OBSCURED APERTURES The standard definition of f number is generally inappropriate when an obscuration is present. In such cases, use the effective f number,
f/#effective = [(effective focal length of the overall system)/(diameter of primary mirror)] × 1/[1 – (Do/Dp)²]^1/2
where Do = effective diameter of the obscuration
Dp = diameter of the primary mirror or other defining entrance aperture
Discussion This rule can be derived from the following argument: The effective f/# is equal to the ratio of the effective focal length and the effective aperture diameter,
f/#e = fe/De = fe/[(4/π)Ae]^1/2 = fe/(Dp² – Do²)^1/2
which is equivalent to the equation in the rule. The subscript e refers to the effective value of each parameter, p refers to the primary mirror, and o refers to the obscuration. Ae is the effective area of the entrance aperture. This relationship is useful for determining the f/# or effective focal length that should be used and for estimating the impact of a central obscuration. If there is no central obscuration, this reduces to the classic f/Dp. The effect of a central obscuration is to reduce the energy of the Airy disk and increase its angular extent. A central obscuration transfers energy from the Airy disk to the rings, making them more powerful. However, the presence of the central obscuration decreases the size of the central bright spot, an advantage in certain kinds of imaging systems. There is a special-case simplification of the above rule that applies to many telescopes (especially for visible and IR astronomy). If the diameter of the central obscuration is small as compared to the diameter of the aperture, then
1/(1 – ε²)^1/2 ≈ 1 + ε²/2
where ε = obscuration diameter divided by the aperture diameter
In this case, the following equation applies:
f/#effective = [(effective focal length of the overall system)/(diameter of primary mirror)] (1 + ε²/2)
A commonly used figure of merit is the Strehl ratio. Reference 2 shows that
Strehl ratio ≡ S = [(Aannulus – Aspiders)/Aannulus]²
where Aannulus = clear area of the annular (obscured) aperture
Aspiders = total area of all spiders
If N struts have a width defined as a fraction δ of the large aperture diameter D, then
Aspiders = NδD²(1 – ε)/2
Aannulus = πD²(1 – ε²)/4
Therefore, finally,
S = [1 – 2Nδ/(π(1 + ε))]²
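As a quick illustration, the Python sketch below evaluates the effective f/# for an obscured aperture and the spider Strehl estimate; the telescope geometry used is an assumed example, not a design from the references.
    import math

    # Sketch of the effective f/# and spider Strehl estimates above (assumed example geometry)
    def effective_f_number(focal_length, D_primary, D_obscuration):
        return (focal_length / D_primary) / math.sqrt(1.0 - (D_obscuration / D_primary)**2)

    def spider_strehl(N_struts, strut_width_fraction, eps):
        # S = [1 - 2*N*delta / (pi*(1 + eps))]^2
        return (1.0 - 2.0*N_struts*strut_width_fraction / (math.pi*(1.0 + eps)))**2

    print(effective_f_number(2.0, 0.5, 0.15))   # ~4.19 instead of the unobscured f/4
    print(spider_strehl(4, 0.01, 0.3))          # ~0.96 for four struts, each 1 percent of D wide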
References 1. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, p. 197, 1969. 2. J. Harvey and C. Ftaclas, “Diffraction Effects of Telescope Secondary Mirror Spiders on Various Image-Quality Criteria,” Applied Optics, 34(28), pp. 6337–6349, October 1, 1995.
FABRY–PEROT ETALONS The transmittance and bandwidth of a Fabry–Perot interference filter can be represented by
Peak transmittance = 1/(1 + A/T)²
and
Bandwidth ≈ (1 – R)/(πmR^0.5)
where A = absorptance of the surfaces
R = reflectance of the surfaces
T = transmittance of the surfaces m = order of interference of a semi-transparent aluminum film
Discussion This rule is useful for explaining and understanding the effects of layer absorptance and reflectance in filter design for cases in which the finesse (F) of the filter is low. Finesse is defined as the separation between the maxima divided by the half-width of each peak. An etalon usually consists of a pair of semi-transparent thin films separated by a thin layer of dielectric. Transmission can be increased by thinner deposition of the metal films; however, the bandwidth tends to become greater. The reader may find that the first of the above equations is more effectively expressed as the exact equation
Peak transmittance = [1 – A/(1 – R)]²
This equation becomes the one shown in the rule when one notes that T + A + R = 1. The reader is cautioned that high-finesse Fabry–Perot etalons may be limited by mirror flatness. Finesse is inversely proportional to wavefront error. This results from the fact that multiple reflections are involved in the operation of this type of filter, meaning that each pass detects and responds to the defects in the surfaces involved. That is, the filter has an inherent (or ideal) finesse defined by the reflectivity of its surfaces as shown below:
F = πR^1/2/(1 – R)
Inclusion of the effects of surface defects is usually modeled in the following way. An effective finesse is defined as
1/Fe² = 1/Fr² + 1/Fd²
where Fr = finesse related to reflectivity
Fd = finesse resulting from defects in the optical surfaces
The total effect of these terms is as follows:1
Fd = λ/(4δs² + 22δrms² + 3δp²)^1/2
where δs = spherical deviation from flatness
δrms = surface roughness of the plates
δp = a measure of the nonparallel property of the two plates that make up the etalon
Note that the defect-limited finesse scales linearly with wavelength.
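A short Python sketch of these relations is given below; the absorptance, reflectance, and interference order are assumed example values, and the standard reflectivity finesse expression is used.
    import math

    # Sketch of the Fabry-Perot estimates above (assumed film properties)
    A, R = 0.02, 0.90                  # absorptance and reflectance of each surface
    T = 1.0 - A - R                    # transmittance, from T + A + R = 1
    m = 2                              # interference order

    peak_transmittance = (1.0 - A/(1.0 - R))**2                 # exact form given above
    bandwidth_fraction = (1.0 - R)/(math.pi*m*math.sqrt(R))     # (1 - R)/(pi*m*R^0.5)
    reflectivity_finesse = math.pi*math.sqrt(R)/(1.0 - R)

    print(round(peak_transmittance, 3), round(bandwidth_fraction, 4), round(reflectivity_finesse, 1))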
Reference 1. P. D. Atherton, N. K. Reay, J. Ring, and T. R. Hicks, “Tunable Fabry–Perot Filters,” Optical Engineering, Vol. 20, pp. 806–814, 1981.
FOCAL LENGTH AND FIELD OF VIEW The instantaneous field of view (IFOV) can be estimated by
IFOV = 1000(pixel size)/(focal length)
where IFOV refers to the instantaneous or pixel field of view in microradians, pixel size is in micrometers, and focal length is in millimeters. Converting the rule to degrees, a system’s FOV can be estimated from
FOV = 57.3(FPA size)/(focal length)
where FOV is in degrees, and the FPA size and focal length are in millimeters. In this case, we have used the size of the entire FPA, resulting in an estimate of the entire field of view.
Discussion These rules derive from the basic fact that a field of view depends on the size of the image plane divided by the focal length.
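The two estimates are easy to automate; the Python sketch below does so for an assumed 25-µm pixel, 640-pixel-wide (16-mm) array behind a 100-mm focal length, which are example values only.
    # Sketch of the focal-length/field-of-view rules (assumed detector geometry)
    def ifov_microradians(pixel_size_um, focal_length_mm):
        return 1000.0 * pixel_size_um / focal_length_mm

    def fov_degrees(fpa_size_mm, focal_length_mm):
        return 57.3 * fpa_size_mm / focal_length_mm

    print(ifov_microradians(25.0, 100.0))   # 250 urad per pixel
    print(fov_degrees(16.0, 100.0))         # about 9.2 degrees full field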
GRATING BLOCKERS If you keep the spectral range to less than 90 percent of an octave, you don’t need spectral blockers.
Discussion This rule applies to diffraction gratings but generally not echelles, as they operate in higher multiple orders. Generally, a grating will diffract light of a wavelength λ at the same angle as light of the wavelengths λ/2, λ/3, and so forth. However, there is always some leakage and uncertainty, so it is wise to reduce this by another 10 percent; hence, the authors suggest the above rule. For diffraction gratings, the well-known grating equation,1
mλ = d(sin α + sin β)
is satisfied whenever m is an integer. Terms α and β are the incident and diffracted angles from normal, respectively. Unfortunately, this means that light of multiple colors (where m is an integer) will overlap and fall on the same angle. This can be eliminated by pesky and costly order blocking filters or by adhering to the above rule. The range of wavelength (Fλ) for which this superposition of other orders does not occur is called the free spectral range and can be calculated1 by assuming that, in the order m, the wavelength that diffracts along the direction of λ1 in order m + 1 is λ1 + ∆λ, where
λ1 + ∆λ = [(m + 1)/m]λ1
and then applying the 90 percent suggestion,
Fλ = 0.9∆λ = 0.9λ1/m
For example, if you need to split a spectrum, you can go from about 2 to 1.1 µm without a blocker, or 1.5 to 0.82 µm.
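The usable first-order range is trivial to compute; the Python sketch below reproduces the 1.1-to-roughly-2-µm example above (the starting wavelength is the only input and is an assumed value).
    # Sketch of the free-spectral-range estimate with the 90 percent margin
    def usable_range_um(lambda1_um, m=1):
        return 0.9 * lambda1_um / m          # F_lambda = 0.9 * delta_lambda = 0.9 * lambda1 / m

    lambda1 = 1.1
    print(lambda1, "to", round(lambda1 + usable_range_um(lambda1), 2), "um without an order blocker")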
References 1. C. Palmer, Diffraction Grating Handbook, Richardson Grating Laboratory, Inc., Rochester, New York, pp. 14–17, 2000. 2. www.thermorgl.com/library/handbook4/chapter2.asp, 2002.
GRATING EFFICIENCY AS A FUNCTION OF WAVELENGTH Fifty percent efficiency is obtained from approximately 0.7λb to 1.8 λb, where λb is the grating’s peak or (blaze) wavelength.
Discussion The peak wavelength or blaze wavelength of a grating is the wavelength at which the diffraction energy efficiency is at a maximum. The efficiency typically approaches 98 percent or so (the same as a mirror) for a narrow wavelength band at its most efficient. The efficiency falls off more quickly with shorter wavelengths than longer wavelengths, so the λb should not be in the center of the bandpass. The blaze angle is the angle of the cut on the grating and generally varies from a few degrees to about 50° or 60°. For very low blaze angles (e.g., <5°), polarization effects frequently can be ignored. For any blaze angle above 5°, polarization effects can make the efficiency curve complicated and must be carefully considered. The wavelength efficiency is a slight function of the blaze angle. For instance, Ref. 1 points out that, for blaze angles of less than 5°, the 50 percent efficiency range is from 0.67λb to 1.8λb whereas, for blaze angles of 22° < θ < 38°, the 50 percent point is from 0.7λb to 2λb. This rule assumes the Littrow configuration. In this configuration, light imposed on the grating encounters it nearly perpendicular to the grating surface. In addition, Ref. 2 shows example grating efficiency curves.
References 1. C. Palmer, Diffraction Grating Handbook, Richardson Grating Laboratory, Inc., Rochester, New York, pp. 85–93, 2000. 2. E. G. Loewen et al., “Grating Efficiency Theory as Applied to Blazed and Holographic Gratings,” Applied Optics, 1 (10), pp. 2711–2721, October 1977. 3. http://www.wpiinc.com/WPI_Web/Spectroscopy/Gratings.html, 2003.
HOLLOW WAVEGUIDES Minimum loss in a hollow circular waveguide with a dielectric coating on metal occurs when the dielectric coating has a thickness1–3
[λ/(2π(nd² – 1)^1/2)] tan⁻¹[nd/(nd² – 1)^1/4]
where nd = index of refraction of the dielectric
Attenuation in a straight guide depends on the modes being propagated according to2,4
α = (Uo/2π)² (λ²/a³) [n/(n² + k²)] f
where
f = (1/2)[1 + nd²/(nd² – 1)^1/2]
and the following definitions apply:
Uo = mode parameter, which is 2.405 for HE11, the fundamental mode of the guide (the value of 2.405 is the first root of the zeroth-order Bessel function J0, which describes this mode; for other modes, the value of Uo is computed from the appropriate Bessel functions; HE11 is usually considered because it has the lowest attenuation of all modes)
λ = wavelength
n, k = real and imaginary components of the index of refraction of the metallic layer in the guide
nd = index of refraction of the dielectric coating
a = bore radius
Discussion Hollow glass waveguides coated with metal films and dielectrics can provide an effective method of transporting light and can be used to convey radiation from CO2 lasers. Most often, the interior of the guide is coated with a metallic surface such as silver and overcoated with dielectric. Reference 2 describes examples in which the metal is silver and is overcoated with silver iodide. Such a system provides good transmission for wavelengths above 2 µm. These components have advantages over conventional clad fibers. First, they can handle high power and exhibit no end reflections. Flexible versions can be used to perform remote imaging functions for infrared wavelengths. Bore sizes from 50 to 320 µm have been reported.2 They exhibit lower attenuation at larger bore sizes. When Gaussian beams are injected into a hollow guide, only HE1m modes are created. This technology has some disadvantages as well. Reference 2 points out that, for hollow waveguides with small bores, attenuation can be quite high. To manage this attenuation, more complex dielectric coatings can be put on the metal layer that lines the tube. Alternating layers of high- and low-index dielectrics have been modeled as shown below. Commonly, AgI and Ge are used as the dielectric films, because there is a substantial difference in their indices (about 1.9). The equation in the rule can be used with the more complex values of f that apply to multilayer coatings. First is the case of an odd number of layers. Note that if m = 1 in the following equation, we get the result shown in the rule.
f = (1/2)C^p[1 + (n1²/(n1² – 1)^1/2)(n1/n2)^2p C^–p]
C = (n1² – 1)/(n2² – 1)
m = 2p + 1
where n1, n2 = refractive indices of dielectric layers
p = number of pairs of high/low dielectric layers
C = ratio of the refractive indices in the two layers
For an even number of layers (m = 2p + 2), an analogous expression for f applies with the same C; see Ref. 2 for its full form.
References 1. R. Nubling and J. Harrington, “Launch Conditions and Mode Coupling in Hollow Glass Waveguides,” Optical Engineering, 37(9), p. 2454, September 1998. 2. V. Gopal and J. Harrington, “Coherent IR Bundles Made Using Hollow Glass Waveguides,” Fiber Optic Sensor Technology II, Proc. SPIE, Vol. 4204, pp. 216–223, 2001. 3. M. Miyagi and S. Kawakami, “Design Theory of Dielectric-Coated Circular Metallic Waveguides for Infrared Transmission,” Journal of Lightwave Technology, LT-2, pp. 116–126, 1984. 4. M. Mohebbi et al., “Silver-Coated Hollow-Glass Waveguide for Application at 800 nm,” Applied Optics, 41(33), pp. 7031–7035, November 20, 2002.
HYPERFOCAL DISTANCE The hyperfocal distance (in meters) can be approximated as
Hyperfocal distance = 900D/δ
where D = aperture in meters
δ = IFOV in milliradians
Discussion The hyperfocal distance is the range (and beyond) at which items are in focus. If the target is beyond the hyperfocal distance, the focus can be fixed at infinity. Targets closer than this distance will require active focusing. The equation assumes that the permitted defocus results in a spot that is 10 percent larger than the focused pixel field of view, and it is adjusted for units of milliradians in the IFOV and meters for the aperture and range. Generally, a good optics design will result in a reasonably symmetric and gradual blurring of the spot as it is defocused. Generally, a spot 10 percent larger than a pixel has minimal image effect. Note that, for low f/# or infrared systems, the spot often is much smaller than a pixel, so substantial defocus blur can occur with minimal degradation to image quality. The hyperfocal distance can be rewritten as
fD/b
where f = focal length
D = aperture diameter
b = permitted blur spot size
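The rule is one line of arithmetic; the Python sketch below evaluates it for an assumed 10-cm aperture and 0.5-mrad IFOV, which are example values only.
    # Sketch of the hyperfocal-distance rule (assumed aperture and IFOV)
    def hyperfocal_m(aperture_m, ifov_mrad):
        return 900.0 * aperture_m / ifov_mrad

    print(hyperfocal_m(0.1, 0.5))    # ~180 m; beyond this, focus can be fixed at infinity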
THE LAW OF REFLECTANCE When light encounters a smooth, specular reflective surface, it is reflected at an angle equal to its incident angle from the normal to the surface (see Fig. 13.2). The incident ray, the reflected ray, and the surface normal are in the same plane.
FIGURE 13.2 Light reflected from smooth, specular reflective surface.
Discussion When a light bundle encounters a reflective surface, most of the reflected light is reflected at the same angle from the normal as is the incident ray, but directly opposite to it. This rule is based on Fermat’s principle, simple geometric optics, and empirical observations. This is valid throughout the electromagnetic spectrum. The rule does assume that the surface is large as compared to the wavelength of light and is valid for highly polished reflective surfaces that are smooth as compared to the wavelength (specular). This forms the basis for all reflective ray tracing and reflective optical design.
LIMIT ON FOV FOR REFLECTIVE TELESCOPES An all-reflective (mirror) telescope can have a field of view of no more than about 10 (or maybe 20) square degrees (°°) before heroic measures are required to make it work, such as curved focal planes, field correction elements, extra mirrors, and super-expensive compound-curvature mirrors.
Discussion This is based on the limitations imposed by geometrical optics on fields of view coupled with current optical manufacturing technology. Schmidt telescopes frequently have fields of view approaching or exceeding 20°°. However, they do this with a refractive “corrector plate” as the first optical element and may also employ curved focal planes. This rule is valuable for underscoring the fact that reflective telescopes are narrow-FOV devices, and also as a sanity check on a wide-FOV reflective design. The term square degrees refers to the product of the two angular dimensions of the field of view. Often, you must use all-reflecting optics and tolerate a small field of view. Don’t whine about this. It doesn’t happen just to make your job harder. It’s geometry and physics.
LINEAR APPROXIMATION FOR OPTICAL MODULATION TRANSFER FUNCTION The modulation transfer function (MTF) of an optical system can be estimated by
MTFdiff(f) = 1 – 1.3(f/fc)
where MTFdiff = MTF for a diffraction-limited system
fc = spatial frequency cutoff
f = spatial frequency in question
Discussion A spatial frequency as sampled on a focal plane is a function of f/#. However, the maximum spatial frequency (best resolution) is determined by aperture and wavelength for
diffraction-limited systems (sampling the Airy disk more than a few times is of no practical use). When rigorously calculated, the MTF seems to vary approximately linearly for diffraction-limited systems when the spatial resolution is smaller than the size of the object of interest. The above equation provides excellent correlation for ratios of 0.4 or less. When the object (or line pair) of interest approaches the diffraction limit, the MTF becomes nonlinear and actually better than the above approximation. Beyond a ratio of 0.7, the error in this approximation is large. For slightly better accuracy, substitute 1.27 for the 1.3 in the equation in the rule. In the LWIR, one can assume that the cutoff spatial frequency is 2 cycles/mrad for a 1-in aperture and 5 cycles/mrad in the MWIR, and 40 to 50 cycles per milliradian per aperture inch for a visible-bandpass sensor. The modulation transfer function of a lens will be somewhat less than the perfect case considered above. Nevertheless, when rigorously derived, it is almost linear with spatial frequency and can be approximated by the above relationship—but the cautious would degrade it slightly. Alternatively, in the visible wavelength spectrum, the MTF of a perfect circular lens can be approximated by
MTF ≈ 1 – (f/#)ν/1500 or 1 – ν*/(3A)
where MTF = modulation transfer function
f/# = ratio of focal length to diameter of the lens
ν = spatial frequency in lines per millimeter
ν* = spatial frequency in lines per micron
A = numerical aperture
This rule results from a linear approximation to an actual diffraction-limited MTF and approximation of diffraction theory and curve fitting of MTF calculations. It assumes that the MTF is limited by diffraction from a circular aperture. Sometimes, MTF is limited by components other than the optics (e.g., the detector or focal length). The rule also assumes a single wavelength of operation (but the relation tends to hold pretty well for narrow bandpasses). The equation in the rule is valid for an MTF greater than 0.15 and incoherent illumination.
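To see how well the linear form tracks the exact diffraction MTF of a circular aperture, the Python sketch below compares the two; the cutoff frequency used is an assumed example value.
    import math

    # Sketch comparing the linear MTF approximation to the exact diffraction MTF of a circular aperture
    def mtf_linear(f, fc, slope=1.3):
        return max(0.0, 1.0 - slope * f / fc)

    def mtf_diffraction(f, fc):
        x = min(f / fc, 1.0)
        return (2.0/math.pi) * (math.acos(x) - x*math.sqrt(1.0 - x*x))

    fc = 50.0   # assumed cutoff, cycles/mrad
    for f in (5.0, 10.0, 20.0):
        print(f, round(mtf_linear(f, fc), 3), round(mtf_diffraction(f, fc), 3))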
References 1. G. Waldman and J. Wooton, Electro-Optical Systems Performance Modeling, Artech, Norwood, MA, p. 125, 1993. 2. L. Levi, Applied Optics, Vol. 1, A Guide to Optical System Design, John Wiley & Sons, New York, pp. 487–488, 1968. 3. Private communication with W. M. Bloomquist, 1995.
ANTIREFLECTION COATING INDEX Single-layer antireflection coatings have minimal reflectance when their index of refraction equals the square root of the index of refraction of the substrate and they are onefourth of a wavelength thick.
Discussion Antireflection coatings should be applied to the surface of optics and focal planes. For high index of refraction materials (such as germanium), such coatings can almost double the transmission of light into the material (by minimizing reflection). If the conditions in the above rule are met, the first surface reflection is reduced to a minimum at a given wavelength. The formalism goes like this: Minimum reflectance is obtained when
nf = (nons)^1/2     (1)
where no = index of refraction of external medium
ns = index of refraction of the substrate
nf = index of refraction of the film on the substrate
When the film thickness is λ/4,
R = [(nf² – nons)/(nf² + nons)]²     (2)
Therefore, if the conditions in Eq. (1) are met, R = 0.
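The Python sketch below exercises both equations; the air-over-germanium-like indices and the off-optimum film index are assumed example values, not coating designs from the text.
    import math

    # Sketch of the single-layer quarter-wave antireflection relations (assumed indices)
    def best_film_index(n_outside, n_substrate):
        return math.sqrt(n_outside * n_substrate)

    def quarter_wave_reflectance(n_film, n_outside, n_substrate):
        num = n_film**2 - n_outside*n_substrate
        den = n_film**2 + n_outside*n_substrate
        return (num/den)**2

    n_o, n_s = 1.0, 4.0                               # air over a germanium-like substrate
    print(best_film_index(n_o, n_s))                  # 2.0, the ideal film index
    print(quarter_wave_reflectance(2.2, n_o, n_s))    # ~0.009 residual reflectance if only n = 2.2 is available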
MAXIMUM USEFUL PUPIL DIAMETER The maximum effective pupil diameter (assuming an f/1) is approximately limited to
D ≤ Ds/θ
where D = maximum entrance pupil diameter
Ds = linear size of the detector
θ = angular instantaneous field of view in one direction from the detector pixel (sometimes called the detector angular subtense or DAS)
Discussion Very large-FOV systems have difficulty in effectively filling their aperture with radiation that actually falls upon a FPA detector (for a given field angle). For a field of view larger than about 45°, rarely is the useful radiometric aperture the same as the physical size of the aperture. This rule is useful for estimating the effective aperture, IFOV, or detector size of a system that you don’t know much about (e.g., a competitor’s). This rule is useful for quickly estimating the exit pupil (and radiometric effective aperture). This rule is based on an optical system that has an f/1 speed. This is also based on geometrical optics and an approximation to the Abbe sine condition. The rule assumes that the numerical aperture cannot exceed 1 and an f/1 cone, and the maximum pupil decreases linearly with increasing f/#. Obviously, it is possible (in some cases) to produce a system with an effective f/# less than 1. This rule may not strictly apply to complicated optical systems with aspheres, lenslets, binary optics, and condensers.
Assume that you have a 256-pixel array with a 50-µm detector pitch. It must subtend 45° in each dimension (that is, 45° in X and 45° in Y). Your optics must therefore support a resolution of 45° (in each axis) divided by 256 elements, or 3068 µrad per pixel. If this is the case, then you can assume that your maximum useful aperture is
(5 × 10⁻³ cm)/(3.07 × 10⁻³ rad) ≈ 1.6 cm diameter
The actual dimension of the first optical surface (aperture) is likely to be larger for such a wide-field system, but the entire aperture does not contribute to the energy collection for a given pixel. Again, the example assumes an f/1 system.
MINIMUM f/# The theoretical minimum f/# for an optical element or telescope is 0.5. The practical limit for an imaging system is about 0.7.
Discussion Consider Fig. 13.3. When an optic is made faster than f/# = 0.5, a flat detector cannot respond to the rays, as they don’t strike its active flat front-facing surface.
FIGURE 13.3 Illustration of lost rays resulting from f-numbers of 0.5.
f/# is defined many ways, with one of the most exact being
f/# = 1/[2 sin(tan⁻¹(D/2FL))]
where f/# = ratio of the effective focal length to the effective aperture diameter
D = effective aperture diameter (careful, not always the total aperture)
FL = effective focal length from the principal surface
Often, tan⁻¹(D/2FL) is expressed as the angle of the rays α. From the above, one can substitute some numbers, do the arithmetic, and see that, by definition, the minimum f/# is 0.5 (the sine is never larger than 1, so f/# is always larger than 1/2). Smith points out the following:4 The limit on the relative aperture of a well corrected optical system is that it cannot exceed twice the focal length; that is, f/0.5 is the smallest f/# attainable. In a well corrected system, the Abbe sine condition must hold. The sine condition can be expressed as
Y = f sin u′
The limiting aperture is given by
f/# = f/2Y = f/A = 0.5
If the f/# is substituted, the following theoretical limitation on the relationship of A, D, and α is obtained:
Dmin = αA (theoretical limit)
where D = detector size
Y = aperture radius
f = focal length
u′ = f-cone half angle
A = full clear aperture diameter
α = detector half angle of view
References 1. 2. 3. 4.
Private communications with Dr. J. Richard Kerr, 1995. Private communications with Dr. George Spencer, 1995. Private communications with Max Amom, 1995. W. Smith, “Optical Systems,” in the Handbook of Military Infrared Technology, W. Wolfe, Ed., Office of the Naval Research, Washington DC, pp. 427–429, 1965. 5. G. Holst, Electro-Optical Imaging System Performance, JCD Publishing, Winter Park, FL, pp. 459–460, 1995. 6. Taubkin et al., “Minimum Temperature Difference Detected by the Thermal Radiation of Objects,” Infrared Physics and Technology, 35(5), p. 718, 1994. 7. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, p. 180, 1969.
OPTICAL COST Optical costs are proportional to the optic’s diameter raised to a power, divided by the wavelength of operation of the mirror raised to another power, and the f/# raised to yet another power.
Cost ∝ D^n/[λ^m (f/#)^x]
where D = diameter of the largest optical element
n = an adjustment constant of ≈2.7 (usually 2 < n < 3)
λ = bandpass cut-on (since this is relative, this can be in any units that you want)
m = another adjustment constant of ≈2 (usually 0.8 < m < 4)
f/# = f number
x = yet another adjustment constant; x ≈ 0.5 (0.1 < x < 1.5) for f/# ≥ 2.5, x ≈ 2 (1 < x < 10) for f/# ≤ 2.5
Discussion Folklore indicates that a given optical system cost is somewhat proportional to the diameter raised to a power divided by the surface quality (figure and roughness). This also applies to radio telescopes and antennas. The folklore can be refined to include the effects of the speed of the optics. The larger the f/#, the easier it is to make optics of a given quality. Additionally, Hudson2 reports that, for a large f/#, parabolic mirrors are not significantly more expensive than spheres, but for optics with an f/# less than 3, parabolas are more expensive. For the same f/#, mirror costs increase somewhat faster than the square of the increase in diameter. The reader can assume that faster f/# designs require higher curvatures, so they are more difficult to make and have lower yields, meaning higher cost. The cost of an optic is somewhat linear (for small differences) to the final roughness and figure. The scatter from an optical surface varies by the roughness squared and 1/λ2 (see associated scatter rule). This rule is based on empirical curve fitting of current state of the art for custom optics. Also, the lower the bandpass cut-on wavelength, the more accurately the figure needs to be ground, and the less surface roughness can be tolerated, so the optic will cost more. This provides crude approximations for scaling similar optics of the same material (cost is a strong function of the difficulty in working with certain materials). The rule does not account for costs associated with testing, tolerances, and fields of view. This should not be used to compare systems unless all have D, λ, and f/# in similar ranges. N, m, and x above are less for segmented mirror telescopes such as the MMT, Keck, and OWL, as discussed in more detail below The above is useful for a first-cut quick estimate or a scaled comparison of the cost of optics and telescopes. It also serves to illustrate the typical cost drivers. However, the reader should note that a number of technologies offer the potential for breaking this rule. For example, the European Southern Observatory (Garching, Germany) has been studying for some time a concept for a gigantic ground telescope they call the Overwhelmingly Large Telescope (OWL).3 OWL is intended to be 100 m in diameter. They acknowledge the current paradigm of cost scaling with D2.7 but propose to reach a scaling of D1.3. This is to be done by using several technical breakthroughs. The primary mirror will be highly segmented, with the size of each petal chosen to assure minimum cost. Next, the segments are to be identical (the primary mirror is a sphere) so that maximal advantage can be taken of recent advances in optical replication technology. Finally, the most advanced active and adaptive optics technologies will be put to work to assure that the mirror cost is minimal while its performance achieves diffraction-limited performance. A handy rule for estimating the areal density of segmented mirrors is provided below:4 2
δ = areal density [kg/m²] = 9.7 + 17.8d
where d = hexagonal segment diameter (ftf) in meters
ftf refers to the face-to-face distance across the segments
This estimate matches historical data for Cassegrain designs that employ rigid hexagonal segments. It provides the typical estimate of 25 kg/m² for 1-m segments. Bely5 presents a somewhat different model for space optics,
Cost ∝ D^1.6 Mf Df D′f/[λ^1.8 T^0.2 e^0.033(Y – 1980)]
where Y = telescope completion year, indicating some industry-wide learning
Mf = a materials-dependent factor that is 1.0 for aluminum, 1.5 for glass and graphite epoxy, 1.3 for beryllium optics, and 1.5 for beryllium structure
Df = a design factor that is 1.0 for on-axis, 1.33 for off-axis
D′f = lightweighting factor that is 1.0 for solid and 1.3 to 1.4 for lightweight mirrors
T = operating temperature in kelvins
λ = operational wavelength
Finally, cost savings can also be expected when multiple units are manufactured. Bely6 points out that the four ESO very large telescopes (VLTs) cost only about three times the cost of the first and that the second Keck telescope cost 67 percent of the cost of the first. Such a cost savings is captured in the learning curve concept that can be stated mathematically as
Total cost for N units = (cost of first unit) × N^a
a = 1 – ln(1/s)/ln 2
In this equation, s is the measure of learning that occurs as new units are created. One can expect s to be about 0.95 for fewer than 10 units, 0.9 for 10 to 50 units, and 0.85 when more than 50 units are to be manufactured. Using these guidelines, one could expect the four VLT unit telescopes to be built for about 3.6 times the cost of the first.
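The learning-curve arithmetic is shown in the Python sketch below; it reproduces the roughly 3.6× figure quoted above for four units at s = 0.95 (a unit first-unit cost is assumed purely for scaling).
    import math

    # Sketch of the learning-curve cost estimate
    def total_cost(first_unit_cost, n_units, s):
        a = 1.0 - math.log(1.0/s)/math.log(2.0)
        return first_unit_cost * n_units**a

    print(round(total_cost(1.0, 4, 0.95), 2))   # ~3.6 first-unit costs for four units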
References 1. J. Miller, Principles of Infrared Technology, Kluwer, New York, p. 93, 1994. 2. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, p. 201, 1969. 3. R. Gilmozzi et al., “The Future of Filled Aperture Telescopes: Is a 100 m Feasible?” SPIE Conference on Advanced Technology for Optical/IR Telescopes VI, Kona, Hawaii, 1998. 4. E. Montgomery, “Variance of Ultralightweight Space Telescope Technology Development Priorities with Increasing Total Aperture Goals,” Ultra Lightweight Space Optics Challenge Workshop, sponsored by JPL, March 24–25, 1999. 5. P. Bely, Ed., The Design and Construction of Large Optical Telescopes, Springer, New York, p. 79, 2003. 6. P. Bely, Ed., The Design and Construction of Large Optical Telescopes, Springer, New York, p. 102, 2003.
OPTICAL PERFORMANCE OF A TELESCOPE A blur circle’s angular diameter may be approximated by
θ = (θD² + θF² + θA²)^1/2
where θD = diffraction effect θF = figure imperfection effect θA = misalignment of optics effect
Discussion Each of the terms is addressed below:
■ A typical expression for θD is 3.69λ/D for a system with a central obscuration radius of 0.4 of the aperture radius. This value results in about 80 percent of the available energy falling within an Airy disk.
■ θF = 16x/D. A typical value of x might be about λ/3 or λ/4. This value again assumes the obscuration mentioned above.
■ The effect of misalignment is approximately
θA ≈ mD∆/(k1EFL²)
where D = diameter of the optical system
λ = wavelength of operation (or cut off of a wide bandpass)
m = magnification of the telescope
∆ = lateral misalignment
k1 = a constant
EFL = effective focal length of the telescope
This rule is based on an approximation of diffraction theory and general experience with telescopes and ray tracing. The real performance will depend on the optical design and some nonlinear features that are not represented in the equations. It is important to understand that these imperfections in optics sum via root-sumsquared (RSS) as defined in another rule in this book. Having an element with exceptional figure does not provide a smaller spot if the alignment cannot be done properly.
References 1. NASA Goddard Space Flight Center, Advanced Scanners and Imaging Systems for Earth Observations, U.S. Government Printing Office, Washington DC, pp. 99–10, 1973.
PEAK-TO-VALLEY APPROXIMATES FOUR TIMES THE ROOT-MEAN-SQUARE The peak-to-valley (PV) error of the surface figure of an optical element is generally four times the root-mean-square (RMS) wavefront error.1
Discussion The above results from an analysis of simple aberrations and various high-spatial-frequency wavefront errors. Mahajan2 gives the Zernike polynomials in a form such that they all have unit RMS value and PV ratios that vary. This allows the derivation of the ratios, and the multiplier of 4 is an approximately good choice. The choice of 4 is only an approximation, given that specific aberrations have slightly different ratios for the peak-to-valley error to RMS (ranging from 3 to 8) as follows: 1. Lower-order (Seidel) aberrations have ratios from 3 to 3√2. 2. Crinkly wavefronts (more or less random) have a ratio of 2√2. This rule also seems to hold for many types of nonstandard aberrations such as circumferential grooves in the wavefront, which vary sinusoidally with radius, grooves varying
cosinusoidally, grooves with a square-wave variation with radius, and a two-level zone plate. In each of the above, the PV-to-RMS ratio is either 2 or 2√2 when there are many periods from center to edge. For higher-order Zernikes, the ratio seems to tend toward 8. Truly random wavefront errors are arguably described by PV ratios somewhere between 2 and 8. Four is between 2 and 8 (as is, say, 5), so 4 was chosen as a reasonable (e.g., never quite right) approximation. This rule allows a common-ground comparison of optics specified in different ways. Often, the PV figure error is available by inspection of an interferogram. Conversion to the RMS error is more tedious than using this rule, as it requires some computation of a two-dimensional integral. However, the effort may be worth it, as RMS wavefront error is useful in computing other quality measures (such as Strehl ratio) as long as it is not too large. Generally, the RMS wavefront error is about one-fourth of the PV error. Any wavefront that can be described from either analysis or direct measurement can be characterized by a PV or RMS deviation from its ideal figure. RMS involves integration over the whole wavefront, so it draws on more information about the wavefront, although it yields a single number. Some specific cases are shown below:
Type of aberration        P–V/RMS
Defocus                   1.73
Primary spherical         5.66
Astigmatism               4.90
Secondary spherical       2.65
Phonograph record         2.83
When in doubt use         4
References 1. Private communications with W. M. Bloomquist, 1995. 2. V. Mahajan, “Zernike Circle Polynomials and Optical Aberrations of Systems with Circular Pupils,” Applied Optics, 33, pp. 8121–8124, 1994.
PULSE BROADENING IN A FABRY–PEROT ETALON The final pulse width (in seconds) of a laser pulse after N trips through a Fabry–Perot etalon is given by
∆t ≈ 3.5 × 10⁻¹¹ Fd√N
where F = finesse of the etalon (which can be as large as about 100)
d = thickness
N = number of trips
Discussion Etalons provide extremely high-performance optical filtering and are employed in lasers, spectral filters, and scientific instruments. Their nice features include the fact that they
have an extremely narrow bandpass (within a range of wavelengths called the free spectral range), and they can be made quite robust mechanically. The equations that govern the performance of etalons are the same as those that define the performance of multilayer coatings used to create antireflection coatings and related optical devices. In typical applications, the etalon is able to resolve wavelengths down to hundredths of nanometers or smaller. In addition, they form the basis of most laser resonators and can be used to control the number of modes that propagate freely in the resonator. Most optics books provide a detailed discussion of the equations that define how etalons work and how to compute their resolving power. Siegman provides an exercise for those using his book (which includes a more complex representation of this formula) to come up with the simpler form presented above. The result applies to the pulse width after many trips through the etalon. Curiously, the more complex form includes the length of the initial pulse. This version does not. This rule provides a simple method for estimating the length of a pulse that has passed through an etalon many times. Note that the pulse continues to grow without bound, because N is present under a square root.
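The Python sketch below evaluates the rule; the finesse, thickness, and trip count are assumed example values, and the thickness unit is not specified in the rule, so the number below should be read only as an illustration of the scaling with F, d, and √N.
    import math

    # Sketch of the pulse-broadening estimate (assumed finesse, thickness, and trip count)
    def pulse_width_s(finesse, thickness, n_trips):
        return 3.5e-11 * finesse * thickness * math.sqrt(n_trips)

    print(pulse_width_s(100.0, 1.0, 50))   # ~2.5e-8 s for F = 100, d = 1, N = 50 trips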
Reference 1. A. Siegman, Lasers, University Science Books, Mill Valley, CA, p. 360, 1986.
ROOT-SUM-SQUARED BLUR The point spread function blur from an optical train can be estimated as follows:
Rsys² = Rabr² + Rdif² + Rdef² + Rjit² + Rrgh² + Rpix² + Rks²
where Rsys = radius of the system point spread function
Rabr = radius of the blur resulting from geometric aberrations excluding defocus
Rdif = radius of the blur resulting from diffraction
Rdef = radius of the blur resulting from defocus alone (when aberrations are small; if aberrations are large, then the defocus effects should be another part of Rabr)
Rjit = radius of the blur resulting from jitter or the integrated image motion during the effective exposure (integration) time
Rrgh = radius of the blur resulting from surface defects (or roughness) that cause near forward scattering
Rpix = “radius” or equivalent measure of the detector pixel width
Rks = radius of any other blur contributor (usually you should throw everything but the kitchen sink into this term)
Discussion All the blurs are assumed to have at least vaguely Gaussian profiles of the form exp[–(r/R)²]. Therefore, the total system blur profile is approximately the convolution of the contributing blur profiles. The appropriate radius is partly a matter of judgment. For example, what is the best fit 1/e width of a Gaussian approximation to a “top-hat” function (such as is the case with the FPA pixel width)? Any reasonable fit will serve. Moreover, when there are many
contributions, the system point spread function will tend to be Gaussian, even if none of the contributors are. The sum-of-the-squares process tends to emphasize the influence of one or a few large contributors; therefore, small contributors can be identified and ignored. The electronics, processing, and display may also degrade the overall system point spread function and should also be root-sum-squared if significant. This rule is easy to calculate, is quick to apply, gives an indication of dominating contributors, gives an indication of insignificant contributors that can be ignored, gives an indication of where the time and money should be spent to increase (or restore) performance, and allows a system-wide balancing of errors.
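The bookkeeping is easy to automate, as in the Python sketch below; the individual blur radii are assumed example values (in µrad) chosen only to show how one or two terms dominate the root-sum-squared total.
    import math

    # Sketch of the root-sum-squared blur combination (assumed blur radii, urad)
    contributions = {"aberrations": 20.0, "diffraction": 12.0, "defocus": 5.0,
                     "jitter": 8.0, "roughness": 2.0, "pixel": 25.0, "other": 3.0}

    r_sys = math.sqrt(sum(r**2 for r in contributions.values()))
    print(round(r_sys, 1))          # ~35.7; the pixel and aberration terms clearly dominate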
Reference 1. Private communication with W. M. Bloomquist, 1995.
SCATTER DEPENDS ON SURFACE ROUGHNESS AND WAVELENGTH Total integrated scatter (TIS) depends on the surface roughness and wavelength in the approximate form
TIS ∝ (4πσ cos θ/λ)²
where σ = RMS surface roughness
θ = angle of the incident light to be scattered, measured from the normal of the surface
λ = wavelength
Discussion When light is refracted, diffracted, or reflected from an optic, some (hopefully small) portion will scatter. That is, the rays will be sent on an unintended course. The scatter is related to the surface roughness and cleanliness. Total integrated scatter is a measure of the entirety of the light not specularly reflected from a surface. Shorter wavelengths will always tend to scatter more than longer wavelengths (which is why the sky is blue). This scatter can be a driver in system performance for systems that must operate near bright sources and a cost driver for low-scatter, short-wavelength systems. This rule originated through studies of radar scattering off of ocean waves but has been verified to be accurate when applied to optics. This rule assumes a smooth, clean surface that is large as compared to the wavelength (all attributes of a typical optical surface). It also assumes that the height distribution of the roughness is Gaussian and that the surface is much wider than the correlation distance of the roughness. This is useful for estimating the amount of scatter from an optical surface and scaling the scatter between different wavelengths or optical surfaces of different surface roughness. The reader is also directed to the other rules in this book concerning bidirectional reflectance distribution function (BRDF) and cold shields for related discussions.
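The wavelength scaling is easy to see numerically; the Python sketch below evaluates the proportionality for an assumed 2-nm-RMS surface at normal incidence and compares a visible and an LWIR wavelength (all inputs are example values).
    import math

    # Sketch of the total-integrated-scatter scaling (assumed roughness, angle, wavelengths)
    def tis(roughness_nm, wavelength_nm, incidence_deg=0.0):
        return (4.0*math.pi*roughness_nm*math.cos(math.radians(incidence_deg))/wavelength_nm)**2

    print(tis(2.0, 633.0))     # ~0.0016, i.e., roughly 0.16 percent scattered in the visible
    print(tis(2.0, 10600.0))   # ~280 times lower for the same surface at 10.6 um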
References 1. J. Stover, Optical Scattering, McGraw-Hill, New York, pp. 17–19, 1990.
SHAPE OF MIRRORS The deflection at the center of a circular plate is proportional to the fourth power of the diameter. For example, a circular plate of radius a clamped at the edge exhibits a center deflection of
Pa⁴/(64D)
where
D = Et³/[12(1 – ν²)]
and ν = Poisson’s ratio
E = Young’s modulus
t = thickness of the plate
D has the units of force times distance
P = the uniform load imposed on the plate
For a situation in which differential pressure is imposed, P is the pressure. If gravity is the source of the deflection, P is replaced by mg/πa², where g is the acceleration of gravity and m is the mass of the plate.
Discussion The deflection as a function of radius for the case mentioned above is
(Pa⁴/64D)[1 – (r²/a²)]²
For the case of a free plate supported at the edge (resting on but not clamped by the edge supports), the center deflection is
Pa⁴(5 + ν)/[64D(1 + ν)]
We can use these formulas to compute the self-deflection of a mirror suspended loosely by its edges. We do so by noting that, instead of using a pressure P as the source of deflection force, we use the expression mg/πa² in the formula for a free plate supported at the edges,
(Pa⁴/64D)[(5 + ν)/(1 + ν)] = (ρgπa²t/πa²)(a⁴/64)[12(1 – ν²)/Et³][(5 + ν)/(1 + ν)] = [3ρga⁴/(16Et²)](1 – ν)(5 + ν)
Consider a 2.4-m diameter Pyrex® mirror blank (a = 1.2 m). When suspended by its edges and with an aspect ratio of 8 (t = 0.3 m), the center will sag about 6 µm from the force of gravity. If the edges are constrained, the center deflection is
Pa⁴/64D = (ρgπa²t/πa²)(a⁴/64)[12(1 – ν²)/Et³] = [3ρga⁴/(16Et²)](1 – ν²)
In this case, the center deflection is about 1.5 µm.
The physical properties of Pyrex® are provided in Table 13.1.
TABLE 13.1 Properties of Pyrex®
Material    ρ, specific mass, kg/m³    E, Young’s modulus, GPa*    ν, Poisson ratio
Pyrex       2230                       63                          0.2
*1 Pa = 1 newton/m².
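The edge-supported sag estimate above can be checked with the short Python sketch below, which uses the Pyrex values in Table 13.1 and the same 1.2-m-radius, 0.3-m-thick blank; the value of g is an assumption not stated in the text.
    # Sketch of the free-plate (edge-supported) mirror sag estimate using Table 13.1 values
    def free_edge_sag(radius_m, thickness_m, rho=2230.0, E=63e9, nu=0.2, g=9.8):
        return 3.0*rho*g*radius_m**4*(1.0 - nu)*(5.0 + nu) / (16.0*E*thickness_m**2)

    print(free_edge_sag(1.2, 0.3))    # ~6e-6 m, i.e., the ~6-um sag quoted above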
Finally, we find2 that the modulus of rigidity takes on a slightly different form if the mirror is not a continuous sheet but is created from an assembly of two face sheets separated by a foam core. In that case, the modulus is
Dfoam = [Ef/(12(1 – ν²))][t³ + 6t(c + t)² + ρrc³]
where t = thickness of each face sheet
c = core thickness
ρr = density of the foam relative to that of the same material in dense form
References 1. http://www.efunda.com/formulae/solid_mechanics/plates/casestudy_list.cfm#cpS, 2003. 2. D. Content et al., “Lightweight Aluminum Mirrors Using Foam Core Sandwich Construction,” Ultra Lightweight Space Optics Challenge Workshop, sponsored by JPL, March 24–25, 1999.
SPHERICAL ABERRATION AND f/# For visible light, a plano-convex lens has a spot size resulting from spherical aberration that goes as the inverse third power of the f/#.1,2
spot diameter = 0.067f/(f/#)³
where f/# = f number to ensure that the lens is diffraction limited in performance
f = focal length in millimeters
Discussion The plano-convex lens is widely used to focus a parallel beam of light or form a parallel beam from a point source or filament. Another way to use this rule is to equate 2.44λ/D with the spot size in the rule to determine when diffraction and spherical aberration are equal. This results in
f/# = (0.0275f/λ)^1/4
For a wavelength of 0.5 µm and f presented in millimeters, this results in a common result,
f/# = (55f)^1/4
The value of 0.067 above derives from the index of refraction of the material in the lens. This is a good value for visible wavelengths and commonly used refractive materials with an index near 1.5. In general, the higher the index of the material used, the smaller the value of the constant: 0.067 for n = 1.5, 0.0129 for n = 3.0, and 0.0087 for n = 4.0.3 In the infrared, other values are used. For example, at 10.6 µm,4 the following values apply to the materials shown:
ZnSe    0.0286
GaAs    0.0289
Ge      0.0295
CdTe    0.0284
Different values apply for other lens types. It should be noted that camera lenses combining multiple elements, good design, and good optical glasses can certainly be made that do not follow this rule very closely. An 80-mm Zeiss Planar lens (designed many years ago) used on a Hasselblad camera seems to be diffraction limited at f/2.8 rather than the f/8 implied by the rule. The rule applies only to simple lenses like the plano-convex. One should also note that some tinkering with the equation in the rule reveals that, for this simple type of lens, the size of the blur spot caused by spherical aberration depends on the ratio of the cube of the diameter of the optic to the square of the focal length.
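A few lines of code make the trade easy to explore. The sketch below, a simple illustration rather than a design tool, evaluates the spherical-aberration blur of the rule and the f/# at which it equals the Airy diameter; the 80-mm focal length is chosen only to echo the lens mentioned above, and the function names are ours.

```python
def sa_spot_diameter_mm(f_mm, fnum, k=0.067):
    """Spherical-aberration blur diameter (mm) of a simple lens: k*f/(f/#)^3."""
    return k * f_mm / fnum**3

def diffraction_limited_fnum(f_mm, wavelength_um=0.5, k=0.067):
    """f/# at which the Airy diameter 2.44*lambda*(f/#) equals the blur above."""
    lam_mm = wavelength_um * 1.0e-3
    return (k * f_mm / (2.44 * lam_mm)) ** 0.25

f = 80.0                                   # focal length, mm
print(diffraction_limited_fnum(f))         # ~8.1, i.e. roughly the (55 f)^(1/4) value
print(sa_spot_diameter_mm(f, 8.0))         # ~0.01 mm blur at f/8
```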
References
1. Melles Griot catalog, p. 1.25, 1999.
2. Argus International catalog, p. 15, 2002.
3. W. Smith, Modern Optical Engineering, 3rd ed., McGraw-Hill, New York, p. 494, 2000.
4. http://www.ii-vi.com/pages/res-spherical.html, 2003.
STOP DOWN TWO STOPS In photography, to get a good balance between aberrations and diffraction, stop down two stops from the maximum available.
Discussion
As is discussed in virtually every optics book, diffraction becomes more pronounced as aperture size is reduced. At the same time, bigger apertures allow more light into the system but at the expense of more aberrations. The rule provides a convenient method for choosing the optimal point at which to take photos. Of course, this rule applies to cameras with lenses. A pinhole camera manages aberrations by limiting the bundle of rays that actually expose the film. By spatially limiting the rays, we end up with a set of spherical waves encountering the film. This tends to lead to field curvature, but the other aberrations are under control. The curvature can be dramatic for objects close to the camera, but it works fine for distant objects. A widely used alternative to this rule is to take all pictures at f/8. The concept of a pinhole for imaging was used early in the history of photography and may have played a role in the invention of perspective in Renaissance art. The interested reader will find a wealth of optical insights into the use of the camera obscura in Ref. 1. One is available for public use at Griffith Observatory in Los Angeles. In Ref. 1, it is argued that the real innovations in perspective and realism came when artists discovered that they
could sit in a darkened room and view an image of the subject through a pinhole in the wall of the room. The artist was literally in the camera! Of course, photography is an old art, so complete books of rules exist such as the one above. For example, it is often suggested that in bright Sun, the proper exposure is found by the following rule: set the f-stop to f/16 and the shutter speed to the inverse of the film speed. Of course, to be sure of getting a good exposure, bracket the exposures by taking a series of pictures with f-stops 1/2 stop above and below the one you think most likely to work. Some photographers even do a "bracket of five" in which they shoot the same photo with stops at –1, –1/2, +1/2, and +1 stop relative to the one most likely to provide a good picture. Even more flexibility occurs if you (or your photo developer) can "push" the film to have an effective speed higher or lower than that used when setting the camera up for the picture. One of the authors (Miller) can remember hosting a National Geographic photographer at Mauna Kea. The author was amazed at the number of photos taken every time the photographer pushed the button, as he was bracketing both f/# and exposure time. The photographer later explained that this was part and parcel of the art. These images supplied him with a different depth of focus and brightness on every exposure. Given his travel expenses and salary, film was cheap. Such considerations are obviously realistic. Most modern cameras have more shutter and exposure control than the consumer even realizes, but the rule is useful in any case. You never know when you'll have to use Granddad's old 35-mm rangefinder. Of course, don't forget that the film package always comes with example scenes and suggestions for proper exposures. Those of you with digital cameras just need to make sure you have enough batteries.
References 1. D. Hockney, Secret Knowledge, Viking Studio, New York, 2001. 2. http://bobatkins.photo.net/info/optics.htm, 2003. 3. www.euro-photo.net/cgi-bin/epn/info/techniqu/fstop-tripod.asp.
Chapter 14
Radiometry
Radiometry is the study of the creation, transport, and absorption of electromagnetic radiation, and the wavelength-dependent properties of these processes. The term is also often used to include the detection and determination of the quantity, quality, and effects of such radiation. The term photometry describes these phenomena for the visible portion of the spectrum only. Photometry and its terms and dimensions are a result of normalizing (or attempting to normalize) the measurement of light to the response of the human eye. William Herschel (1738–1822) not only discovered infrared radiation but also attempted to draw the first distribution of thermal energy as a function of wavelength and thus can be considered the father of radiometry. Johann Lambert (1728–1777) noted that the amount of radiated (and in some cases reflected) energy in a solid angle is proportional to the cosine of the angle between the emitter and receiver. Incidentally, Lambert also proved that π is an irrational number and introduced the hyperbolic functions sinh and cosh. A few decades later, Gustav Kirchhoff (1824–1887) discovered that the emissivity of a surface is equal to its absorptivity and that the total of reflection, absorption, and transmission of a material always equals 1. Later, Austrian physicist Josef Stefan (1835–1893) determined that the total radiant exitance from a source over all wavelengths is equal to the emissivity multiplied by a constant (the Stefan–Boltzmann constant) times its temperature raised to the fourth power. In 1866, Langley used a crude bolometer to study the radiation of carbon at different temperatures. In 1886, Michelson employed Maxwell's laws to develop crude blackbody laws. Additionally, two famous rules of thumb (or useful approximations) that led to the Planck function were described by Wien and Rayleigh. Jeans found (and repaired) a numerical error in Rayleigh's equation, so it is now known as the Rayleigh–Jeans law. Others, such as Lummer and Pringsheim, made important pre-Planckian additions to blackbody theory. However, the main architect of modern radiometry was Max Karl Ernst Ludwig Planck (1858–1947). Max Planck began his scientific career under the influence of Rudolf Clausius (developer of the second law of thermodynamics), giving him a strong background in thermal physics. Planck is most noted for describing blackbody radiation with a simple equation and for developing the quantum theory of energy (which states that energy is not infinitely divisible but exists in units whose energy is defined by the frequency). He noted that the Rayleigh–Jeans approximation agreed well with experimentation for long wavelengths and that the Wien law worked well at shorter wavelengths. On October 19, 1900,
he presented his blackbody radiation law, confirming existing measurements to within the accuracy of radiometric measurement at the time. Those measurements derived from industrial physics, for example, from observing molten glass in kilns. It was desired at that time to develop techniques for improving the energy efficiency of the glass-making process by discovering why so much energy is lost to radiation. These experimental data helped the early theorists form the foundation for the various early theories of blackbody radiation. Even with continued improvements in measurement techniques, the Planck radiation law continues to provide a "law" of physics. Its correctness is not in doubt. Planck's insight was to "quantize" the energy. This led to quantum theory and all of its famous ramifications. Many historians mark Planck's 1900 paper, explaining the blackbody radiation curve, as the birth of twentieth-century physics. Planck continued to refine his equations until, on December 14, 1900, he presented the law as we now know it. He was given the 1918 Nobel prize for his work. Interestingly, Planck remained skeptical of his own theory up to the time of his death. Often not appreciated today, describing blackbody radiation represents a momentous achievement in science and engineering. Planck's spark of creativity in radiometry allowed succeeding scientists to explore nature under a new paradigm. Planck's theories led to the explanation of the photoelectric effect (for which Albert Einstein won the Nobel prize), the Bohr model of the atom (the springboard for all of modern particle physics), quantum mechanics, and quantum electrodynamics. Planck's equation successfully matched the observed and oddly shaped blackbody curve. His formula allows one to calculate the energy within a given spectral bandpass. The curve must be integrated with the spectral cut-on and cutoff as the limits on the integral. Most mortals find this difficult to do mentally. So, before every engineer had a portable computer and a calculator, there existed a plethora of rules of thumb, slide rules, and nomographs to approximate the Planck function. Some of the slide rules were quite elaborate, although not very accurate. In fact, some books on physics and thermodynamics still include extensive approximations of the Planck function and shortcuts for computing things with it. A few of these rules are included in this chapter to help develop a feel for the blackbody function and to use when it is inconvenient to whip out the portable computer. Recent advancements in the study of radiometry include the many incremental improvements by William Coblentz and Fred Nicodemus. Coblentz (1873–1962), an American, was the first to experimentally measure both Planck's and Boltzmann's constants to accuracies within less than 1 percent of the modern values. Even in this new century, radiometric measurements are plagued by the immaturity of instruments and subtle variations in experimental apparatus, making highly accurate measurements difficult. It is an arduous (and sometimes impossible) task to repeatedly and accurately determine the effects of equipment inaccuracy and instability, stray radiation, and the atmosphere. With the exception of some narrow emission and absorption lines, suitable standards for much of the spectrum are in their infancy, with stability, measurement repeatability, and associated transfer standards limited in accuracy to a few percentage points.
As a result, most general measurements are accurate to within only a few percentage points of theoretical calculations. Further advancements in this field are likely to begin with more accurate measurement instruments and techniques and increased accuracy of transfer standards. For those interested in more detailed information on radiometry, the authors suggest Vol. 1 of The Infrared and Electro-Optical Systems Handbook and appropriate chapters in the following books:

The Infrared Handbook (W. Wolfe and G. Zissis, ERIM)
Burle Electro-Optics Handbook (Burle Industries)
Infrared System Engineering (R. Hudson, John Wiley & Sons)
Far Infrared Techniques (M. F. Kimmitt, Prion)
Electro Optical Imaging System Performance (G. Holst, JCD Publishing)
Electro-Optical Systems Performance Modeling (G. Waldman and J. Wooton, Artech)
Radiometric Calibration (C. Wyatt, Macmillan)
Electro-Optical Systems Analysis (K. Seyrafi, Electro-Optical Research Co.)

Journals that often contain papers on radiometry include Optical Engineering and Infrared Physics and Technology. Finally, do not forget to monitor the constant progression of tools and resources that are appearing on the internet. For example, you can do blackbody calculations on line (http://thermal.sdsu.edu/testcenter/javaapplets/planckRadiation/blackbody.html) and obtain state-of-the-art information from the National Institute of Standards and Technology (NIST, formerly the National Bureau of Standards) home page (http://physics.nist.gov). This seems as good a place as any to remind the reader of the units used when dealing with blackbodies or other emitting surfaces. The tables below provide the details. First, we present the radiometric terms and common abbreviations (Table 14.1).

TABLE 14.1 Radiometric Terms and Abbreviations

Quantity            Units                                Comment
Radiant energy      joules (J)                           Energy leaving or reaching a surface or point
Radiant flux        watts (W)                            Energy created per unit time
Radiant exitance*   watts/meter² (W/m²)                  Radiative flux leaving a point on a surface as measured on the hemisphere centered on that point
Irradiance          watts/meter² (W/m²)                  Radiative flux incident on a surface
Radiant intensity   watts/steradian (W/sr)               Radiative energy per unit time measured per unit of solid angle
Radiance            watts/meter²/steradian (W/m²/sr)     Radiative flux emitted from a single point, per unit of solid angle

*This is sometimes called emittance.
In the modern parlance, any quantity related to the emittance of an area, such as radiant exitance, is called radiant areance. Similarly, any quantity related to the emittance from a point, such as radiant intensity, is called radiant pointance. Finally, any quantity related to a solid angle, such as radiance, is called radiant sterance. The photometric equivalent of the above table appears in Table 14.2. In all cases, the terms are equivalent except that the discussion focuses on the energy and power conveyed within the spectral sensitivity band of the human eye.

TABLE 14.2 Photometric Terms and Abbreviations

Quantity                          Units
Luminous energy                   lumen·seconds (lm·sec)
Luminous flux                     lumens (lm)
Luminous exitance or emittance    lumens/square meter (lm/m²)
Illuminance                       lux (lx)
Luminous intensity                candelas (cd)
Luminance                         candelas/square meter (cd/m²)
Finally, we present the equivalent properties for photons in Table 14.3.

TABLE 14.3 Photonic Terms and Abbreviations

Quantity            Units*
Photonic energy     joules (J)
Photon flux         photons/square meter/second (m⁻² sec⁻¹)
Photon exitance     photons/square meter/second (m⁻² sec⁻¹)
Photon irradiance   photons/square meter/second (m⁻² sec⁻¹)
Photon intensity    photons/second/steradian (sec⁻¹ sr⁻¹)
Photon radiance     photons/second/square meter/steradian (sec⁻¹ m⁻² sr⁻¹)

*Note that in many references, the term photon is not used in these units. Rather, their presence is captured in a unitless quantity, the number of photons. This has the result of changing the units from, for example, photons/second/sr to second⁻¹ sr⁻¹. We have explicitly shown the units here for completeness.
ABSOLUTE CALIBRATION ACCURACY
Typically, absolute calibration to a national standard cannot be done to a traceable accuracy better than the following:

Longwave infrared (7 to 14 µm)       5 to 10 percent
Midwave infrared (3 to 5 µm)         5 to 10 percent
Shortwave infrared (1 to 2.5–3 µm)   2 to 5 percent
Near infrared (0.8 to 1 µm)          2 percent
Visible (0.4 to 0.7 µm)              1 percent
UV (0.25 to 0.4 µm)                  1 percent
Discussion
Errors add up. By the time you calibrate your instrument, the transfer errors (e.g., variations in repeatability, stray light, unwanted reflections, variations in emissivity, variations in temperature, temperature measurement accuracy, FPA detector response, nonlinearities, FPA inaccuracies, bandpass uncertainties, and the like) will limit your traceable absolute calibration. These results are based on the state of the art of transfer standards, calibration sources, and facilities. These are constantly being improved. Better accuracy will be possible in the future with additional development of sources, procedures, and test facilities. Although it is theoretically possible to obtain more accurate calibration, Herculean efforts are required to mitigate error at every step in the process. Sometimes better accuracy can be achieved for an extremely narrow bandpass that happens to have an easily transferred source (especially in the UV and visible). This rule is for absolute calibration; relative calibration for short periods can be an order of magnitude better. This rule indicates calibration requirements, specifications, and real-world performance. When desperate, use the above as a baseline for inputs to algorithms and system studies. Macedonio Melloni (1798–1854) was an early researcher of radiometry. It arguably could be said that Melloni was the founder of electro-optics, as he and Leopoldo Nobili constructed the first electro-optical detector (an antimony and bismuth thermopile). Melloni was an experimental genius, well ahead of the theory (which Einstein, Planck, and others would eventually develop) and hardware of his day. Melloni conducted several experiments measuring IR radiation but was vexed by unknown nonlinearities, lack of standard sources, unknown source characteristics, and unknown filter effects. One cannot help but wonder what great discoveries he would have made had he possessed stable, accurate, and known sources. Of course, the bands given above are open to interpretation. LWIR used to be considered the 8- to 12-µm band, but the advent of uncooled bolometers has extended the practical use of this band to the range of 7 to 14 µm. Generally, the region where silicon responds outside of the visual response of the eye is considered to be near infrared. Shortwave infrared extends from about 1 µm to the 3- to 5-µm atmospheric window, which is called midwave infrared. European E-O designers call the 3- to 5-µm band shortwave infrared.
BANDPASS OPTIMIZATION For maximum performance in the thermal infrared, the bandpass should include the maximum change in photon emission with temperature.
For a background limited in performance (BLIP) case, this means that the bandpass should include the maximum of the derivative

\frac{\partial Q_\lambda / \partial T}{\sqrt{Q_\lambda}}

For a non-BLIP case, the bandpass should include the maximum of

\frac{\partial Q_\lambda}{\partial T}

where
Qλ = photon exitance
T = temperature
Discussion
Bandpass selection to maximize performance is often a trade between maximizing the absolute contrast and maximizing the signal-to-noise (or signal-to-clutter) ratio. Usually, with a single-bandpass system, maximum performance is achieved when the bandpass includes the part of the photon exitance (emission) function that contains the fastest-changing part as a function of change in temperature. If background and target are close to the same temperature, reflectivity, and emissivity, select the bandpass to maximize the derivative of the maximum photon flux per wavelength. If the target and background are different in temperature, then the bandpass should be set to include the maximum photon emission from the target. Interestingly, for thermal emitters, this maximum usually starts just shy of the Planck function peak. This rule assumes that the performance is driven by the temperature differential between target and background. Therefore, this rule does not apply to detecting targets via reflected sunlight (usually in bandpasses below 3 µm) or spectral emitters such as chemical emissions (engine plumes). This does not account for technology and application limitations. This rule may indicate that you should have a bandpass that includes 13 µm, but finding a large-scale focal plane that responds to that wavelength is difficult. A similar problem occurs if the indicated wavelength is located where the atmosphere is a strong absorber. A composite bandpass is usually best for multiple targets. That is, a target class that may have targets of different temperatures should include the maximum derivative for all targets. Unfortunately, if target and background are close to the same temperature, then this rule will likely maximize clutter as well. A separate signal-to-clutter analysis should be done. The above equations and the plot are for detectors that sense "photons," not watts. This includes PC and PV semiconductors, Schottky barriers, and quantum wells. This does not apply to bolometers, which respond to the power in all bands. For these types of detectors, the rule should be stated as follows:
■ The bandpass should include the maximum of the derivative dWλ/dT, where Wλ is the differential radiant exitance at the temperature of interest.
Most infrared sensor systems are built to detect a target via the difference in photon emittance caused by a slight change in temperature between it and its background. According to Planck's theory, for a given temperature, there is a unique spectral point at which a small change in temperature results in the largest change in radiant exitance. If possible, the bandpass should include this wavelength. For instance, from the plot shown in Fig. 14.1, one can see that if the target is at 320 K, the maximum change in radiant exitance for a system that is BLIP occurs near 7.6 µm.
FIGURE 14.1 BLIP vs. non-BLIP as a function of temperature and wavelength. (Courtesy of Dr. George Spencer, 1995.)
However, this is a poor wavelength for atmospheric transmission, so a sensor operating in the atmosphere should try to get as close as possible and still be in a transmission band. Sadly, clutter effects also tend to be the greatest at the wavelength where these derivatives are at a maximum for the temperature of the background. The graphic shows two curves plotted by temperature. The top curve is the maximum of the partial derivative ∂Qλ/∂T, which corresponds to a case in which the noise is dominated by conditions independent of the background or target temperature. For this case, the bandpass should include this maximum if atmospherics, technology, and design constraints permit. The lower line is for a BLIP case and plots the maximum of (∂Qλ/∂T)/√Qλ. This is the same as the non-BLIP case, except it is also divided by the square root of the photon emittance from the target (or background). A background-limited case would have the photon arrival variations dominating the noise, so it can be easily included.
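To see where these maxima fall for a particular scene temperature, one can simply scan wavelength numerically, as in the Python sketch below. It is only an illustration; the function names, the finite-difference step, and the 1- to 31-µm scan range are our assumptions rather than anything from the text.

```python
import math

C2 = 1.4388      # second radiation constant, cm*K
c = 2.9979e10    # speed of light, cm/s

def photon_exitance(lam_cm, T):
    """Spectral photon exitance, photons/s/cm^2 per cm of bandwidth."""
    return 2.0 * math.pi * c / lam_cm**4 / (math.exp(C2 / (lam_cm * T)) - 1.0)

def dQ_dT(lam_cm, T, dT=0.01):
    """Numerical derivative of the photon exitance with respect to temperature."""
    return (photon_exitance(lam_cm, T + dT) - photon_exitance(lam_cm, T - dT)) / (2.0 * dT)

def best_wavelength_um(T, blip=True):
    """Wavelength (um) that maximizes dQ/dT (non-BLIP) or dQ/dT / sqrt(Q) (BLIP)."""
    best_lam, best_merit = None, -1.0
    for i in range(1, 3001):
        lam_um = 1.0 + 0.01 * i
        lam_cm = lam_um * 1.0e-4
        merit = dQ_dT(lam_cm, T)
        if blip:
            merit /= math.sqrt(photon_exitance(lam_cm, T))
        if merit > best_merit:
            best_lam, best_merit = lam_um, merit
    return best_lam

print(best_wavelength_um(320.0, blip=True))    # BLIP case for a 320 K target; the text quotes ~7.6 um
print(best_wavelength_um(320.0, blip=False))   # non-BLIP case peaks at a somewhat longer wavelength
```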
Reference 1. Private communications with Dr. George Spencer, 1995.
BLACKBODY OR PLANCK FUNCTION
Radiant exitance as a function of wavelength is

M_\lambda \le \frac{C_1}{\lambda^5 (e^{C_2/\lambda T} - 1)}

where
Mλ = radiant exitance in watts per unit area per wavelength increment at λ
C1 = 2πc²h, sometimes called the first radiation constant, equal to 37,418 W µm⁴/cm²
λ = wavelength in µm (or in alternative units to match C1 and C2)
C2 = second radiation constant (hc/k); k is Boltzmann's constant (1.38 × 10⁻²³ J/K); C2 equals 14,388 µm K for the above units (or 1.4388 cm K)
T = temperature in kelvins
c = speed of light in a vacuum
h = Planck's constant
Discussion
This rule can be rewritten to compute photon flux with a slightly different form (see the "Photons-to-Watts Conversion" rule on p. 294).

M_q \le \frac{2\pi c}{\lambda^4 (e^{C_2/\lambda T} - 1)}

This was developed by Planck in 1900. He used an ingenious combination of empirical evidence (e.g., data from Langley) and theoretical statistics to produce the famous equation that now bears his name. It reproduced the known Wien's and Rayleigh–Jeans equations, each of which was known to work only over limited wavelength ranges. Planck hypothesized that energy is not emitted continuously but in discrete quanta with energy hc/λ. This insight opened the door to twentieth-century physics and led to the quantum theory. Those not familiar with blackbody radiation should become knowledgeable about its applications, but keep in mind that many real-world emitters and absorbers are spectral in nature and do not follow a blackbody curve over broad wavelength ranges. Examples of spectral emitters include plume H2O and CO2 excitation. A quick look at the spectrum of the Sun (available in any physics or astronomy book) shows the superposition of both spectral and blackbody sources. Be wary of the units when using these equations. The neophyte will often get confused when noting that the equation in the rule has the units of watts per cubic meter (W/m³). This seems odd at first, but it really means that the units are watts per square meter per wavelength unit (which could be meters or micrometers or anything else that the user chooses). The radiance (or sterance) of a blackbody is 1/π times the above radiant exitance. This is a fundamental equation governing the nature of thermal emission and is used throughout the photonic disciplines. It is used for determining the blackbody emittance or graybody emittance (by multiplying the above by the lower emissivity). The in-band radiant exitance Mq,
\int_{\lambda_l}^{\lambda_h} M_\lambda \, d\lambda
can be easily calculated using the above equations and a spreadsheet via numerical integration. The "≤" is used in the above equations to indicate that real-world objects are graybodies with an emissivity of less than 1, so their radiant output is always something less than what would be predicted for a perfect blackbody. It is interesting to realize that the average number, n, of photons per degree of freedom emitted by an object at a temperature T is

\frac{1}{\exp\left(\dfrac{h\nu}{kT}\right) - 1}

Hence, it appears in the above equations. When the first equation is integrated from 0 to infinity, the Stefan–Boltzmann law results, which gives the total radiant exitance (σT⁴). Any number of on-line tools provide methods for computing the blackbody radiance within a specific band. The following analysis1 provides a stand-alone method that can be used with a calculator or spreadsheet. First, we define a parameter ν as C2/λT. We define two regimes, ν ≥ 2 and ν < 2.
F_{0-\lambda T} = \frac{15}{\pi^4} \sum_{m=1,2,3,\ldots} \frac{e^{-m\nu}}{m^4} \{[(m\nu + 3)m\nu + 6]m\nu + 6\} \qquad \text{for } \nu \ge 2

F_{0-\lambda T} = 1 - \frac{15}{\pi^4}\,\nu^3\left(\frac{1}{3} - \frac{\nu}{8} + \frac{\nu^2}{60} - \frac{\nu^4}{5040} + \frac{\nu^6}{272{,}160} - \frac{\nu^8}{13{,}305{,}600}\right) \qquad \text{for } \nu < 2

The number of terms included in the series of the first equation is selected to obtain the desired accuracy. This function calculates the fraction of the total blackbody radiation emitted up to the indicated value of λT. By doing two calculations, one can find the power in a particular band. The break point of "2" is equivalent to T = 7.2 × 10⁻³/λ, where wavelength is expressed in meters. For example, if we are dealing with a blackbody of temperature 3000 K, then the first formula applies for wavelengths shorter than 2.4 µm. For cooler bodies, say 300 K, the first formula applies for wavelengths shorter than 24 µm. As an example, consider a blackbody with a temperature of 300 K. What fraction of its total emissive power (σT⁴) falls between the wavelengths of 50 and 51 µm? Using the second of the equations, we compute F0–λT for both 50 and 51 µm. We find that the defined band represents 0.156983 percent of the total output of the blackbody. This result compares very well with the value obtained from an on-line blackbody calculator,2 which gives the result of 0.1565362 percent. While not as accurate as a computation using Planck's integral, the result is certainly adequate for a quick assessment.
References 1. R. Siegel and J. Howell, Thermal Radiation Heat Transfer, Appendix A, McGraw-Hill, New York, 1972. 2. http://thermal.sdsu.edu/testcenter/javaapplets/planckRadiation/blackbody.html, 2003.
BRIGHTNESS OF COMMON SOURCES
1. The Sun near the equator at noon has a brightness of 10⁵ lux and emits about 10²⁶ W.
2. A full Moon is 500,000 times dimmer (0.2 lux).
3. A super-pressure mercury lamp emits about 250 W/cm² sr.
4. A 60-W bulb emits approximately 50 lux, which is 50 lm/m² at 1 m.
5. A 4-mW HeNe laser emits about 10⁶ W/cm² sr.
6. The power produced by a quasar is about 10⁴⁰ W, assuming that the quasar emits uniformly in all directions.
Discussion
It is interesting to compare these brightnesses to the sensitivities of some common visible sensors. Note that a fully dark-adapted eye, under ideal conditions, can produce a visual sensation with about 3 × 10⁻⁹ lux (lm/m²). This can be converted into photons per second by noting that the dark-adapted eye has a peak sensitivity at 510 nm and that, at that wavelength, the conversion from lumens to watts is 1725 lm/W. Thus, the dark-adapted (scotopic) eye, having an entrance area of about 13 × 10⁻⁶ m², can sense about 22 × 10⁻¹⁵ W. Each photon at 510 nm carries 390 × 10⁻²¹ joules, so the eye is sensitive to about 58 photons/sec. This is confirmed by the Burle E-O Handbook,2 which quotes work showing the sensitivity of the dark-adapted eye at 58 to 145 photons/sec. In contrast, photographic film grays if 2 × 10⁻³ lux is imposed for 1 sec. At a film resolution of 30 lines per millimeter, we could see the effect on 10⁻⁹ m² or, with 10⁻¹¹ lumens, about 3,000 photons at 510 nm.
References 1. M. Klein, Optics, John Wiley & Sons, New York, p. 129, 1970. 2. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 121, 1974, http:// www.burle.com/cgi-bin/byteserver.pl/pdf/Electro_Optics.pdf, 2003.
CALIBRATE UNDER USE CONDITIONS The calibration of an instrument for a specific measurement should be conducted such that, to whatever extent possible, the results are independent of instrument artifacts. Moreover, the calibration should be conducted under conditions that reproduce (or closely approximate) the situations under which field measurements are to be made.
Discussion
Sometimes, calibrations of sufficient accuracy may be done in environments that differ slightly from their use environment, but this approach generally should be avoided. The radiometric and environmental conditions should simulate those that are likely to be experienced in use. This rule is useful when designing calibration facilities, determining calibration requirements, and understanding the usefulness of a previous calibration (see Fig. 14.2). The purpose of a calibration is to establish a known relationship between the output of an electro-optical sensor and a known level of incident flux, which can be traced to primary standards (e.g., NIST in the U.S.A.). As stated above, when the flux is to be measured at different times and places or with different instruments, the results should be the same (or of a known function of each other). Sensors need to be fully characterized so that their contribution can be estimated, allowing for appropriate corrections. Electro-optical sensors have quirky attributes that make their output a complex, nonintuitive function of the total input, which includes spectral radiant flux from the background, the temperature of the sensor, the expected radiant flux on the aperture, similar target geometry, background, polarization, vibration, and so on. Also, measurements must be made a sufficient number of times to determine repeatability and an estimate of calibration uncertainty.

FIGURE 14.2 Example of a multiple blackbody test facility. (Courtesy of FLIR Systems Inc.)
EFFECTIVE CAVITY EMISSIVITY
The effective emissivity of a cylindrical cavity is1,2

\varepsilon_{eff} = \frac{\varepsilon}{\varepsilon\left(1 - \dfrac{A}{S}\right) + \dfrac{A}{S}}

where
ε = emissivity of the material
A = area of the exit port of the cavity
S = total surface area of the cylinder
Discussion
Just about any shape that has a large surface area compared with the surface of the exit aperture will create a high effective emissivity, regardless of the inherent emissivity of the materials. For example, a cone with an aspect ratio (ratio of length to aperture diameter) of 6 will have an effective emissivity of 0.995.1 Even a stack of shiny razor blades, when viewed looking at the sharp edges, is extremely black. Reference 2 provides an equation that includes an additional level of detail and applies to general closed shapes. In this formulation,

\varepsilon_{eff} = \frac{\varepsilon(1 + k)}{\varepsilon\left(1 - \dfrac{A}{S}\right) + \dfrac{A}{S}}

where

k = (1 - \varepsilon)\left(\frac{A}{S} - \frac{A}{S_o}\right)

So is the surface area of a sphere whose diameter is equal to the depth of the cavity. That is, So is defined by the distance from the exit plane to the deepest point of the cavity. For example, consider a sphere of diameter 50 cm and an opening of 1 cm. Because S and So are nearly the same, k ≈ 0, yielding the equation in the rule. Using the rule and the dimensions of the sphere mentioned above, we can estimate the emissivity of the cavity as

\varepsilon' = \frac{\varepsilon}{\varepsilon(1 - 0.0004) + 0.0004}

If ε ≈ 1, then ε′ = 1, as expected. However, suppose that the material is very reflective and ε ≈ 0.1. Then,

\varepsilon_{eff} = \frac{0.1}{0.1(0.9996) + 0.0004} = 0.9964

This very high emissivity is consistent with our experience with blackbodies, which is that the surface material is unimportant if the shape is approximately a closed surface with a small hole. Yet another approach, but with less adaptability, appears in Ref. 3. For a typical cylindrical cavity (one end open and the other closed) with a diameter-to-length ratio of about 1:2, the cavity's effective emissivity can be estimated from

\varepsilon_{cavity} = 0.8236 + 0.43\varepsilon - 0.367\varepsilon^2 + 0.113\varepsilon^3

where ε = surface emissivity. This polynomial equation allows the computation of emissivity for a cylinder. In any other case, the effective emissivity depends on the cavity shape and surface properties. Note that an ε of 0.8 results in a cavity emissivity of about 0.99. These results apply in the wavelength range of 2.2 to 4.7 µm. Figure 14.3 illustrates the functional form of the polynomial equation.
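Both forms are one-liners in code. The sketch below evaluates the general cavity expression and the cylindrical-cavity polynomial; the function names are ours, and the sphere example simply reuses the A/S = 0.0004 ratio from the text.

```python
def cavity_emissivity(eps, A_over_S, A_over_So=None):
    """Effective emissivity of a cavity with wall emissivity eps and port-to-surface ratio A/S.
    Supplying A/So (the Ref. 2 form) adds the k correction; omitting it gives the simpler rule."""
    k = 0.0 if A_over_So is None else (1.0 - eps) * (A_over_S - A_over_So)
    return eps * (1.0 + k) / (eps * (1.0 - A_over_S) + A_over_S)

def cylindrical_cavity_emissivity(eps):
    """Polynomial fit of Ref. 3 for a 1:2 cylindrical cavity (2.2 to 4.7 um)."""
    return 0.8236 + 0.43 * eps - 0.367 * eps**2 + 0.113 * eps**3

print(cavity_emissivity(0.1, 0.0004))        # ~0.996, the shiny-sphere example above
print(cylindrical_cavity_emissivity(0.8))    # ~0.99
```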
References 1. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, pp. 68–69, 1969. 2. W. Wolfe and G. Zissis, Eds., The Infrared Handbook, ERIM, Ann Arbor, MI, pp. 2–3. 3. R. Bouchard and J. Giroux, “Test and Qualification Results on the MOPITT Flight Calibration Source,” Optical Engineering, 36(11), p. 2992, November 1997.
FIGURE 14.3 Functional form of the polynomial equation.
THE MRT/NE∆T RELATIONSHIP
For IR systems, the minimum resolvable temperature (MRT) can be forecast from the noise equivalent delta (or differential) temperature (NE∆T) by

MRT \approx k\,\frac{NE\Delta T}{MTF_{sys}(f)}

where
MRT = minimum resolvable temperature
k = proportionality constant (Holst indicates that this should be 0.2 to 0.5)
NE∆T = noise equivalent delta temperature
MTFsys(f) = system-level modulation transfer function at spatial frequency f
Discussion This is an approximation only, accurate for mid-spatial frequencies, and it applies to thermal imaging IR sensors only. It assumes that operator head movement is allowed in the testing. MTF and MRT must be at the same spatial frequency, and MTF should account for line-of-sight (LOS) stability and stabilization. This rule can be used to estimate a hard-to-calculate quantity (MRT) from two easily calculable (or determined) quantities (NE∆T and MTF). Conversely, it may be used to estimate the MRT when an MRT test (or its data) is not convenient. The sensitivity of thermal imaging sensor systems seems to fall off linearly with a decrease in the modulation transfer function. For IR systems, this can be related to the NE∆T. The MRT is usually somewhere between 1/2 and 1/5 times the NE∆T.
Reference 1. G. Holst, “Minimum Resolvable Temperature Predictions, Test Methodology and Data Analysis,” Infrared Technology XV, Vol. 1157, SPIE Press, Bellingham, WA, pp. 208-216, 1989.
THE ETENDUE OR OPTICAL INVARIANT RULE
In a lossless optical system, the etendue is constant at all planes crossed by the light rays. Etendue is the product of the area of the bundle of rays crossing a surface and the solid angle from which those rays come (or to which they go).

C = n^2 A \Omega

where
C = a numeric constant for a given detector pixel size and wavelength equal to about λ²/2N, where N is the number of pixels (This value is called the etendue of the system.)
A = area of the optics
n = index of refraction of the medium in which AΩ is measured
Ω = solid angle field of view
Discussion
The numerical value of the constant depends on the optical system. A and Ω could be the area of a pixel and the solid angle of the light falling on the pixel, or they could be the area of the optical pupil and the solid angle of the instantaneous field of view. The term n is the refractive index of the medium in which the AΩ product is measured. Often, but not always, n is unity. Detectors or microscope objectives immersed in a high-index medium, and light going from air into an optical fiber, are examples of the need to account for n. For the rest of this discussion, the above equation is true when the index of refraction of the medium in which the rays are traveling is the same throughout the system. The etendue relationship is a basic property of radiometry and optics. The expression above, simplified to AΩ = λ², can be derived from the diffraction theory of optics. The etendue of a system is defined by the least optimized segment of the system. Once it is known, the limit of performance of the entire optical train is determined. Additionally (with reference to Fig. 14.4), we can write
A_d \Omega_i = A_s \Omega_o = A_o \Omega' \approx A_o (IFOV)^2 \approx C_d \lambda^2

where
Ad = area of the entire detector (or pixel in an array)
Ωi = solid angle subtended by the detector in image space
As = area of interest in the scene
Ωo = solid angle of the scene in object space
FIGURE 14.4 Etendue defines the efficiency with which light is conveyed from place to place in an optical system.
Ao = area of the optics
Ω′ = solid angle of the detector projected into object space
IFOV = instantaneous field of view of a detector (FPA) pixel
λ = wavelength (for a broad bandpass system, use the midpoint for this)
Cd = a constant determined by the pixel geometry's relationship to the blur diameter (see below). (Generally, for imaging systems, this is from 1.5 to 10, although it may be higher for systems in which it is advantageous to oversample the blur, such as star trackers.)
In a diffraction-limited system, the blur diameter is equal to the Airy disk or 2.44(λ/D)f, where D is the aperture diameter, and f is the focal length. If a square detector is matched to the blur, its area is the square of this, or 5.95(λ²/D²)f². The solid angle seen by the detector is Ao/f², so we have the product

A_d \Omega_i \approx 5.95\left(\frac{\lambda^2}{D^2}\right)(f^2)\left(\frac{A_o}{f^2}\right) \approx 6\lambda^2

In systems that are not diffraction limited, the "6" (or Cd) is replaced by a larger number, but the important λ² dependence remains. Conversely, if the blur spot is oversampled (large compared to the detector pixel size), this will be smaller (e.g., 1.5λ² if the radius of the Airy disk is matched to that of the detector pixel). Similarly, a pixel (again, matched to the Airy disk) projected onto the scene has an area of [2.44(λ/D)R]², where R is the range, and the solid angle is [2.44(λ/D)R]²/R², or simply 5.95λ²/D², and (for an unobscured circular aperture)
2
2 ⎛ πD ----------⎞ ≈ 4.7λ ⎝ 4 ⎠
The etendue works regardless of aperture and pixel shape (and is frequently used by spectroscopists working with slits). When properly applied, the rule allows one to estimate another system’s (e.g., a competitor’s) useful aperture, f/#, or detector size. It provides a determination of collection aperture for radiometric applications. It can be used for estimates when coupling light into fibers, because there is a small cone, defined by the equation, that allows acceptance of light into a fiber. This rule goes by many different names. Spectroscopists like etendue, ray tracers like Lagrange theorem, and radiometry buffs like to say optical invariant. The important relationship is that the useful aperture and the field of view are inversely related. The numerical value of their actual relationship depends on the optical design, the chosen paraxial ray, and the height of the object or detector. The numerical constant is not important (you can choose that based on your design or assumptions). What is important is the understanding that increasing one must result in a decrease in the other. Hence, large multiple-meter astronomical telescopes have small fields of view, and wide-angle warning systems have very small apertures. You can have one but not the other. Additionally, a zoom lens will have a larger effective aperture when viewing its narrow field (resulting in a brighter image) as compared to its wide field. As Longhurst puts it, In paraxial geometrical optical terms, the ability of an optical system to transmit energy is determined by a combination of the sizes of the field stop and the pupil in the same optical space; it is measured by the product of the area and the pupil in the same optical space; it is measured by the product of the area of one and the solid angle sub-
288
Chapter Fourteen
tended at its center by the other. This is the three-dimensional equivalent of the Helmholtz–Lagrange invariant or the Sine Relation.6
Given the size of a detector and the loose approximation that it can “sense” energy from one steradian of solid angle, the upper limits of the field of view and capture area in any associated optical system are immediately calculable. The energy captured (using a small angle approximation) is approximately Ωo or Ao/R2. For a “fast” system, this is on the order of unity, so the energy captured is approximately Ad/Ao. In any event, a small Ad implies small capture area for a given FOV (hence, low energy capture). A large IFOV implies a small capture Ao area for a given detector area (Ad). Hobbs8 provides a particularly nice discussion of the limits of use of the concept of etendue and points out that the etendue of a Gaussian beam is (π2/16)λ2. Any fully coherent beam has an etendue of exactly λ2/2. This follows very neatly from the reciprocity theorem or the conservation of energy.
Reference 1. Private communications with Dr. J. Richard Kerr, 1995. 2. Private communications with Dr. George Spencer, 1995. 3. R. Kingslake, Optical Systems Design, Harcourt Brace Jovanovich, Orlando, FL, pp. 36–38, 43–44, 1988. 4. A. Siegman, Lasers, University Science Books, Mill Valley, CA, p. 672, 1986. 5. C. Wyatt, Radiometric System Design, Macmillan, New York, pp. 36, 52, 1987. 6. R. Longhurst, Geometrical and Physical Optics, Longman, New York, pp. 465–467, 1976. 7. I. Taubkin et al., “Minimum Temperature Difference Detected by the Thermal Radiation of Objects,” Infrared Physics and Technology, 35(5), p. 718, 1994. 8. P. Hobbs, Building Electro-Optical Systems: Making It All Work, Wiley Interscience, New York, p. 27, 2000.
IDEAL NETD SIMPLIFICATION
One approximation for the noise equivalent temperature difference (NE∆T or NETD) is

NETD^*_{ideal}(\lambda_{co}) \approx k T^2 D^*_{ideal@\lambda_{co}}

where
NETD*ideal = ideal noise equivalent temperature difference achievable from a 300 K nominal for a given spectral cutoff λco
k = Boltzmann's constant (1.38 × 10⁻²³ J/K)
D*ideal@λco = specific detectivity of an ideal photoconductor with a cutoff wavelength λco
T = temperature in kelvins
Discussion
As photodetectors get better and better, background-limited performance (BLIP) conditions are more often achieved. The above rules give a simple approximation for the minimum (best) NETD achievable. Anyone claiming to have a system that can do better is uninformed, observing at a different temperature, or using multiple bands. Although the authors cannot think of a single system that achieves the incredibly small NETD calculated above, such systems are probably just beyond your current calendar.
These rules are based on definitions of NETD and basic radiometric principles applied to a condition in which the noise is dominated by the variation in the arrival of the photons (a BLIP condition). To a first approximation, the NETD*ideal can be approximately scaled by

\frac{D^*_{ideal}}{D^*_{actual}}

However, as always when using D*, one must beware of the D* measurement parameters (does it include 1/f noise, readout noise, cold shield, and so on?) and be conscious of well size (detector readout capacity can limit integration time). The reference provides a simple expression for the minimum noise equivalent temperature difference achievable (BLIP conditions) from a 300 K target:

5.07 \times 10^{-8}\left(\frac{300}{T}\right) \;(\mathrm{K\ cm\ s^{1/2}})
This result is normalized for an integration time (t) of 1 sec, a photodetector area (A) of 1 cm², and exposure to radiation coming from a hemispheric solid angle around the detector. This result is analogous to the standard definition for detector performance, D*. For conditions different from those just stated, divide the result above by \sqrt{At}. If the exposure solid angle is not π, divide the above by \sqrt{\Omega/\pi}.
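The units of the rule work out to the same K cm s^1/2 as the normalized figure above, which the short sketch below illustrates. The D* value plugged in is only an assumed, representative ideal LWIR detectivity chosen for illustration, not a number from the reference.

```python
k_boltzmann = 1.38e-23   # Boltzmann's constant, J/K

def netd_star_ideal(T, dstar_ideal):
    """Normalized ideal NETD (K cm s^0.5) from the rule NETD* ~ k T^2 D*_ideal."""
    return k_boltzmann * T**2 * dstar_ideal

# Assuming an ideal LWIR photoconductor D* of a few 1e10 cm Hz^0.5/W against a 300 K background,
# the result lands near the 5e-8 K cm s^0.5 figure quoted above.
print(netd_star_ideal(300.0, 4e10))
```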
Reference 1. I. Taubkin et al., “Minimum Temperature Difference Detected by the Thermal Radiation Of Objects,” Infrared Physics and Technology, 35(5), p. 718, 1994.
LABORATORY BLACKBODY ACCURACY
When used in a real-world setup, laboratory blackbodies are radiometrically accurate to only a few percent.
Discussion
When blackbodies are incorporated into test facilities, several practical constraints limit the radiometric accuracy. First, there is a small temperature cycle caused by the control bandwidth. This can be on the order of one-tenth of a percent of the temperature. Second, a minor temperature uncertainty results from the separation of the emitting surface from the heating and cooling mechanisms and the temperature measurement devices, all contained within the blackbody (but not on the radiating surface). The resultant temperature gradients are small, but small changes in temperature can result in significant changes in radiant exitance. For example, a 1 K bias (at 300 K) alone causes a 3.7 percent radiometric error in the 3- to 5-µm band and a 1.7 percent error in the 8- to 12-µm band. Third, black coatings are not perfectly black; rarely is their emissivity greater than 0.95. In fact, after measuring 16 "black" surfaces, one of the authors (Miller) could not find a single instance of a reflection lower than 0.06 in the LWIR. There tends to be a slight emissivity variance across the aperture and a small reflectance. Fourth, the blackbody may have contaminants on it or be slightly (yet unknown to the user) damaged. This rule is based on empirical observations of the state of the art. Most commercial blackbodies have a few percentage points variation across their aperture because of (1) reflections, (2) emissivity that varies with wavelength and viewing angle, and (3) temperature inaccuracies. Rarely are common commercial blackbodies traceable to a National Radiometric Standard, which should be attempted in all cases. Blackbodies employing phase change materials and laboratories that exercise extreme care can claim better accuracy. Conversely, poor radiometric facilities and blackbodies used outside their intended temperature range, or aged ones, can be much worse. Radiometric accuracy tends to decrease as blackbody temperature decreases. The plot in Fig. 14.5 demonstrates the sensitivity of photon flux to minor changes in temperature, emissivity, and reflection. The plot was made by comparing the calculated photon flux from a perfect blackbody (emissivity of 1, exact temperature, and no reflection) to that with an emissivity of 0.9985, a 0.15 percent temperature reduction, and a reflection of 0.0015 of a 300 K background. MWIR refers to a 3- to 5-µm bandpass, and LWIR is an 8- to 12-µm bandpass. The slight bumpiness of the LWIR results near 300 K is a result of the reflection term, which slightly offsets the reduced radiance resulting from reducing the temperature and emissivity. Note that a blackbody with an emissivity that deviates from perfection by only 0.0015 is very good indeed.
FIGURE 14.5 Calibration error in percent for temperature error of 0.15 percent and emissivity error of 0.15%. This figure shows the performance of blackbodies for different temperatures of operation (see text for details).
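The sensitivity figures quoted above (for example, the 3.7 and 1.7 percent errors from a 1 K bias) can be checked by integrating Planck's law over the band, as in the sketch below. The integration step count and the function names are our own choices for illustration.

```python
import math

C1 = 3.7418e4    # first radiation constant, W um^4 / cm^2
C2 = 1.4388e4    # second radiation constant, um K

def band_exitance(T, lam1_um, lam2_um, steps=500):
    """In-band radiant exitance (W/cm^2) by midpoint integration of Planck's law."""
    dlam = (lam2_um - lam1_um) / steps
    total = 0.0
    for i in range(steps):
        lam = lam1_um + (i + 0.5) * dlam
        total += C1 / (lam**5 * (math.exp(C2 / (lam * T)) - 1.0)) * dlam
    return total

def percent_error_from_bias(T, bias_K, lam1_um, lam2_um):
    """Percentage radiometric error caused by a temperature bias of bias_K at temperature T."""
    ideal = band_exitance(T, lam1_um, lam2_um)
    return 100.0 * (ideal - band_exitance(T - bias_K, lam1_um, lam2_um)) / ideal

print(percent_error_from_bias(300.0, 1.0, 3.0, 5.0))    # ~4 percent in the MWIR
print(percent_error_from_bias(300.0, 1.0, 8.0, 12.0))   # ~1.7 percent in the LWIR
```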
LAMBERT'S LAW
Most surfaces can be considered to be diffuse (or "Lambertian"), and their emittance and reflection follow this law:

L = \frac{M}{\pi}

where
L = radiance in watts per square meter per steradian (W/m²/sr)
M = radiant exitance in watts per square meter (W/m²)
Discussion
In terms of radiant intensity,

I_\theta \propto I \cos\theta

where
Iθ = radiant intensity in watts per steradian (W/sr) from a surface viewed at an angle θ from the normal to the surface
I = emitted radiant intensity in watts per steradian (W/sr)
θ = the viewing angle between the normal to the emitting surface and the receiver
The second expression says that as the angle from normal incidence increases, the radiance projected toward a viewer goes down as cos θ. This rule assumes that the surface is not specular, which is a good assumption for the visible spectrum unless you know differently. In reality, most materials have a specular component to their reflectivity, along with a Lambertian component. For shiny surfaces, the specular component dominates. This simple rule explains why the full Moon looks uniformly bright right up to its edges, even though the surface area per unit angle increases with cos θ. It is true that you are viewing dramatically more surface area per unit angle near the edge. But, at the same time, the Lambertian properties of the surface reduce the radiation from that area that is directed toward your eye. The increased area per angle is cancelled by the smaller radiation into that angle, and all locations on the disk seem to have the same intensity. As the angle of incidence decreases, the surface is less likely to exhibit Lambertian properties. At grazing incidences, most surfaces exhibit a specular quality. Unless polished, most surfaces reflect and emit as a diffuse surface in the visible wavelengths, and at high angles of incidence. This is because most surfaces are "rough" at the scale of the wavelength of light. Therefore, they reflect and emit their radiation following Lambert's cosine law. Mirrors do not follow this law but, rather, the laws of reflection for geometrical optics. Although simple, the first equation represents a powerful rule. It enables one to quickly change from a spectral emittance defined by the Planck function to a spectral radiance merely by dividing by π. This works for reflection as well. Note that the expression is M/π, not M/2π. The conversion factor is just one π. The projection of the hemisphere (2π) into which the object radiates onto a two-dimensional flat surface yields a disk (π). As wavelength increases, a given surface is less likely to be Lambertian, because the ratio of surface roughness to the wavelength becomes smaller, and random scattering decreases. This effect goes as the inverse of the wavelength squared, as shown in a rule in Chap. 13, "Optics." A diffuse surface in the visible is frequently specular in the IR, and most surfaces are specular in the millimeter-wave regime, which can sometimes make imaging challenging—although this does facilitate manufacture of high-quality millimeter-wave mirrors. Said another way, the metrics used for mirror quality in the visible and infrared wavelengths (say, λ/20) apply as well at much longer wavelengths. Because the wavelength is longer, the tolerance for surface roughness increases.
LOGARITHMIC BLACKBODY FUNCTION When a blackbody’s output is plotted versus wavelength on a log-log graph, the following is true: 1. The shape of the blackbody radiation curve is exactly the same for any temperature. 2. A line connecting the peak radiation for each temperature is a straight line.
3. The shape of the curve can be shifted along the straight line connecting the peaks to obtain the curve at any temperature.
Discussion The spectral exitance of a blackbody can be determined quickly using the following method. First, trace the general shape of the Planck function in log-log units (examples appear in Fig. 14.6). Place it on the graph, matching the peak of your trace to the tilted peak line. The spectral exitance can be determined by moving the trace up and down that line, setting the peak to the desired temperature. This is illustrated in the figure, which shows the blackbody spectral exitance for a range of temperatures. The plots cover the spectral range from 100 to 2000 nm (2 µm) and temperatures from 5000 to 12,000 K. A quick look shows that the curves indeed have the same shape (as stated in item no. 1 above). The straight line illustrates item no. 2 of the rule. A little imagination will convince the reader that no. 3 is correct as well. Properly moved, each curve will lie on top of its neighbor; the curves have the same shape. It should be noted that the straight line on the curve is a representation of Wien’s displacement law. It shows that the product of the wavelength at which the curve has a peak and the temperature is a constant.
Reference 1. C. Wyatt, Radiometric Calibration, Theory and Methods, Academic Press, Orlando, FL, pp. 32–33, 1978.
FIGURE 14.6 Illustration of Wien's law and the assertion that a single curve, duplicated and moved, can represent any blackbody function.
NARROWBAND APPROXIMATION TO PLANCK'S LAW
Martin1 gives a narrowband approximation to the Planck radiation law as follows:

\Phi_\lambda = \frac{2c\,\Delta\lambda}{\lambda^4}\,e^{-1.44/\lambda T}

where
Φλ = flux in photons per square centimeter per second per steradian at the center wavelength
c = speed of light (3 × 10¹⁰ cm/sec)
∆λ = difference in wavelength across the bandpass (in centimeters) (λh – λl), "h" meaning high wavelength and "l" meaning low wavelength
λ = median wavelength (in cm)
T = temperature of interest (e.g., background or scene temperature) in kelvins
Discussion
Planck's blackbody integral law becomes algebraic when the difference between the upper and lower wavelengths is less than about 0.5 µm. This closed-form expression of the Planck function (with units of photons per second per square centimeter per steradian) varies exponentially as

\Phi_\lambda(T) = \Phi_\lambda\, e^{-X(\lambda)/T}

where
Φλ(T) = photon flux at a given wavelength for a given blackbody temperature
Φλ = 2c ∆λ λ⁻⁴
X(λ) = hc/kλ
T = temperature
This rule works well for hyperspectral and multispectral systems where each bandpass is less than 0.25 µm. The narrower the strip of the spectrum, the more accurate this approximation becomes. Typically, the accuracy is the ratio of the width of the strip to the wavelength. This rule assumes a narrow band, for bands of 1 µm or less. Generally, this provides an accuracy of a few percent for bands less than 1 µm in width. This is good for a narrowband approximation to Planck's radiation law when calculated in photons. This is very useful in normal IR practices when attempting to calculate the amount of background flux on a detector (which may limit integration time by filling up the wells). For wide bands, the photon flux can be found by adding up successive pieces of the bandpass using the above equation (e.g., numerical integration on a spreadsheet). This narrowband approximation is important, as its derivative at a specified temperature can be taken in closed form to allow the contrast function to be found for a given temperature. Aficionados will realize that the 1.44 relates to the classic radiation constant derived from hc/k. Consider the solar spectrum and assume the Sun to be a 5770 K blackbody source (see the rule, "Blackbody Temperature of the Sun," p. 33), and let's calculate the photon flux for a 1-µm wide bandpass at 0.6 µm. In this case,

\Phi_\lambda = \frac{2c\,\Delta\lambda}{\lambda^4}\,e^{-1.44/\lambda T} = 4.63 \times 10^{27}\, e^{-4.16}\,\Delta\lambda = 7.23 \times 10^{25}\,\Delta\lambda(\mathrm{cm})\ \mathrm{photons/sec\ cm^2}
If we multiply 7.23 × 10²⁵ by a 1-µm bandpass (10⁻⁴ cm), we get 7.2 × 10²¹ photons/sec cm². Using standard theory, the flux is 7.33 × 10²¹ photons/sec cm², so the rule provides reasonably accurate results.
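The worked example is a one-line computation, as the sketch below shows; the function name is ours.

```python
import math

c = 2.9979e10    # speed of light, cm/s

def narrowband_photon_flux(center_um, width_um, T):
    """Narrowband approximation (2c*dlam/lam^4) * exp(-1.44/(lam*T)), per the rule above."""
    lam_cm = center_um * 1.0e-4
    dlam_cm = width_um * 1.0e-4
    return 2.0 * c * dlam_cm / lam_cm**4 * math.exp(-1.44 / (lam_cm * T))

print("%.2e" % narrowband_photon_flux(0.6, 1.0, 5770.0))   # ~7.2e21, matching the example above
```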
Reference 1. Private communications with Dr. Robert Martin, 1995.
THE PEAK WAVELENGTH OR WIEN DISPLACEMENT LAW The peak wavelength (in micrometers) of a blackbody expressed in watts is approximately 3000 divided by the temperature in kelvins.
Discussion According to Planck’s law, a blackbody will have an energy distribution with a unique peak in wavelength. For a blackbody, this peak is solely determined by the temperature and is equal to 2898/T. This assumes the emitter is indeed a blackbody and not a spectral emitter. Hudson1 points out that about 25 percent of the total energy lies at wavelengths shorter than the peak, and about 75 percent of the energy lies at wavelengths longer than the peak. Additionally, Hudson gives the following shortcuts: To calculate the wavelengths where the energy is half of the peak (half power or at the 3-dB points), divide 1780 by the temperature in kelvins for the lower end and 5270 for the higher end. You will then find: Four percent of the energy lies at wavelengths shorter than the first half power point. Sixty-seven percent of the energy lies between the half points. Twenty-nine percent of the energy lies at wavelengths longer than the longest half power point.1
Of course, there are other ways to describe blackbodies, and each has its own version of the Wien law. For example, the maximum occurs for photon emission from a blackbody when the product of wavelength and temperature equals 3670 µmK.
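As a quick illustration (a sketch of ours using the constants quoted above), the peak and half-power wavelengths are one-line calculations in Python:

def wien_points(temp_k):
    """Peak and half-power wavelengths (um) for a blackbody expressed in watts."""
    peak = 2898.0 / temp_k        # Wien displacement law
    half_low = 1780.0 / temp_k    # short-wavelength half-power point
    half_high = 5270.0 / temp_k   # long-wavelength half-power point
    return half_low, peak, half_high

print(wien_points(300.0))   # ~ (5.9, 9.7, 17.6) um for a 300 K scene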
Reference 1. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, New York, pp. 58–59, 1969.
PHOTONS-TO-WATTS CONVERSION
To convert a radiometric signal from watts to photons per second, multiply the number of watts by the wavelength (in micrometers) and by 5 × 10¹⁸.

Photons per second = λ (in micrometers) × watts × 5 × 10¹⁸
Discussion
The actual conversion can be stated as watts = (hc/λ) × photons/sec, so photons per second = (watts × λ)/(hc) = watts × λ (in meters) × 5 × 10²⁴ when all terms with the dimension of length are in meters. Note that the term 5 × 10²⁴ derives from the inverse product of h and c. There are
one million micrometers in a meter, so the constant is 10⁶ smaller if one uses micrometers. If you require more than two significant figures, the constant is 5.0345 × 10¹⁸. Actual results apply only for a given wavelength or an infinitesimally small bandpass. Typically, using the center wavelength is accurate enough for lasers or bandwidths of less than 0.2 µm. However, if doing this conversion for wide bandpasses, set up a spreadsheet and do it in, for example, 1/20-µm increments. The rule is valid only if the wavelength is expressed in micrometers. The constant must be adjusted for wavelengths expressed in other units.
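A short Python sketch (ours) of the conversion, with the exact h-and-c form alongside for comparison; the 1-mW, 1.55-µm example is purely illustrative:

h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

def photons_per_second(watts, wavelength_um):
    """Rule-of-thumb conversion: photons/s = watts * lambda(um) * 5.0345e18."""
    return watts * wavelength_um * 5.0345e18

def photons_per_second_exact(watts, wavelength_um):
    """Exact monochromatic conversion: photons/s = watts * lambda / (h*c)."""
    return watts * (wavelength_um * 1.0e-6) / (h * c)

print(photons_per_second(1.0e-3, 1.55))        # ~7.8e15 photons/s for 1 mW at 1.55 um
print(photons_per_second_exact(1.0e-3, 1.55))  # essentially the same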
QUICK TEST OF NE∆T If an infrared system can image the blood vessels in a person’s arm, then the system is achieving a NE∆T or MRT (whichever is proper) of 0.2 K or better.
Discussion
Because they are transporting hot blood, vessels in a person's arm or head tend to have a temperature difference of several degrees above the outside skin. However, the thermal and optical transmission through even thin skin is low, so it usually appears that veins have 0.1 to 0.3°C higher temperatures than the outside skin temperature. If your camera can image them, then it has a NE∆T of this amount or better (smaller). This phenomenon can be observed with any infrared camera that has this level of performance. Keep in mind that human skin has a temperature of about 93°F (not 98.6°, the temperature deep in the body). Using the Wien displacement law, we find that the peak of the blackbody spectrum of skin is about 9.3 µm. The emissivity of skin is widely reported to be around 0.97 or above in the infrared. A rule in Chap. 17, "Target Phenomenology," provides more information on the signature of the human body. This rule provides crude approximations only. It does not account for adverse effects such as focusing, atmospheric transmission, and the like. It doesn't tell you how good the system is for subpixel detection—just that it is better than a NE∆T of 0.2 K. It seems to be more difficult to image the veins in a woman's arm than in a man's. This is probably due to the extra layer of fat and less muscle (and therefore smaller veins). Imaging a woman's arm veins would often indicate an even better NE∆T. This is useful for quick estimates of system performance. This is especially useful when the camera is not yours and is publicly set up at a conference, lab demo, or some other function.
THE RULE OF 4f/#²
From an extended background, the scene power on the detector can be represented as

Ed ∝ Ms / (4(f/#)²)

where Ed = irradiance at the detector
Ms = radiant exitance of the scene
f/# = effective f/# (effective focal length/effective aperture)
Discussion
This rule is based on basic radiometry and is an outgrowth of associated rules in this book. It can be shown that, for an extended object (e.g., the background), the power on the detector depends only on the f/# and not the aperture size. Consider that the per-pixel power entering the aperture (ignoring the atmosphere and expressed in watts) is

(NπD²/4R²)(IFOV)²(R²)

where N = source radiance in watts per square centimeter per steradian (W/cm²/sr)
D = aperture size
IFOV = angular size of a detector (the detector linear dimension divided by the focal length)
R = distance to the target
Rewriting, we get

(NπD²/4)(detector dimension/f)²

where f is the focal length of the optics, so the power on the detector pixel is

[M/(4(f/#)²)](detector dimension)²

because M = πN, and f/# = f/D. From this, we obtain the formula in the rule by dividing both sides by the square of the detector dimension. This handy rule allows easy use and estimation of the relationship between the power from the scene and the expected power on the detector. It also provides for calculations of the effect of f/# on detector power. This can help answer questions such as, "Will the new f/# cause my detector pixel's wells to overfill?" An extended source will result in a flux on the detector that is governed by the f/# of the optical system only. Again, the actual calculation will need to include optical transmission, any losses in the intervening medium, blur spot effects, and consideration for spectral filtering.
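A minimal Python sketch of the rule (our own illustration); the optional transmission factor is an assumption added only to show where real-world losses would enter:

def detector_irradiance(scene_exitance, f_number, optics_transmission=1.0):
    """Irradiance at the detector (same units as the exitance) from an extended scene.

    E_d = M_s / (4 * (f/#)^2), optionally scaled by an optical transmission factor.
    """
    return optics_transmission * scene_exitance / (4.0 * f_number**2)

# Example: halving the f/# (f/4 -> f/2) quadruples the flux on the pixel
print(detector_irradiance(1.0e-3, 4.0))   # 1.56e-5
print(detector_irradiance(1.0e-3, 2.0))   # 6.25e-5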
Chapter 15
Shop Optics
The industry is full of rules of thumb for the shop manufacture and test of optics. Included here are a select few that should be available to anyone involved in electro-optics. Shop optics began many centuries ago, when the first optical hardware was made. To early investigators, the world was full of rules of thumb and principles explained by thought process alone. Certainly, shop optics developed earlier than 425 B.C., when the first known written account of making lenses appeared (The Clouds, by Aristophanes). Glass was discovered about 3500 B.C., and crude lenses dating from 2000 B.C. were found in Crete and Asia Minor. Euclid, in his 300 B.C. publication Optics, may have been the first to write about curved mirrors. The Romans polished gemstones, and, by the eleventh century, glassblowers were making lenses for spectacles and research. Optical shop manufacture and test was flourishing shortly after A.D. 1600 with the invention of both the telescope and the microscope and the popularity of spectacles. Roger Bacon wrote of magnification and applied lenses to look at celestial objects. His reward, a gift from the religious fanatics of his time, was imprisonment. It was during the Renaissance that the likes of Galileo, Lippershey, and others applied optics to telescopes, microscopes, and other visual aids. An interesting note1 (which the reference cautions may be "partly or wholly fiction") is that there is evidence that Lippershey applied for a patent for his telescope and sent one as a gift to the Prince of the Netherlands, Mauritz of Nassau. The prince showed it to many luminaries of the time. Lippershey was eventually denied a patent because, he was told, "too many people have knowledge of this invention." This was indeed true, given that toy telescopes were already for sale in Paris, and other spectacle makers were also claiming to have invented the telescope. The publication of Isaac Newton's Opticks in 1704 was a milestone in the science and engineering of optics. His work was a compendium of theories to explain the optical phenomena that he observed in his optics shop. Thus, spherical polishing has a rich 300-year history at a minimum. Spherical polishing entails using a tool, of about the same size as the optical element, that oscillates and rotates against the surface of the element while flooded with abrasives and water. This technique works very well for spherical optics but does not apply to aspheres. Today, aspheres are usually made by diamond turning or computer-controlled subaperture polishing, although magnetorheological finishing and ion polishing are viable alternatives.
Jean Foucault (1819–1868) gave mirror makers another powerful tool called the knife-edge test. A straight edge, often a razor blade, is used to block the light at the focus of an optic. The appearance of the mirror, when viewed from near the knife edge, is a powerful indication of the figure of the optic. However, you should be careful not to nick your nose while performing this test. The nineteenth century saw the development of slurry-based lapidary grinding and polishing techniques that are still employed today (with machines such as shown in Fig. 15.1). This is how almost all lenses and mirrors were made until the late 1980s. The 1980s and 1990s witnessed several key advancements, including near-net shaping, diamond turning (diamond turning machines such as shown in Fig. 15.2), and precision molding. Today, a new technique called deterministic microgrinding (DMG) is being explored and holds promise for additional commercial use, as does the more advanced technology of growing optics in deposition chambers. Key to modern shop optics are modern materials and testing techniques. Many very precise testing techniques use interferometers. Hence, some of the rules in this chapter address fringes—the light and dark patterns produced by interferometers. The interferometer is one of the most accurate and finest measurement devices known, with measurement accuracy of better than half a millionth of a centimeter (with a tenth of a wave readily achievable). Interferometers can easily display wavefront deformation much smaller than a wavelength, so they are often used to measure the shape of an optical surface. The Fizeau interferometer is one of the most common for optical metrology. Usually, the optical surface under test is compared with a "known" flat or spherical surface. The light waves reflected from the two surfaces produce interference fringes whose spacing indicates the difference between the two surfaces' profiles. If the reference surface is truly flat, the fringe spacing directly indicates the figure (or shape) of the test surface. Dan Malacara's Optical Shop Testing, T. Izumitani's Optical Glass, and H. Karow's Fabrication Methods for Precision Optics are stalwart texts on practical shop optics. Occasionally, papers can be found on optical manufacturing techniques in the journals and
FIGURE 15.1 Traditional lapidary grinding process. (Figure courtesy of FLIR Systems Inc.)
FIGURE 15.2 Modern diamond turning facility. (Figure courtesy of FLIR Systems Inc.)
trade magazines, but skills are usually still handed down from one master optician to the next. There are several good web sites and organizations, such as those sponsored by the University of Rochester.4 Also, the publications and web site of the American Precision Optics Manufacturers Association (APOMA)5 should be checked regularly by those with an interest in the subject.
References 1. “History of the Telescope,” 2003, from www.stormpages.com/swadwa/hofa/ht.html, 2002. 2. D. Golini, “Improved Technologies Make the Manufacture of Aspherical Optics Practical,” OE Magazine, August 2001. 3. W. Wolfe and G. Zissis, The Infrared Handbook, ERIM, Ann Arbor, MI, and the Office of Naval Research, Washington, DC, pp. 3-129 to 3-142, 1978. 4. www.APOMA.org, 2003. 5. www.opticsexellence.org, 2003.
ACCURACY OF FIGURES 1. The “Rayleigh Quarter-Wave” rule states that an optic whose output wavefront is accurate to one-quarter of a wavelength (λ/4) is sufficient for most applications. 2. A figure of λ/15 is required for interferometric, phase sensitive, and critical imaging applications. 3. λ/10 is often sufficient for imaging optics requiring low beam distortion (especially with multiple elements).
Discussion
This rule is based on simplified diffraction theory and Strehl ratios and backed by empirical observations. The "Laser Beam Quality" rule (p. 174) shows that a Strehl of 80 percent is considered to be diffraction limited. This is equivalent to a wavefront of λ/4.5, hence the more general form in the rule. The rule provides a quick estimate of the figure required for a given optical element based on the specific application and a useful criterion for estimating the allowable aberrations in a typical image-forming system. The rule refers to the maximum-to-minimum or peak-to-valley (PV) wavefront error. This is valid for normal applications. Super-high-power lasers, extremely wide fields of view, and other exotic implementations will require more stringent wavefront control. The critical parameter here (λ) should be measured at the wavelength of interest (thus, this is a more difficult specification to meet for a UV system than an imaging millimeter wave system). An optical surface's figure is the shape of the surface. Its quality is measured as the departure from an ideal desired surface, expressed as a fraction of the wavelength at which the optics will be used. It is usually quoted as a PV measurement. Sometimes it is quoted as a root-mean-square (RMS) wavefront error, which is smaller than a PV measurement by about a factor of 4. Anyway, when someone says "λ/x," the larger the value of x, the smaller the departure from the ideal and the better the quality. The appropriate quality requirement depends on its final use. One can afford to be sloppier with the figure for plastic toy binoculars than for an interferometer or a space telescope. In some advanced applications, the figure is defined in terms of the spatial frequencies over which specifications are written. For example, a model for the density of surface figure variance as a function of spatial frequency (k) is that the power spectral density varies as

PSD(k) = A / (1 + k/ko)³

where k = 1/Λ = (kx² + ky²)^(1/2) cycles/cm
x, y = the dimensions of the mirror
Λ = spatial dimension across the surface
A = a constant that sets the amplitude of figure variance
ko, kx, ky = spatial frequencies

For an example of a very high-quality mirror,1 intended for use in a space-based coronagraph, the following parameters were used: A = 2.4 × 10⁵ Å² cm² (with a goal of 6 × 10⁴ Å² cm² between Λ = 40 cm and Λ = 2 cm), and ko = 0.040 cycles/cm.
Rayleigh found that when an optical element has spherical aberration to such an extent that the wavefront at the exit pupil departs from the best fit by 1/4 of a wavelength, the intensity at the focus is diminished by 20 percent or less. He also found that this could be tolerated and was difficult to notice. Subsequent workers also found that when other common aberrations reduce the Gaussian focus intensity by about 20 percent or less, there was little overall effect on the quality of the image. This is the genesis of item 1 of the rule. When manufacturers invoke this rule to describe their product, they usually mean 1/4 of the HeNe laser line at 0.6328 µm. However, the wise engineer will find out exactly what test wavelengths were used and will also note if specifications are quoted as PV, peak-to-peak, RMS, or whatever. For transmissive optical elements, the equivalent surface figure error (SFE) resulting from index of refraction variations in the optical element can also be represented as follows:

SFE = WFE / (∆n – 1)

where ∆n = index change across the surface (such as from air to glass)
WFE = wavefront error
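The conversions mentioned above are easy to script. The Python sketch below (ours) uses the factor-of-4 PV-to-RMS approximation quoted in the discussion and the standard Maréchal approximation for the Strehl ratio, which is not taken from this rule but is consistent with it:

import math

def pv_to_rms(pv_waves):
    """Approximate RMS wavefront error from a peak-to-valley figure spec.

    The discussion notes RMS is smaller than PV by roughly a factor of 4.
    """
    return pv_waves / 4.0

def strehl_from_rms(rms_waves):
    """Marechal approximation (a standard result, not specific to this rule)."""
    return math.exp(-(2.0 * math.pi * rms_waves) ** 2)

rms = pv_to_rms(0.25)                # a "quarter-wave" optic
print(rms, strehl_from_rms(rms))     # ~0.0625 waves RMS, Strehl ~0.86 by this approximation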
References 1. Jet Propulsion Laboratory Technology Announcement, Terrestrial Planet Finder Technology Demonstration Mirror, June 13, 2002. 2. Private communications with W. M. Bloomquist, 1995. 3. Private communications with Tom Roberts, 1994. 4. M. Born and E. Wolf, Principles of Optics, Pergamon Press, New York, pp. 408–409, 1980. 5. J. Miller, Principles of Infrared Technology, Kluwer, New York, p. 64, 1994.
APPROXIMATIONS FOR FOUCAULT KNIFE-EDGE TESTS A Foucault knife-edge test may indicate the following: 1. Spherical aberration is indicated if the shadow shows more than one dark region. 2. Coma is indicated if the shadow pattern consists of rectangularized hyperbolas or an ellipse. 3. Astigmatism is indicated if the shadow is a straight line with a slope and can be made to rotate by using different placements of the knife edge about the optic axis. 4. Additionally, the center of curvature or a defocus error can be tested by observing the pattern as the knife is placed first inside and then outside the focus. The shadow will change sides.
Discussion
This set of rules is based on diffraction theory, plain old geometrical optics, and empirical observations of the patterns from a knife-edge test. Foucault's knife-edge test consists of cutting the image of a point source off with a straight edge and observing the shadows on the mirror. This test is a quintessential optical shop test for aberrations. It is useful for measuring the radius of curvature and as a null test. It is easy to implement, and the experienced optician can derive a wealth of knowledge about the surface being tested. The accuracy of the Foucault knife-edge test can be impressive. It has been estimated that, with the eye alone (assuming a 2 percent contrast), wavefronts can be tested to λ/600. It is frequently employed as an in-process test to determine the status of the figure and the need for additional polishing.
When conducting the knife-edge test, the shadows (sometimes called a Foucaultgram) look like a very oblique illumination of the surface. If the source is to the right of the knife edge, the apparent illumination is from the right. A lump on the surface looks bright toward the light (right, in this case) and dark on the other side. A divot looks dark on the right and light on the left. If the wavefront is perfectly spherical, the Foucaultgram appears as a uniform gray. Some common indications of imperfect optics include
■ "Lemon peel," which indicates local roughness and poor polishing
■ A bright edge toward the light, which indicates a turned down edge
■ A dark edge toward light, which indicates a minor miracle—a turned up edge
The reader should note that the knife-edge test is sensitive to knife-edge placement and test setup. Astigmatism may escape detection if only one knife orientation is used.
References 1. D. Malacara, Optical Shop Testing, John Wiley & Sons, New York, pp. 231–253, 1978. 2. Private communication with W. M. Bloomquist, 1995.
CLEANING OPTICS CAUTION Dirty optics should be cleaned only after great deliberation and with great caution, because 1. Most surfaces, and all fingers, have very fine abrasive dirt on them that will scratch an optical surface. 2. A few big areas of dirtiness are less harmful (scatter less light) than the myriad long scratches left behind after removing the hunks. 3. Small particles can adhere very strongly (in proportion to their mass) and cannot be blown or washed off easily. 4. Washing mounted optics just moves the dirt into mounting crevices where it will stay out of reach, waiting for a chance to migrate back to where it is harmful.
Discussion
Sometimes it is necessary to clean optics, especially when the contaminant is causing excessive scatter or if a laser will burn it into the coating. The longer the wavelength, the more valid the above rules become (e.g., UV systems need to be cleaner than IR systems). Additionally, optics near a focal plane need to be cleaner than optics located at the aperture. It is often surprising how well optical systems work when they are dirty. There are legends about tactical FLIR systems working fine on a mission. When the systems were inspected afterward, the crews were surprised to find the windows splattered with mud and dead bugs. Additionally, many older telescopes (e.g., Palomar's 200-in primary) have numerous surface nicks, cracks, and gores, and yet the optics seem to work fine (once the flaws are painted black). At least one space optic has been observed to be marked with the fingerprint of one of the last technicians to work on it. Most observatories wait until a few years' worth of dust accumulates on their primary mirrors before washing them. Again, optics near a focal plane (e.g., reticles, field stops, and field lenses) must be kept cleaner. This is because a particle or defect on a surface near the focal plane, projected back to the front aperture, could be a large portion of the collecting aperture. There are several reasons for this apparent contradiction. First is that the human eye is especially good at seeing imperfections (dirt, pits, and so forth) on smooth surfaces. Second is that the dirt usually does not amount to a large fraction of the surface area, so the
diffraction, MTF, and transmission losses are surprisingly small. Third is that these particles are far out of focus. Often, the most important effect of the dirt is scatter. Optics should be stored in containers in a laminar-flow bench. When in use, hanging upside-down helps. When you do clean, be very careful of the water and cloth that you use. Soaps often have some sandy grit in them to add friction for easier dirt removal. Additionally, alcohol-based perfumes are frequently added to cleaning products and may remove optical coatings.
Reference 1. Private communications with W. M. Bloomquist, 1995.
COLLIMATOR MARGIN It is good design practice to make the diameter of the collimating mirror in a test setup at least 10 to 20 percent greater than that of the optics to be tested and to make the focal length at least 5 to 10 times that of the element under test.
Discussion
This rule should be adhered to whenever you consider using a collimator, or you will spend many hours hunched over an optical table wondering why it isn't working right. It should be considered whenever performing test system design, determining collimator specifications, collecting test fixtures, and so on. Generally, it is wise to have the test apparatus larger and more accurate than the item to be tested. The collimator's useful exit diameter should be significantly larger than the diameter of the element under test so that placement of the lens is not critical (10 to 20 percent minimum). This also allows for some slop in pointing, assures that the optics under test will be completely filled with collimator light, and reduces the deleterious contributions from off-axis sources. Moreover, it is good design practice to make the focal length of the collimating mirror ten times the focal length of the lens under test. Under some highly controlled conditions, accurate measurements can be taken with a collimator that is only slightly larger than the entrance optics. In a really tough situation, the collimator can be undersized. If so, much data must be taken, and the information can be stitched together. However, this requires consideration of schedule and budgets. Also, because off-axis collimators must turn the beam, they need to be even larger than those used in on-axis testing.
Reference 1. Private communications with Max Amon, 1995.
DETECTION OF FLATNESS BY THE EYE The naked eye can detect a lack of flatness having a radius of curvature up to about 10,000 times the length of the surface being viewed.
Discussion Johnson1 states, “The test of a flat surface by oblique reflection is so sensitive that even the naked eye will detect quite a low degree of sphericity, if near grazing incidence is employed such that light from the entire surface enters the eye.”
The surface must therefore span at least several times the diameter of the eye's pupil (e.g., some multiple of about 5 mm, which is the typical size of the pupil in room light). If the surface is not flat, then an image of a distant object will appear fuzzy as a result of the astigmatism introduced by the surface.
Reference 1. B. K. Johnson, Optics and Optical Instruments, Dover Publications, New York, pp. 196–197, 1960.
DIAMOND TURNING CROSSFEED SPEED
When diamond turning an element, the crossfeed rate should be varied as 1/r to maintain a constant removal rate, where r is the radial distance from the center of the element.
Discussion Diamond turning leaves marks on the surface of the optic. For example, the Hubble telescope’s primary mirror has groove depths of about 20 nm. To reduce the groove depth and enhance the consistency of finish (including scratch and dig blemishes as discussed in another rule), the removal rate of bulk material should be constant. Unfortunately, when grinding, the tool will experience different speeds as it traverses the element—high speeds at the edge and lower speeds at the center. The velocity should be adjusted to compensate for this. As stated in Reference 1, When constant spindle speeds and crossfeed rates are used to contour grind a rotationally symmetric surface, the material removal rate changes dramatically as the tool moves from the edge to the center of the part, causing both surface form and finish variations across the part . . . . In contour grinding, if a constant tool crossfeed rate is maintained across the part surface, a decrease in volumetric removal per unit time occurs as the tool is moved from the edge to the center of the part. It is this decrease in volumetric removal that usually produces a distinct v-shaped removal error at the center of the workpiece. This is the result of the increased loads (and hence larger tool deflections) near the workpiece edges. Adjusting the crossfeed speed as a function of the radial position has been demonstrated to maintain a constant volumetric removal rate and mitigate the central v-shaped deformation. For constant crossfeed speed Vc and depth of cut dc, the volumetric removal rate dV/dt increases linearly with radial distance r from the center of the part as:
dV/dt = 2πrVcdc

Therefore, to maintain a constant removal rate, the crossfeed must be varied as 1/r.1
The crossfeed speed can be either increased or reduced. Increasing it as the tool approaches the center implies an infinite speed at the center itself; in practice, the speed increase reaches a maximum and still results in some unfortunate effects. Starting the grinding at the center with the maximum velocity and reducing the speed as the tool moves outward solves these problems but results in a longer processing time.
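A Python sketch (ours, with made-up numbers) of the compensation described in the quotation: the crossfeed speed is scaled as 1/r from a reference radius so that 2πrVcdc stays constant, and is capped to avoid the singularity at the part center.

def crossfeed_speed(r, r_ref, v_ref, v_max):
    """Crossfeed speed at radius r that keeps dV/dt = 2*pi*r*Vc*dc constant.

    The constant removal rate is set by (r_ref, v_ref); the speed is capped
    at v_max to avoid the 1/r singularity at the part center.
    """
    if r <= 0.0:
        return v_max
    return min(v_max, v_ref * r_ref / r)

# Example: rate fixed at the 50-mm edge with 2 mm/s crossfeed, 10 mm/s cap
for radius_mm in (50.0, 25.0, 10.0, 5.0, 1.0):
    print(radius_mm, crossfeed_speed(radius_mm, 50.0, 2.0, 10.0))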
References 1. S. Gracewski, P. Funkenbush, and D. Rollins, “Process Parameter Optimization during Contour Grinding,” Convergence, 10(5), pp. 1–3, September-October 2002. 2. P. Funkenbush, et al., “Use of a Non-dimensional Parameter to Reduce Errors in Grinding of Axisymmetric Aspheric Surfaces.” International Journal of the Japan Society for Precision Engineering, Vol. 33, pp. 337–339, 1999.
EFFECT OF SURFACE IRREGULARITY ON THE WAVEFRONT

OPD = 0.5(n′ – n)(number of fringes)

where OPD = optical path difference
(n′ – n) = change in index of refraction across the surface
number of fringes = height of the irregularity or surface bump in fringes (there are two fringes to a wavelength, one dark and one light)
Discussion
The number of fringes is a common term referring to the height of a bump or depression in the interferogram as expressed in the deviation across fringes. Smith states that "in most cases, irregularity takes the form of an astigmatic or toric surface, and a compromise focus usually reduces its effect by a factor of 2."1 The above equation relates to surface irregularities. The difference in surface radius corresponding to N fringes of departure from a test plate is given by

∆R = Nλ(2R/d)²

where R = nominal radius
λ = test wavelength
d = diameter over which the N fringes are observed
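A Python sketch (ours) of the test-plate relation above; the example numbers are illustrative only:

def radius_mismatch(n_fringes, wavelength, nominal_radius, aperture_diameter):
    """Difference in surface radius corresponding to N fringes of departure.

    delta_R = N * lambda * (2R / d)^2; keep wavelength, R, and d in the same
    length units (here, millimeters).
    """
    return n_fringes * wavelength * (2.0 * nominal_radius / aperture_diameter) ** 2

# Example: 3 fringes at 0.6328e-3 mm over a 50-mm aperture, R = 500 mm
print(radius_mismatch(3, 0.6328e-3, 500.0, 50.0))   # ~0.76 mm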
Reference 1. W. Smith, Modern Lens Design, McGraw-Hill, New York, pp. 432–433, 1992.
FRINGE MOVEMENT When testing an optic on a reference flat, 1. If you gently press (e.g., with a pencil eraser) on the edge of the upper optic and the fringes move toward the point of pressure, then the upper surface is convex. Conversely, it is concave if the fringes move away from this point. This effect is the result of the differences between the two surfaces—the flat and the test element. 2. Press near the center of the fringe system on the top optic and, if the surface is convex, the center of the fringe will not change, but the diameter will increase. 3. If the source is white light and pressure is applied to a convex center, the first fringe will be dark and the first light fringe will be white. The next fringe will be tinged bluish on the inside and reddish on the outside. A concave surface will have a dark outer fringe, and the color tingeing will be reversed. 4. When fringes are viewed obliquely from a convex optic, the fringes appear to move away from the center as the eye is moved from normal to oblique. The reverse occurs for a concave surface.
Discussion These relationships assume that the test optic is referenced to a standard flat and that the measurements are made in air, with the air gap less than 6λ (the surface under test sits well
with the test plate). They are based on analysis of the optical path difference caused by a varying air thickness between the optic and test flat. For instance, if you apply pressure near the center of a concave optic, the air is forced out, leaving a smaller optical path difference.
References 1. D. Malacara, Optical Shop Testing, John Wiley & Sons, New York, pp. 8–11, 1978.
MATERIAL REMOVAL RATE
References 1 and 2 state that, for loose abrasive grinding (lapping), the surface microroughness induced by the material removal process scales (varies) with 1/H^(1/2), and the material removal rate increases with increasing

E/(KcH²)

where E = Young's modulus
H = hardness (in the same units as Young's modulus)
Kc = fracture toughness
Discussion
This rule illustrates that hardness is not just a theoretical issue with optical materials. We also make the distinction between slurry grinding (loose abrasive grinding) and deterministic microgrinding, wherein the abrasive is permanently attached to a tool that is fed into the material at a constant rate. Reference 1 also provides a microroughness estimate for deterministic microgrinding. Surface microroughness increases with increasing ductility index (Kc/H)² (a complementary concept to the brittleness index sometimes encountered). Some work shows that the ductility index has the units of nanometers. The following additional important facts come from Refs. 1 through 4:
1. E/(KcH²) is an approximation for E^(7/6)/(KcH^(23/12)), a relationship developed empirically.
2. For both loose abrasive grinding (lapping) and deterministic microgrinding, the subsurface damage increases linearly with increasing surface microroughness.
3. When grinding typical optical glasses using the same abrasive sizes, deterministic microgrinding produces 3 to 10 times lower surface microroughness and subsurface damage than loose abrasive grinding (lapping).
Table 15.1 shows some typical values of E, H, and Kc. Typical uncertainty for the hardness value is ~5 percent and for fracture toughness is ~10 percent.

TABLE 15.1 Typical Values

Material        E (GPa)    H (GPa) (at 200-g force)    Kc (MPa m^1/2)
Fused silica    73         8.5                         0.75
BK7             81         7.2                         0.82
It is also important to understand the hardness scales that are used in these types of calculations. Hardness is a well known physical parameter, but many different methods have
been derived for the measurement and classification of materials on a hardness scale. The values are derived from the Knoop scale. Others that you might find in the literature include Moh, Vickers, Rockwell, and Brinell. Knoop values are found by using a pyramidal diamond point that is pressed into the material in question with a known force. The indentation made by the point is then measured, and the Knoop number is calculated from this measurement. The test has been designed for use on a surface that has not been work-hardened in the lattice direction in which the hardness value is being measured. Even the Knoop number varies slightly with the indenter load as well as with the temperature. A material that is soft (e.g., potassium bromide) might have a Knoop number of 4, whereas a hard material such as sapphire has a Knoop number of 2000. The Knoop number for diamond is 7000. Note that, even within the optics community, there is not a consensus to use Knoop numbers. The values on the Moh scale are arrived at by measuring the relative empirical hardness of selected common materials by observing which materials are able to scratch other materials. The Moh scale, which is not linear, is limited by the softest material, talc (Moh = 1), and the hardest material, diamond (Moh = 10). This scale is frequently used by geologists and mineralogists. The Vickers scale is determined by pressing a pyramidal indenter into the material in question and dividing the indenter load (in kilograms) by the pyramidal area of the indenter (in square millimeters). Rockwell and Brinell hardness are not often quoted. The Rockwell figures for materials are relative to a specific measuring instrument, and the Brinell hardness is analogous to the Vickers scale except that a spherical indenter is used.
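Using the values in Table 15.1, the removal-rate indicator E/(KcH²) is easy to compare between glasses. The Python sketch below is ours; note the mixed units (GPa for E and H, MPa·m^1/2 for Kc), so only relative comparisons between the two rows are meaningful here.

materials = {
    # name: (E in GPa, H in GPa at 200-g force, Kc in MPa*m^0.5), from Table 15.1
    "fused silica": (73.0, 8.5, 0.75),
    "BK7": (81.0, 7.2, 0.82),
}

def removal_figure_of_merit(E, H, Kc):
    """Relative lapping removal-rate indicator E / (Kc * H^2)."""
    return E / (Kc * H**2)

for name, (E, H, Kc) in materials.items():
    print(name, round(removal_figure_of_merit(E, H, Kc), 2))
# BK7 (~1.91) grinds somewhat faster than fused silica (~1.35) by this measure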
References 1. http://www.opticsexcellence.org/InfoAboutCom/InformationBrief/grindingbrief.htm, 2003. 2. J. Lambropoulos, S. Xu. and T. Fang, “Loose Abrasive Lapping Hardness of Optical Glasses and Its Interpretation,” Applied Optics, 36(7), pp. 1501–1516, March 1, 1997. 3. M. Buijs and K. Korpel-van Houten, “Three-Body Abrasion of Brittle Materials as Studied by Lapping,” Wear, Vol. 166, pp. 237–245, 1993. 4. M. Buijs and K. Korpel-Van Houten, “A Model for Lapping of Glass,” Journal of Material Science, Vol. 28, pp. 3014–3020, 1993. 5. http://www.crystran.co.uk/optics.htm, 2003.
OVERSIZING AN OPTICAL ELEMENT FOR PRODUCIBILITY The radius of a lens or mirror should be oversized by 1 to 2 mm to ease the grinding that will result in a good figure and coating in the usable area.
Discussion Like it or not, an optical element must be handled during manufacture. Often, the least expensive way to handle and mount the element is by the edges. This implies some small but finite region to clamp, support, and secure the element. Additionally, every optical element must eventually be mounted. Allowing a millimeter or two of radial oversize for glue or encroachment of mounting fixtures eases the construction of the entire optical assembly. There are even more reasons to slightly oversize the optics. A small chamfer is needed, or edge chipping is inevitable. This requires a little extra space. Additionally, rays close to the edge are also close to becoming “stray light” by scattering from the cylindrical edge of
the lens, so it is wise to mechanically block off the edges in any event. Most modern diamond turning machines and coating equipment can ensure proper specification to within a millimeter or two of the edges. However, the figure is usually poor in this region. This rule assumes that the optical piece can be mechanically supported by the extra 1 or 2 mm. Large heavy optics will require a larger mounting area. This rule is useful when determining a specification for optics that can really be made. It should also be considered when doing mechanical layouts (e.g., that cold filter in the dewar is actually a little larger than the useful diameter). It is also wise to include a chamfer (generally a cut of a 45° angle on the edge of each polished surface) to avoid chipping and cracking during handling, coating, and assembly.
PITCH HARDNESS
If, pressing hard, you can just make an indentation in the pitch with your thumbnail, the pitch probably has the correct hardness.
Discussion Even in these days of automatic diamond turning machines, pitch is frequently used to mount optical elements. In earlier days of optical design, the optical technician would judge the hardness of the pitch by forcing his thumbnail into it. Should an impression just barely be made, he would assume it to be the correct consistency. Pitch is also used in the still-common lapidary grinding process. The hardness of the pitch laps employed in the polishing process is a critical concern. If the pitch is too soft, then the lap will rapidly deform (go out of shape) because of the flow of the pitch. Conversely, if the pitch is too hard, the glass surface being polished will become scratched because any small particles falling on the pitch will not be absorbed in the bulk of the pitch before damage to the element occurs.
References 1. B. K. Johnson, Optics and Optical Instruments, Dover Publications, New York, p. 208, 1960.
STICKY NOTES TO REPLACE COMPUTER PUNCH CARDS FOR ALIGNMENT For adjusting optical alignment, Hobbs1 suggests using "sticky notes," which seem to be close to 0.1 mm thick. Alternatively, shards from aluminum pop cans (0.07 to 0.1 mm thick) may be used with less precision.
Discussion
It seems like optical mounts and tables can never get the optical element to the right height or tilt required by the lab rat. There is an existing and growing need for a cheap, solid object 0.1 to 0.5 mm thick that can be inserted to adjust the height and tilt in such increments. In the olden days, we inserted IBM computer punch cards to raise, lower, or tilt optical components (the authors have no idea what was used before punch cards). Well, punch cards are not available anymore. One of the authors (Miller) remembers one optical engineer in the early 1990s who, in a panic, patronized garage sales, corporate going-out-of-
business sales, and government surplus sales in an effort to buy enough punch cards to complete his career. He proudly succeeded. As an interesting side note, punch cards were invented to quicken the census calculations, as it was feared that the 1890 census would take more than ten years to tabulate. "Hollerith invented and used a punched card device to help analyze the 1890 U.S. census data. Hollerith's great breakthrough was his use of electricity to read, count, and sort punched cards whose holes represented data gathered by the census takers."2 Hobbs1 suggests the above replacement materials. He notes that such sticky notes (especially Post-It® notes) are surprisingly consistent at 0.1 mm. The glue is negligible in thickness, several can be easily combined and then easily removed, and one can use a paper punch to make mounting holes. Hobbs states that there is "no other shim material that makes it so easy to adjust beam positions in 100 micron increments." He also notes that aluminum pop can shards tend to buckle, so they are not very good for stacking and not as consistent in thickness. Incidentally, the glue used in sticky notes is interesting. It is strong enough to hold the note but leaves little residue when removed and can be reused many times. It gets these properties from its microscopic structure. According to the 3M web site,3 "It was an adhesive that formed itself into tiny spheres with a diameter of a paper fiber. The spheres would not dissolve, could not be melted, and were very sticky individually. But because they made only intermittent contact, they did not stick very strongly when coated onto tape backings."
References 1. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, and SPIE Press, Bellingham, WA, p. 381, 2000. 2. www.About.com, 2003. 3. www.3m.com, 2003.
PRESTON'S LAW
The volume of material removed during loose abrasive microgrinding can be related to several significant parameters by Preston's law.1

(1/A)(∆V/∆t) = ∆h/∆t = Cp p Vrel

where A = nominal component (tool contact) area
∆V = volume of material removed
∆t = time in which ∆V was removed
∆h = corresponding height reduction
Vrel = relative speed of the optical component and the tool
Cp = Preston's coefficient, in the units of volume removed per work unit (cubic meters per joule)
p = nominal pressure
Discussion In the 1920s, Preston related the removal of material to the tool pressure, time, and a coefficient that includes the optical material’s hardness, abrasive properties, and many process parameters. Reference 2 states, “The effects of any coolant used, abrasive size, backing plate, and all material properties are absorbed within Preston’s coefficient, which is not a
material property but a process parameter." Preston's coefficient (using an Al₂O₃ abrasive and typical pressures and grinding speeds) is generally between 5 × 10⁻¹⁰ and 1 × 10⁻¹¹ m³/J for most optical glasses (e.g., for BK-7, it is 8.8 × 10⁻¹¹). Reference 3 points out that removing material during lapping by fracturing the optical surface is ten times more effective (in terms of specific energy) than removing material by means of ploughing and plastic deformation during lapping. Thus, polishing notwithstanding, for efficient and quick grinding, the lapping process should be set to levels and conditions that remove material by microfracturing the material (as opposed to conditions that induce ploughing and plastic deformation). This rule can be used to scale and estimate the grinding time (and thus cost) for various volumes of removal for a given material and process. This rule applies to lapping for grinding but not diamond turning or fine polishing of the optic.
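Preston's law scales easily in a spreadsheet or a few lines of code. The Python sketch below is our own, and the pressure and speed are illustrative numbers rather than recommended process settings:

def lapping_removal_rate(preston_coeff, pressure_pa, rel_speed_m_s):
    """Height removal rate dh/dt = Cp * p * v_rel (meters per second)."""
    return preston_coeff * pressure_pa * rel_speed_m_s

# Illustrative numbers only: BK-7 (Cp ~ 8.8e-11 m^3/J), 10 kPa, 0.5 m/s
rate = lapping_removal_rate(8.8e-11, 1.0e4, 0.5)
print(rate * 60.0e6, "um removed per minute")           # ~26 um/min
print(50.0e-6 / rate / 60.0, "minutes to remove 50 um")  # ~1.9 min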
References 1. F. Preston, “The Theory and Design of Plate Glass Polishing Machines,” Journal of the Society of Glass Technology, Vol. 11, pp. 214–256, 1927. 2. J. Lambropoulos, S. Xu, and T. Fang, “Loose Abrasive Lapping Hardness of Optical Glasses and Its Interpretation,” Applied Optics, 36(7), pp. 1501–1516, March 1, 1997. 3. M. Buijs and K. Korpel-van Houten, “Three-Body Abrasion of Brittle Materials as Studied by Lapping,” Wear, Vol. 166, pp. 237–245, 1993. 4. M. Buijs and K. Korpel-Van Houten, “A Model for Lapping of Glass,” Journal of Material Science, Vol. 28, pp. 3014–3020, 1993.
PROPERTIES OF VISIBLE GLASS The types of optical glasses available in the visible wavelengths are limited in their refractive index to between 1.4 and about 2, and dispersion values (v) between about 20 and 85.
Discussion
The above is based on empirical analysis of available glass and applies to visible-wavelength glass optics only. For example, Ge has an index of 4 and is commonly used for infrared optics. Crown glasses (a type of alkali-lime silicate optical glass) tend to have low indices and low dispersions, whereas flint glasses (crystal glasses with dopants) have higher indices and dispersions. Dispersion is the change in the index of refraction as a function of wavelength and is an important consideration for large-spectral-bandwidth optical systems. For historical reasons, with visible glasses, the dispersion is often quoted as a unitless ratio called the Abbe number (v). The numerator is the index of refraction of the glass at a wavelength of the D line of sodium (at 0.589 µm) minus 1 to accommodate the index of refraction of the atmosphere. This is then divided by the change in the index from the Balmer alpha line of hydrogen (0.6563 µm) to that at the Balmer beta line (at 0.4861 µm). The following equation describes the Abbe number (v):

v = (nd – 1)/(nβ – nα)

where nd = index of refraction at the sodium D line
nβ = index of refraction at the Balmer beta line of hydrogen
nα = index of refraction at the Balmer alpha line of hydrogen
This ratio was conceived by Ernst Abbe, who was a professor at the University of Jena and one of the early owners of the Carl Zeiss Optics Company. This rule is useful for quick and crude estimates of the available index of refraction and dispersion. In another chapter, we present a discussion of Cauchy’s equation, which deals with numerical modeling of dispersion.
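A Python sketch (ours) of the Abbe number calculation; the indices used are typical catalog values for a BK7-like crown glass and are given only as an illustration:

def abbe_number(n_d, n_beta, n_alpha):
    """v = (n_d - 1) / (n_beta - n_alpha).

    n_d: index at the sodium D line (0.589 um)
    n_beta: index at the hydrogen Balmer beta line (0.4861 um)
    n_alpha: index at the hydrogen Balmer alpha line (0.6563 um)
    """
    return (n_d - 1.0) / (n_beta - n_alpha)

# Typical BK7-like crown glass indices (illustrative values)
print(abbe_number(1.5168, 1.5224, 1.5143))   # ~64, a low-dispersion crown glass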
SCRATCH AND DIG Commercial-grade IR optics usually have a scratch-and-dig specification of about 80/50 to 80/60, which is pretty poor. The specification is usually 60/40 for better-quality visual optics; 40/20 for low-power active applications, and 10/5 for high-power lasers.
Discussion
When manufacturing an optical surface, various sizes of grit are used. Unfortunately, it is unavoidable to have grit of one size contaminate the polishing with grit of a smaller size. This leads to "scratches" and "digs" on the optical surface. When these scratches are small enough and few enough, no degradation in spot size or MTF is noticeable. It is therefore wise (to keep costs down) to specify the scratch-and-dig specification at a level where the errors are slightly less than expected from diffraction and aberrations. The first number of the above is the "scratch," and the second is the "dig." These refer to two graded sets of surface quality standards drawing on military standard MIL-O-13830 and MIL-C-48497, 45208, 45662, and so on. The units of the scratch-and-dig specification are normally excluded for some reason. A scratch is a marking along the polished surface. It is defined in MIL-O-13830A1 as follows: "When a maximum size scratch is present, the sum of the products of the scratch numbers times the ratio of their length to the diameter of the element or appropriate zone shall not exceed one-half the maximum scratch number." Generally, the scratch is measured in units of ten thousandths (1/10,000) of a millimeter; thus, an "80" is a scratch 8 µm in width. However, the definition of scratch is very subjective, and the units tend to have little meaning when compared from one manufacturer to another and are usually rated using visual methods by subjective humans. Additionally, a scratch is any blemish on the optical surface. Scratch types are identified as the following:2
■ Block reek: chain-like scratch produced in polishing
■ Runner cut: curved scratch caused by grinding
■ Sleek: hairline scratch
■ Crush or rub: surface scratch or a series of small scratches generally caused by mishandling
For digs, the units are different. They are expressed in units of 1/100 mm. A dig is defined as a rough spot, pit, or hole in the surface, and the number represents the diameter. Thus, a rating of "60" means the dig is a whopping 0.6 mm diameter, and a "10" means 0.1 mm. Irregularly shaped digs are calculated as (length × width)/2. Usually, digs of less than 2.5 µm are ignored. This rule provides the current conventional wisdom on the level needed. It is based on empirical observations and simplification of scatter theory as outlined in the U.S. military specifications. Some high-resolution, low-scatter optics will require more stringent specifications. Conversely, some low-cost, high-volume production applications may require less stringent specifications. Alternatively, surfaces near images (e.g., reticles) require a finer scratch-and-dig specification.
The above is valuable for understanding the scratch-and-dig specs that are appropriate for various applications. It is also useful for specifying the scratch-and-dig spec and to get a feel for what to expect from a vendor of a given quality.
References
1. U.S. Military Specification MIL-O-13830A, 1994.
2. www.davidsonoptronics.com, 2003.
3. Private communications with Tom Roberts, 1995.
4. J. Miller, Principles of Infrared Technology, Kluwer, New York, p. 64, 1994.
5. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, and SPIE Press, Bellingham, WA, pp. 447–449, 2000.
6. www.Abrisa.com, 2003.
7. http://www.crystran.co.uk/optics.htm, 2003.
SURFACE TILT IS TYPICALLY THE WORST ERROR Surface tilt does more damage to an image than any other manufacturing error.
Discussion
Surface tilt, manifesting itself as element wedge, causes image degradation more frequently than any other common manufacturing error. Wedge is normally removed during the centering operation of manufacturing. If the tolerance is not known from a detailed tolerancing program, it should be kept to a minimum; Kingslake suggests keeping it to less than 1 arcmin. Similarly, element decenter is often the most important tolerance in a lens assembly and is the result of tolerance buildup between the element diameter and the bore of the lens housing.
References 1. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, and SPIE Press, Bellingham, WA, p. 381, 2000. 2. R. Kingslake, Lens Design Fundamentals, Academic Press, Orlando, FL, p. 192, 1978.
Chapter 16
Systems
The ultimate goal of most EO research and development is to create systems that generate quality information, often (but not always) in the form of images with high information content. Systems represent the integration of myriad optical, structural, electronic, and mechanical systems into a group of things that work together to execute some function. An amazing and fun challenge presented to managers and project engineers is to "get it all right" across a multitude of disciplines. As we see in the set of rules presented in this chapter, some key and underlying factors are reliable aids to the system design analysis and optimization process. Although optical systems have been developed since the sixteenth century, EO systems experienced little development before WWII. This is the result of immature component technologies and the fact that commercial and scientific instruments seemed to concentrate on film technology. Film technology was not well suited for broadcast television, and visible electro-optical systems were always pushed by the television industry. (Please review the introduction to Chap. 18, "Visible and Television Sensors," for a history of television cameras and the introduction to Chap. 7, "Displays," for a brief discussion of the standards.) In fact, it was not until the Vietnam War that electro-optic missile seekers and laser-guided bombs first proved their utility. Later, the Soviet-Afghan war again underscored the usefulness of night vision and low-cost electro-optic seekers on low-cost missiles. By the time of Desert Storm, warfare was being largely fought and won at night, placing the priority for electro-optics as high as more traditional technologies such as radar and communications. Even early-warning satellites were used to detect incoming SCUD missiles. Operation Enduring Freedom in Afghanistan tied disjoint platforms performing remote electronic sensing together for the first time in warfare. Smart munitions and cruise missiles relied on electro-optic input. In 2002, a remotely controlled unmanned combat aerial vehicle (UCAV) successfully fired missiles at vehicles containing al Qaeda murderers, using electro-optical sensors to relay images to the control center and electro-optical sensors on the missiles. These concepts were expanded, and network-centric sensing, multiplatform battle management, and remote targeting were successfully used in Iraqi Freedom in 2003. The worldwide concern for protecting domestic resources and people against terror is looking heavily toward electro-optical systems to deal with border intrusion, face recognition, nuclear, biological and chemical identification, concealed weapon detection, physical security, perimeter security, and so forth. The military and security list goes on and on.
While all these wars were being waged, interplanetary space probes, orbital spacecraft, and nuclear testing further augmented the development of EO systems. Today, thanks to television, wars, and space probes, the tables have turned on film, and digitized EO sensors are always the system of choice for the professional and now the consumer. The advent of high-resolution CCDs and CMOS imagers has now enabled EO systems to dominate still photography, with traditional film being relegated to art. The movie industry is rapidly moving from "filming" a movie to digitally taping and even composing the entire entertainment feature from computer graphics. Lastly, modern camcorders are digital EO systems. The system designer is challenged on many fronts. The designer's role is to evaluate the needs of a customer and derive the engineering requirements that are a satisfactory balance of the following four key topics (which are related by a flippant rule):
1. Performance
2. Risk
3. Cost
4. Schedule
The design process usually starts with a detailed assessment of the performance requirements, conversion of them to concepts, and the eventual elimination of the weaker concepts. Throughout the process, consideration must be given to these four key criteria. Among other duties, the system engineer has the responsibility for explaining the concept in enough detail that the detailed design process can be undertaken. General design rules, often in the form of rules of thumb, are used to judge the effectiveness of a concept before the design process can begin. The system engineer needs the skills to communicate with all of the members of the design team and must be able to facilitate communication between the team members and know when communication is urgent. In particular fields, it can be very important to develop rules of thumb that all designers can understand. That way, the specialists in various fields can anticipate, with reasonable accuracy, the performance characteristics that will affect other design areas. For example, the controls designer should have an easy way to estimate the mass of the motors used to move some part of the system that is controlled by his or her part of the design. That way, even though he or she is not an expert on actuators and motors, it is possible to know whether the design meets the mass budget that has been allocated by the systems engineer. Clearly, this book does not address all of the rules of thumb that might come up in designs of electro-optical systems, because they are so widely varied. However, the types of rules presented here can act as a guide for the rule-development process and the eventual creation of general guidelines upon which all designers can rely. The interested reader can find any number of texts that provide details on how various EO systems function. Of course, the reader who is interested in a specific topic will have to resort to books and technical journals that are dedicated to systems that apply in those cases. Often, EO systems are described in the journals Optical Engineering and OE Reports, and in SPIE and MSS conference proceedings. Additionally, the journal Applied Optics presents a description of EO systems in each issue, at a fairly brief and sophisticated level. Anthony Smart1 wrote a short but great paper giving several common-sense checklists for consideration in the development of an optical system.
References 1. A. Smart, “Folk Wisdom in Optical Design,” Applied Optics (suppl.), December 1, 1994. 2. J. Miller, Principles of Infrared Technology, Kluwer, New York, pp. 3–51, 1993. 3. L. West and T. Segerstorm, “Commercial Applications in Aerial Thermography: Powerline Inspection, Research and Environmental Studies,” Proc. SPIE, Vol. 4020, Thermosense XXI, pp. 382–386, 2000. 4. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, and SPIE Press, Bellingham, WA, pp. 354-377, 2000.
BAFFLE ATTENUATION Sensors need baffles to work properly. A standard cylindrical sunshade can have an attenuation (or stray light rejection) factor of 105. Two-stage baffles usually have an attenuation factor of 108 or higher.
Discussion The attenuation factor is defined as the ratio of the irradiance from an “out of field” radiation noise source at the entrance aperture of the baffle to the irradiance produced by the scattered radiation at the exit aperture of the baffle. The actual attenuations from baffles and sunshades are complicated functions of the geometrical design, coatings, wavelengths, distance off-axis of the source, and so on. A scatter program or detailed ray trace considering the surface bidirectional reflectance distribution function (BRDF) should be conducted to assess performance. Several programs can calculate complicated scatter, including APART/PADE, ASAP, ZEMAX, and OPTICAD. The above assumes baffles with a length-to-diameter ratio of at least 1, with internal fins. Aspect ratio is a powerful factor in the performance of the baffle. For some specialized optics, thermal gradients across optical elements can be a concern, and baffles are also important to control the input radiation to these optical elements. See associated rules in other chapters on BRDF, Lambertian vs. specular, and emissivity. Unwanted radiation from scatter can also occur off the optics, especially if they are dirty. Thus, for low-scattering systems, it is important to keep the optics as clean as possible and within the minimum scratch/dig specification. Moreover, this requirement is even more demanding at shorter wavelengths. Ultraviolet optics, for example, have to be scrupulously clean to obtain their performance potential.
EXPECTED MODULATION TRANSFER FUNCTION For typical sensors and a good display, the diffraction and the detector’s modulation transfer function (MTF) will dominate the system’s MTF as follows: ■ Diffraction MTF is typically 0.7. ■ Detector MTF is typically 0.6 to 0.7. ■ Processor MTF is close to 1. ■ Display’s MTF is 0.6 to 0.7. Thus, the entire sensor system MTF will be in the range of 0.20 to 0.4.
Discussion The rule is based on empirical observations and current technology. It is also important to note that these values are given at fo, which is defined as 1/(2IFOV), where IFOV is the instantaneous field of view. The total MTF of a system is the product of the MTFs of all of the subsystems.
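As a quick sanity check, the cascade of nominal component MTFs can simply be multiplied. The minimal Python sketch below uses midrange values from the list above (0.65 for the detector and display); it is an illustration of the bookkeeping, not a model of any particular sensor.

```python
# Rough end-to-end MTF at fo = 1/(2 * IFOV), as the product of subsystem MTFs.
def system_mtf(diffraction=0.7, detector=0.65, processor=1.0, display=0.65):
    """Multiply subsystem MTFs to estimate the sensor system MTF at fo."""
    return diffraction * detector * processor * display

print(round(system_mtf(), 2))                            # ~0.30, within the 0.20 to 0.4 range
print(round(system_mtf(detector=0.7, display=0.7), 2))   # ~0.34 with the upper values
```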
Reference 1. G. Hopper, “Forward Looking Infrared Systems,” in Passive Electro-Optical Systems, Vol. 5, S. Campana, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 128–130, 1993.
BLIP LIMITING RULE There is little to no benefit in having a sensor more sensitive than the noise imposed by the background.
Discussion BLIP is common terminology for background limited in performance (or sometimes noted as background-limited infrared photodetector). When the noise from the background is much larger than other noise sources, the sensor is operating in the BLIP regime. A significant noise source for very low-light-level TVs, infrared sensors, and millimeter wave (MMW) imagers is the noise caused by the inconstant arrival rate of the photons from the background. This fluctuation in the arrival rate of the photons is an unavoidable feature of the radiation source. Such photon flux is characterized by Poisson statistics, in which the mean is equal to the variance. For high backgrounds (large IFOVs, bandpasses, and integration times), this is frequently the driving noise source. When BLIP limited, expending money and effort to obtain more sensitive detectors will not increase overall system sensitivity.
DAWES LIMIT OF TELESCOPE RESOLUTION In the blue part of the visible wavelength spectrum, up to the limit imposed by the atmosphere, objects are resolvable if they are separated by 4.5 arcsec divided by the diameter of the telescope (in inches), or2

θr > 4.5 arcsec/D

where θr = angular separation of the objects
D = diameter of the telescope in inches
Discussion William R. Dawes developed this for use with astronomical telescopes in the visible portion of the spectrum. However, it can be approximately applied to other applications. The basic equation approximates the Rayleigh criterion, although the Dawes limit gives a somewhat finer (better) resolution than the Rayleigh criterion in the reddish visible part of the spectrum. Because the Dawes limit does not accommodate different wavelengths, it should be used with extreme caution outside the visible part of the spectrum. In fact, some claim that it is valid only at 0.463 µm.2 The rule assumes good quality optics and alignment, good weather, and good seeing. This rule does not account for special or exotic signal processing (such as microscanning), which can increase effective resolution. For example, let’s assume we have a 15.3-cm telescope (about 6 in). According to the Dawes limit, in the visible region, this telescope can separate 4.5/6 or 0.75 arcsec, or objects with about a 3.6 µrad separation. Using the Rayleigh criterion of 1.22(λ)/D, we also would get

1.22(5 × 10⁻⁵ cm)/(15.3 cm) ≈ 4 × 10⁻⁶ rad = 4 µrad
which is close to the 3.6 as calculated by the rule. However, if we wished to use the same telescope at 10 µm, the Dawes limit would still predict about 3.6 µrad (it has no wavelength dependence), but the Rayleigh criterion would yield a more reasonable 80 µrad.
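The example above is easy to reproduce numerically. The short Python sketch below compares the Dawes limit with the Rayleigh criterion; the 6-in telescope and the 0.5- and 10-µm wavelengths are simply the values used in the text.

```python
import math

ARCSEC_TO_URAD = math.pi / 180.0 / 3600.0 * 1e6   # ~4.85 µrad per arcsec

def dawes_limit_urad(aperture_in):
    """Dawes limit (visible light): 4.5 arcsec divided by the aperture in inches."""
    return 4.5 / aperture_in * ARCSEC_TO_URAD

def rayleigh_limit_urad(wavelength_um, aperture_cm):
    """Rayleigh criterion, 1.22 * lambda / D, returned in microradians."""
    return 1.22 * (wavelength_um * 1e-6) / (aperture_cm * 1e-2) * 1e6

print(dawes_limit_urad(6.0))            # ~3.6 µrad for a 6-in (15.3-cm) telescope
print(rayleigh_limit_urad(0.5, 15.3))   # ~4 µrad at 0.5 µm
print(rayleigh_limit_urad(10.0, 15.3))  # ~80 µrad at 10 µm
```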
References 1. http://www.stkate.edu/physics/phys104/curric/telescope.html, 2003. 2. http://www.palmbeachastro.org/planets/PlanetObserving/section3.htm, 2003.
DIVIDE BY THE NUMBER OF VISITS Specifications in a data sheet are accurate to the numbers advertised divided by the number of visits that you have had to the supplier.
Discussion Do not believe what you read in marketing sheets. If you are interested, contact the vendor yourself. If you are really interested, buy one of the products and test it yourself (otherwise, you’ll be disappointed). The truth is that commercial data sheets stretch the truth. When you are on the edge of technology in this hurried world, marketing data sheets and catalog descriptions frequently are released before the product is completely designed or the mass production of it is proven. Also, product improvements and changes are not included in old data sheets and product specifications. As a result, the specifications sometimes are downright wrong. Additionally, sometimes overzealous marketeers stretch the truth to an extent that would have Gumby screaming in pain. Of course, the specification can usually be met with an increase in cost and schedule. This rule applies to figures of merit that have the property that the higher they are, the better they are (e.g., D*, optical transmission). The inverse applies when the figure of merit improves when the number is lower (e.g., NEP, cost, weight).
GENERAL IMAGE QUALITY EQUATION The general image quality equation (GIQE) predicts, from instrument design parameters, the value of the National Image Interpretability Rating Scale (NIIRS) when photointerpreters view a scene. A numerical estimate of such a response is1

NIIRS = 10.251 – a log10(GSDGM) + b log10(RERGM) – 0.656HGM – 0.344(G/SNR)

where GSDGM = geometric mean ground sample distance
RERGM = geometric mean of the normalized relative edge response (RER)
HGM = geometric mean of the height overshoot caused by the edge sharpening
G = noise gain resulting from the edge sharpening
SNR = signal-to-noise ratio
The coefficient a equals 3.32 and b equals 1.559 if RERGM ≥ 0.9; a equals 3.16 and b equals 2.817 if RERGM < 0.9.1
Discussion The prediction of performance of imaging systems demands the evolution of predictive algorithms. Among the most important are those that algorithmically capture the likely response of expert image analysts viewing a particular scene. The attempt to devise an algorithm for a human response to an image is a decades-old issue, just like quantifying the performance of an electro-optical system (e.g., automated NETD and automated target recognizers). This rule relates human interpretation performance to the NIIRS scale. A numerical scale for such prediction is the National Image Interpretability Rating Scale (NIIRS). A rule in another chapter provides additional detail about this scale. The scale is summarized in Table 16.1 (from Ref. 1). We also note a slightly different form from Ref. 2,

NIIRS = 11.81 + 3.32 log10(RERGM/GSDGM) – 1.48HGM – G/SNR
Note that NIIRS changes by 1.0 if the GSD is halved without changing SNR (such as would occur if the distance from target to camera is halved, ignoring atmospheric effects). But, if GSD is reduced by changing the integration time, SNR will suffer, and NIIRS will not improve as much as in the former case. A simple way to relate GSD and NIIRS is offered by Ref. 3:

GSD = RER/10^[(NIIRS – 11.80)/3.32]

In this case, the reference suggests that an RER around 0.53 provides good matching with the standard NIIRS criteria, as shown in Chap. 1.
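A minimal sketch of the GIQE bookkeeping is given below. The coefficient switch follows the equation above; the input values in the example are invented purely for illustration, and the common GIQE convention of GSD expressed in inches is assumed.

```python
import math

def giqe_niirs(gsd_gm, rer_gm, h_gm, g, snr):
    """Estimate NIIRS from the GIQE given above (GSD assumed in inches)."""
    a, b = (3.32, 1.559) if rer_gm >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_gm) + b * math.log10(rer_gm)
            - 0.656 * h_gm - 0.344 * g / snr)

# Halving GSD with everything else fixed raises NIIRS by 3.32*log10(2) ~ 1.0
print(giqe_niirs(gsd_gm=10.0, rer_gm=0.95, h_gm=1.0, g=1.0, snr=50.0))
print(giqe_niirs(gsd_gm=5.0, rer_gm=0.95, h_gm=1.0, g=1.0, snr=50.0))
```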
References 1. R. Fiete and T. Tantalo, “Image Quality of Increased Along-Scan Sampling for Remote Sensing Systems,” Optical Engineering, 38(5), pp. 815–820, May 1999. 2. R. Driggers et al., “Targeting and Intelligence Electro-optical Recognition Modeling: A Juxtaposition of the Probabilities of Discrimination and the General Image Quality Equation,” Optical Engineering, 37(3), pp. 789–797, March 1998. 3. R. Driggers, P. Cox, and M. Kelley, “National Imagery Interpretation Rating System and the Probabilities of Detection, Recognition, and Identification,” Optical Engineering, 36(7), pp. 1952–1959, July 1997.
GOOD FRINGE VISIBILITY Good fringe visibility from a disk source occurs when

πhθ/λo = 1

where h = distance between the two slits (or mirrors) forming the fringes
θ = angular separation between two point sources
λo = wavelength of the narrow-bandwidth source
Discussion For a narrowband source of wavelength λo and diameter D, projecting light at a distance of R, there is an area of coherence at the source, π(h/2)², over which pairs of slits will produce fringes. Viewed at the point at which fringes will form, the angular size of the disk is θ = D/R, and the transverse correlation distance is 0.32(Rλo)/D. A set of apertures separated by h (or closer) will produce fringes. This is useful for determining when a source will produce visible fringes, such as is required to measure the angular size of a disk of some object (e.g., a star). This is also useful for experiment design in a teaching setting and in the design of a Michelson stellar interferometer. Finally, take note that use of closely spaced slits might produce fringes, which may not be desirable. This rule is based on the Van Cittert–Zernike theorem and has been verified by any number of experiments. The fringe visibility is related to the degree of coherence of the optical field at each mirror of the interferometer and can be modeled as a Bessel function for circular apertures. This is valid for a narrow spectral band only, and it assumes a disk source of nearly constant intensity. Meeting the criteria above leads to a fringe visibility of 0.88.
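For a quick design check, the criterion can be inverted to give the largest slit (or mirror) spacing that still yields good fringe visibility. The sketch below assumes a 1-arcsec source observed at 0.55 µm; both values are chosen only for illustration.

```python
import math

def max_spacing_m(source_angle_rad, wavelength_m):
    """Slit or mirror spacing h giving good fringe visibility (~0.88),
    from pi * h * theta / lambda = 1."""
    return wavelength_m / (math.pi * source_angle_rad)

theta = 1.0 * math.pi / 180.0 / 3600.0   # 1 arcsec in radians
print(max_spacing_m(theta, 0.55e-6))     # ~0.036 m (3.6 cm)
```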
References 1. E. Hecht, Optics, Addison-Wesley, Reading, MA, p. 532, 1990.
LWIR DIFFRACTION LIMIT The diffraction limit of an LWIR (8- to 12-µm) telescope in milliradians is approximately the inverse of the optic diameter in inches.
Discussion This is very useful to quickly estimate the angular diffraction limit of a CO2 laser or an 8- to 12-µm LWIR imager in some meeting. The angular diffraction limit is defined by

2.44λ/D

where λ = wavelength
D = optic diameter
If D is expressed in centimeters and λ in micrometers, then the diffraction limit expressed in milliradians is equal to 0.244λ/D. Coincidentally, the conversion from inches to centimeters is 2.54, almost 2.44. Thus, if λ is 10.4 µm, then the rule holds to three decimal places. Obviously, this rule can be extrapolated to an MWIR system by including a factor of 1/2, and to a visible system by including a factor of about 1/20.
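The rule and the exact expression are easy to compare in a couple of lines of Python; the 4-in aperture below is just an example value.

```python
def diffraction_limit_mrad(wavelength_um, aperture_cm):
    """Angular diffraction limit 2.44 * lambda / D, in milliradians,
    with lambda in micrometers and D in centimeters."""
    return 0.244 * wavelength_um / aperture_cm

aperture_in = 4.0
print(diffraction_limit_mrad(10.4, aperture_in * 2.54))  # ~0.25 mrad, i.e., ~1/4, per the rule
```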
Reference 1. S. Linder, from S. Weiss, “Rules of Thumb, Shortcuts Slash Photonics Design and Development Time,” Photonics Spectra, pp. 136–139, October 1998.
OVERLAP REQUIREMENTS In a step-stare (or step-scanned) pattern, the overlap from one step to another should be 50 percent and never less than about 10 percent.
Discussion When a system employs a step-stare or scanning, it is advisable to overlap the scan or steps. The amount of this overlap is determined by the step-to-step correlation and processing. Conservatively, it is advisable to overlap each new step by 50 percent of the area of the previous step to ensure that the sensor motion does not conflict with the Nyquist criterion. This requirement provides for confident registration and adequate coverage, and it allows sufficient oversample for advanced image processing to be used. However, in an accurate system, requiring no more than stitching the scene, and with high-quality inertially registered data, this can be reduced to a few percentage points. In such a system, postprocessing will register the frames properly. Another approach is the inclusion of line-of-sight control using a fast-steering mirror pointing system that relies on inertial data for mirror control or uses a reference point in or near the target to stabilize the pointing system. Frequently, the image processor will require information from some pixels from prior scans to properly execute its algorithms. Imagine a 9 × 9 spatial filter with its values being set by the surrounding 12 × 12 box. To properly execute such an algorithm for the edge pixel of a scan or step-stare, the pixels of the previous scan must be known from either overlap or memory. This rule is based on empirical observations and the need for computer algorithms to register the images and find their positions. The requirement for overscan really depends on the accuracy of the scan-to-scan correlation.
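The resulting step size is simple to compute. The sketch below, with an illustrative field of view, shows the angular advance per step for the recommended 50 percent overlap and for the roughly 10 percent minimum.

```python
def step_size_deg(fov_deg, overlap_fraction=0.5):
    """Angular advance between successive stares for a given overlap fraction."""
    if not 0.0 <= overlap_fraction < 1.0:
        raise ValueError("overlap fraction must be in [0, 1)")
    return fov_deg * (1.0 - overlap_fraction)

print(step_size_deg(2.0, 0.5))   # 1.0 deg steps for a 2 deg field of view at 50 percent overlap
print(step_size_deg(2.0, 0.1))   # 1.8 deg steps at the ~10 percent minimum overlap
```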
PACKAGING APERTURES IN GIMBALS It is difficult to package a gimbaled aperture in a volume where the ratio of aperture diameter to package diameter exceeds about 0.65. It is difficult to package a nongimbaled aperture in a volume where the ratio of aperture diameter to package diameter is 0.80 or more.
Discussion A gimbal has motors, encoders, structure, and other nasty hardware stuff that simply must be accommodated by the design and must be external to the optical path. This leads to the gimbal being substantially larger than the clear aperture. In addition, most gimbaled EO users demand capability in several spectral regions or with a multitude of sensors (see Fig. 1.2 in Chap. 1). Each aperture for these multispectral sensors must share the same gimbal and result in even lower ratios. When attempting to package an electro-optical instrument, the ratio of the aperture to the total size tends to follow the above rule. Although there have been some systems that may violate this rule, they required enormous levels of engineering ingenuity and complexity, such as off-axis apertures feeding the optical train through the arm of a gimbal or fiber optic imaging. This, of course, translates into added cost for the system. We usually find that additional impacts occur such as the need to place nearby structures in precarious positions or limit the operational range of the sensor. This rule is a generalization, of course, because individual designs and requirements should be fully analyzed and traded. Optical systems may press the limits for these ratios, but the presence of cryogenic cooling lines, electrical cables, and other connections to the outside world unavoidably take up space.
PICK ANY TWO A developing system (e.g., at the preliminary design review) can be any two of the following: low cost, high reliability, high performance, and fast schedule (see Fig. 16.1).
FIGURE 16.1 Pick any two (well, maybe three if you are lucky): fast delivery, high performance, low cost, high reliability. This figure attempts to convey that full optimization of a system is impossible.
Discussion Often, the requirements (or attributes) of an electro-optical project compete with each other and may even be contradictory. For instance, it is usually difficult to increase performance and reliability while reducing cost. One requirement may act like nitro to another’s glycerin; just moving the combination can cause the whole thing to explode. Usually, an astute engineering/manufacturing organization can fuse at least two opposing requirements together and satisfy both simultaneously for a system of mature design. For a developmental system, they can usually promise three of the above dreams with straight faces. Managers and executives had better be smart enough to detect this situation (usually by the odor). This rule is founded on tongue-in-cheek empirical observations but illustrates an important truth in systems design: low cost, high reliability, high performance, and fast delivery are all relative. Several million dollars may be considered low in cost for a given system at a certain reliability, performance, and delivery for a space-based instrument. On the other hand, a few thousand dollars may seem costly for a physical security camera. Increase the reliability, and the cost will increase relative to its initial cost. The same is true for performance and schedule. This rule assumes comparison of the same type of cost, reliability, and so forth. For instance, an increase in reliability may cause a decrease in life cycle cost (but rarely development cost). The above does not account for disruptive changes in technology or production techniques. Said another way, it may take a revolution in technology to invalidate this rule. Finally, this assumes the design wasn’t seriously flawed such that a minor design change will affect several attributes favorably.
PROCEDURES TO REDUCE NARCISSUS EFFECTS Lloyd1 suggests the following five design procedures to reduce cold reflections (sometimes known as Narcissus effects) that cause a detector to see reflections of itself: 1. Reduce the focal plane effective radiating cold area by warm baffling. 2. Reduce lens surface reflections by using high-efficiency antireflection coatings (on both sides of the optical elements).
3. Defocus the potential cold return by designing the optical system so that no confocal surfaces are present. 4. Cant (or tilt) all flat windows. This means that rays traveling parallel to the line of sight of the detectors will be diverted out of the sensor line of sight. 5. Null out the cold reflections with advanced electronic image processing.
Discussion Often, in IR system design, reflections of cold surfaces onto the FPA cause a low-level image flaw (typically a dark region) in those areas. Lloyd offers us the above five techniques to reduce this unwanted effect. Sometimes no. 1 cannot be done. For LWIR systems, no. 2 should always be done if budget and throughput requirements allow. No. 3 should be done whenever the optical requirements allow. No. 4 is an effective way of reducing this effect. It should be implemented whenever possible. This method is used in virtually all LWIR imagers.
References 1. J. Lloyd, Thermal Imaging Systems, Plenum Press, New York, p. 281, 1975.
RELATIONSHIP BETWEEN FOCAL LENGTH AND RESOLUTION The IFOV of a system can be estimated from

IFOV = 1000(pm)/EFL

where IFOV = instantaneous field of view in microradians
pm = pixel size in micrometers
EFL = effective focal length in millimeters
Discussion This is a version of a fundamental optical equation modified for focal plane pixels. The focal length of a lens (commonly given in millimeters) determines the “scale” of angular subtense to linear dimension at the focal plane. The focal length is equal to the detector size divided by the field of view. The above equation includes a factor of 1000 to convert units of micrometers for the pixel size to units of millimeters for the focal length, giving the pixel’s field of view in microradians. In most optical designs, the units of millimeters are commonly used for focal length, and the units of micrometers are commonly used for pixel pitch. The above equation can be modified to represent the total field of view by merely replacing the pixel size with the FPA size or by multiplying the pixel size by the number of pixels along the appropriate direction.
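A minimal sketch of the calculation, including the total field of view extension mentioned above, follows; the pixel pitch, focal length, and array size are example values only.

```python
import math

def ifov_urad(pixel_pitch_um, efl_mm):
    """Instantaneous field of view in microradians: 1000 * pixel pitch (µm) / EFL (mm)."""
    return 1000.0 * pixel_pitch_um / efl_mm

def total_fov_deg(pixel_pitch_um, n_pixels, efl_mm):
    """Approximate full field of view along one axis, in degrees."""
    return math.degrees(ifov_urad(pixel_pitch_um, efl_mm) * 1e-6 * n_pixels)

print(ifov_urad(25.0, 100.0))            # 250 µrad per pixel
print(total_fov_deg(25.0, 640, 100.0))   # ~9.2 deg across a 640-pixel row
```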
SIMPLIFIED RANGE EQUATION The range Rp at which a target can be detected with probability p is approximately
Rp = √(e/0.7) ln[∆C/(kp MRC)] / [βatm + (Nxx/Dc)βsys]

where ∆C = contrast differential, which is either the difference in temperature for a FLIR (∆T) or the difference in reflected contrast for a visible sensor (∆C)
MRC = minimum resolvable contrast [minimum resolvable temperature (MRT) can be substituted for MRC if ∆T is used instead of ∆C]
βatm = average bandpass-integrated atmospheric extinction coefficient for the path length (range) involved, in 1/km units
e = length-to-width ratio of the bar pattern used to determine the MRC or MRT; if you don’t know it, assume it to be 0.7 so that the radical becomes unity
kp = normalized signal-to-noise ratio for the probability of detection p; some examples are given below
βsys = slope of the regression line fitted to the values of the MRC (or MRT) in the specifications of the particular sensor (This relates to the sensor’s resolution. If unknown, just base it on the detector’s instantaneous field of view.)
Nxx = Johnson criterion for the given probability (xx) of correctly performing an observation task
Dc = target’s critical dimension
Discussion A global range equation for electro-optical systems does not exist in a form that humans can easily comprehend. To this end, there are several official computer models (such as NVTHERM) that attempt to give an approximate target range for a given myriad of inputs. Additionally, almost every company in this field has its own model calibrated to its own equipment. The best of these homegrown models are based on theory normalized to real-world performance; the next best are based on averages of MRTs or MRCs taken from real instruments. Obviously, this is just an approximate range based on the most rudimentary values, but it can be useful in a pinch or when comparing one system to another. The authors of this book refrained from placing such a range equation in the first edition. After several suggestions, we decided to include a few such models, as they illustrate critical physical relationships between the hardware and real-world performance. The reader is cautioned that, although the basic physics of this equation is sound, any atmospheric or target statistics are questionable, and production floor engineering inputs can be derived only from statistics. Reference 1 gives the following table relating kp to the probability of detection p.

TABLE 16.1
Probability of detection, p    kp, normalized SNR for probability p
0.90                           1.5
0.5                            1.0
0.1                            0.5
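A sketch of the equation as written above is given below. All of the input numbers are invented for illustration; units follow the definitions in the text (βatm in 1/km, so the result is in kilometers), and no atmospheric or target statistics are implied.

```python
import math

def detection_range_km(delta_c, mrc, k_p, beta_atm, beta_sys, n_xx, d_c, e=0.7):
    """Simplified range estimate per the equation above.  Setting e = 0.7 makes
    the radical unity; all example values below are purely illustrative."""
    numerator = math.sqrt(e / 0.7) * math.log(delta_c / (k_p * mrc))
    denominator = beta_atm + (n_xx / d_c) * beta_sys
    return numerator / denominator

print(detection_range_km(delta_c=2.0, mrc=0.1, k_p=1.0,
                         beta_atm=0.2, beta_sys=0.5, n_xx=1.0, d_c=2.3))  # ~7 km
```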
References 1. L. Biberman, “Alternate Modeling Concepts,” in Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 11-8 to 11-13, 2000. 2. NVTHERM Users Manual, ONTAR (www.ontar.com), 2000. 3. FLIR 92 Users Manual, U.S. Army Center for Night Vision.
SYSTEM OFF-AXIS REJECTION It is difficult, but not impossible, to create sensors that can perform (and in some cases, not be damaged) when the Sun or Moon is within about 10° of their line of sight. In addition, it is difficult to make the sensor function as desired when sunlight is falling on any of the optical surfaces, even if it is not within the field of view. It is very difficult to design a system that will not be damaged when its focal plane is in direct continuous exposure to the Sun.
Discussion A review of the stray light rejection of various telescope designs shows that the point source transmittance of most telescopes is about 10⁻³ at a 10° angle from the optic axis.1 Although not true of all designs, it is still a typical value. Because the irradiance at a sensor from even a small part of the Sun or Moon is higher than that of most stars and all man-made objects (except nuclear weapons), rejections of 10⁻⁵ to 10⁻⁶ may be required. Such rejections are easily achieved only at off-axis angles of 30° or higher. Chaisson2 reports that highly sensitive astronomical telescopes, such as Hubble, cannot tolerate the Sun within 50° of the line of sight or the Moon within 15°. Unless otherwise designed, security system cameras, and even commercial camcorders, cannot be pointed at the Sun for long without damage. Even advanced cameras can have problems. On Apollo 12, the lunar surface camera was accidentally, and only momentarily, pointed at the Sun, which burned out the vidicon tube that was forming the images. Millions of disappointed Earth-bound viewers witnessed this event. A critical part of the stray-light performance of a telescope is the cleanliness of the optical surfaces. Even the smallest surface contamination will cause light from out of the field of view to be scattered in a way that today’s modern and very sensitive detectors will see. Therefore, while sensors are now more capable of sensing subtle features of the things they are pointed at, they also see undesirable sources of light. The exact optical design, baffle design, sunshade design, and radiometric system parameters will determine the true performance of the system. The guidelines in the rule provide a good first estimate of what can be expected. We do note, however, that there are some important exceptions. For example, laser warning, missile warning, and some forward-looking infrared sensors can function with the Sun in the field of view, albeit with careful design. Key to the tolerance of the Sun is the baffle and sunshade design along with its “black coatings.” Generally, these coatings should have a reflectance of less than 20 percent in the bandpass of interest and be Lambertian. However, these requirements depend on the details of the design. Some baffle and cold shield designs actually work better with specular coatings of low reflection. See the “Baffle Attenuation” rule in this chapter (p. 315) and, in other chapters, rules concerning BRDF, emissivity, and Lambertian versus specular reflectance.
References 1. W. Wolfe, “Imaging Systems,” in W. Wolfe and G. Zissis, Eds., The Infrared Handbook, ERIM, Ann Arbor, MI, pp. 19–24 and 19–25, 1978.
2. E. Chaisson, The Hubble Wars, HarperCollins, New York, p. 72, 1994. 3. www.ligo.caltech.edu/~ajw/40m_cdr/40m_AOS_DRD.ppt, 2003. 4. Y. Yakushenkov, Electro-Optical Devices, MIR Publishers, Moscow, pp. 125–127, 1983.
TEMPERATURE EQUILIBRIUM An optical system is in thermal equilibrium after a time equivalent to four to six times its thermal time constant.
Discussion When an optical system is exposed to a different temperature, thermal gradients will cause spacing and tilt misalignments and distort the image. After four thermal time constants, the remaining temperature difference is less than 2 percent; after six, it is well under 1 percent. Newton’s law of cooling shows that thermalization is an exponential process, with the deviation falling by a factor of 1/e for each time constant. Obviously, 1/e⁴ to 1/e⁶ is a very small number. This originates from an equation that states that the change in temperature is proportional to 1 – e^(–t/τ), where τ is the thermal time constant and t is the time. When t/τ is 4, the temperature deviates from equilibrium by 1.8 percent.
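The exponential settling is trivial to tabulate; a couple of lines of Python show why four to six time constants is the usual criterion.

```python
import math

def residual_fraction(elapsed_time, time_constant):
    """Fractional departure from equilibrium, exp(-t/tau), per Newton's law of cooling."""
    return math.exp(-elapsed_time / time_constant)

for n in (1, 4, 6):
    print(n, round(residual_fraction(n, 1.0), 4))   # 0.3679, 0.0183, 0.0025
```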
TYPICAL VALUES OF EO SYSTEM PARAMETERS When the details of a particular electro-optic system are unknown, one can assume the following typical values for performance calculations:

Overall transmission through the optics: 0.65
Electrical efficiency: 0.85
Scan efficiency: 0.8
Frame time: 1/60 to 1/30 sec for imaging systems
Atmospheric transmission: 0.88/km in a transparent wavelength region
Typical laser reflectivity of targets not intended to be “stealthy”: 40 percent
Temperature of objects in space: 270 K
Detector D*: 10¹¹ Jones (cm·√Hz/W)
Temperature of objects on the ground: 300 K
Optical MTF: Diffraction limited with a 1/4 wavelength defocus
Discussion Often, one needs to perform quick calculations of sensitivity or determine the range of some system attribute (e.g., how big does the aperture need to be?) when complete design information is, unfortunately, unknown. Although this can lead to dangerous design decisions, when necessary, the above can be substituted into the equations.
The above values are empirical approximations based on experience. Much of the above data were contributed by Seyrafi from his 1973 book, and they still generally apply today. These are typical values, and any given system may have drastically different values; use these only when no other information is available. Rarely will any of the above be off by more than a factor of ten. However, if more than one of these numbers is used, the actual result may be off considerably, and care must be taken to ensure that the end result is correct. On the other hand, if a large number of guesses are made, it is entirely possible that the errors will average out, and you’ll get about the right answer. This rule allows first-guess calculations when design details are unknown. This set of parameters also allows the designer to begin the design process and set up the design equations while awaiting revelation of the actual details. These guidelines usually add sufficient margin so that the hardware can actually be made to perform to expectations.
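When set up as defaults for a quick calculation, these values might be collected as in the sketch below; the dictionary keys and the 5-km example path are illustrative assumptions, not part of the original table.

```python
# Illustrative defaults based on the typical values tabulated above;
# use them only when better numbers are unavailable.
EO_DEFAULTS = {
    "optics_transmission": 0.65,
    "electrical_efficiency": 0.85,
    "scan_efficiency": 0.8,
    "frame_time_s": 1.0 / 30.0,
    "atmospheric_transmission_per_km": 0.88,
    "target_laser_reflectivity": 0.40,
    "space_object_temperature_K": 270.0,
    "ground_object_temperature_K": 300.0,
    "detector_D_star_jones": 1e11,
}

def path_transmission(range_km, per_km=EO_DEFAULTS["atmospheric_transmission_per_km"]):
    """Atmospheric transmission over a path, assuming 0.88/km in a clear band."""
    return per_km ** range_km

print(round(path_transmission(5.0), 2))   # ~0.53 over a 5-km path
```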
Reference 1. K. Seyrafi, Electro-Optical Systems Analysis, Electro-Optical Research Company, Los Angeles, CA, pp. 238 and 294, 1973.
WIND LOADING ON A STRUCTURE The wind force on a telescope (or other structure, such as a dome) can be estimated from

F = (1/2)ρv²AC

where ρ = density of air
v = wind velocity
A = area of the object projected in the wind direction
C = a factor derived from the wind direction and the specific surface features of the telescope; often referred to as the drag coefficient
Note that the dimensions of the elements of the equation need to be selected to give the force in the desired units.
Discussion It is always desirable to know the factors that can lead to pointing errors and other disturbances. For instance, wind loading on a sensor or telescope will create a number of effects, including a rocking of the structure and stimulation of bending modes. Mountain tops, where most telescopes are deployed, are places of high wind exposure. The reference suggests that an additional correction factor, Λ, should be used as well, thereby changing the equation to

F = (1/2)ρv²AΛC

where Λ = dimensionless factor derived from the aspect ratio of the object (A typical telescope will have a value of 0.6.)
In general, C will be around 0.5 (for rounded objects) to 1.0 for blunt objects. Streamlined objects can have a very low value of C. A flat plate perpendicular to the wind direction has a value of 2. From these estimates, we might guess that a cylindrical telescope dome will have a value of C no larger than about 0.5.
The authors of the reference suggest the following wind-direction-dependent values for C; for the force perpendicular (or normal) to a dome opening, use

C(β) = 0.1 sin(2β)  for β < π/4
C(β) = 0.1          for π/4 ≤ β ≤ π/2

where β = angle between the wind direction and the normal to the entrance of the dome. (For the force parallel to the dome opening normal, use 1.)
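The force estimate is a one-line calculation once the coefficients are chosen. In the sketch below, the drag coefficient, aspect factor, and piecewise dome coefficient follow the values discussed above; the sea-level air density, wind speed, and projected area are assumptions made only for the example.

```python
import math

AIR_DENSITY = 1.2   # kg/m^3, near sea level (assumed)

def wind_force_N(wind_speed_mps, area_m2, drag_coeff=0.5, aspect_factor=0.6,
                 air_density=AIR_DENSITY):
    """F = 0.5 * rho * v^2 * A * Lambda * C, in newtons (SI inputs)."""
    return 0.5 * air_density * wind_speed_mps**2 * area_m2 * aspect_factor * drag_coeff

def dome_drag_coeff(beta_rad):
    """Wind-direction-dependent C for the force normal to a dome opening."""
    return 0.1 * math.sin(2.0 * beta_rad) if beta_rad < math.pi / 4.0 else 0.1

print(round(wind_force_N(20.0, 10.0)))       # ~720 N for a 20 m/s wind on 10 m^2
print(dome_drag_coeff(math.radians(60.0)))   # 0.1 (beta past 45 deg)
```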
Reference 1. H. Jakobsson, “Modeling and Analysis of the Large Earth-Based Solar Telescope Dynamics,” Optical Engineering, 37(37), pp. 2432–2448, September 1998.
LARGEST OPTICAL ELEMENT DRIVES THE MASS OF THE TELESCOPE The mass of a telescope assembly is directly proportional to the mass of the largest optical element.
Discussion A telescope’s mass depends on its design, materials, size, required strength, and the lightweighting techniques applied. An estimation of the mass of an unknown telescope assembly can quickly be scaled based on the known mass of a similar element. Telescope assembly masses usually track the mass of the heaviest optical elements approximately linearly, as the secondary and tertiary mirrors are usually of much smaller size and mass. Usually, this is valid for telescopes from a few centimeters to a meter or two in aperture, but it does not include exotic systems. If used as a comparison, optical element masses should be within a factor of 3 of each other, and telescopes must be of the same type and material (e.g., two on-axis, reflective Cassegrains made of aluminum). In addition, telescopes should have the same number of elements and have similar environmental, stability, stiffness, and slewing specifications to apply this rule. Finally, the off-axis stray-light rejection specifications should be comparable. The rule is useful (with the other rules on optics and telescope mass) for system tradeoffs when comparing the mass impact of changing the optics size and estimating whether a given telescope requires technology advancement to meet the mass goals. One should be sure to use the heaviest optic element. Usually, this is the largest, but it may not be so in refractive designs with thick lenses. Usually, the largest and heaviest element is the first objective or the primary mirror. However, in off-axis reflective systems and some wide field of view designs, the largest element is usually not the primary or objective. If the telescope is a Schmidt, the primary is larger than the clear aperture or the correcting plate.
Chapter 17
Target Phenomenology
Generally, the properties of targets and their signatures, such as are summarized in this chapter, fall into the domain of the military designer. However, increasingly, many of these rules apply to nonmilitary segments of the EO market such as security cameras, paramilitary organizations, search and rescue, homeland defense systems, environmental monitoring, general surveillance, remote diagnostics, remote sensing, and industrial security. This chapter provides a brief look into the short-cut characterizations that were largely developed to assess the signatures of various potential targets for typical EO systems. Regardless of their heritage, several of these rules are applicable in the generic business of assessing what a sensor might be able to detect, recognize, or identify. Although most of these rules were developed for the infrared spectrum, they illustrate important principles that may be applied (albeit with caution) to other parts of the spectrum, including UV, visible, and millimeter wave. Often, targets of interest to the EO sensor designer consist of the metal body and frame containing some kind of engine (such as your car). Such a target can be detected by sensing the metal hardbody (e.g., the roof of your car), the hot engine compartment (the heat dissipated from under the hood), or the spectral engine emission (e.g., the hot CO2 coming out of your tailpipe). The emission of hot gases and particles is generally called a plume. Although all man-made engines produce significant plumes, those of jet engines and rockets draw the most attention. Rocket and jet plumes have long been of interest to EO designers, as these provide bright signature-to-background ratios. Much early work was done in the 1950s and 1960s in remote plume diagnostics by electro-optical instruments for jet and rocket engine development. At least one major contemporary sensor company evolved from a support group charged to develop sensors to characterize its company’s rocket engines. Much effort was expended in the 1960s on large and small rocket plume signatures, thanks to the space and arms races. The signatures of tactical air-to-air and surface-to-air missiles were investigated in the hope of providing effective countermeasures. Plume investigations of large rockets continued in support of early warning efforts. Maturation of this study and the perceived need were formalized during the United States’ Strategic Defense Initiative (SDI) era in which significant effort was expended in refining the plume and hard-body signatures of large missiles and warheads. This tradition is continuing as a result of the increased emphasis on homeland defense (by many nations worldwide) and the unfortunate
recent proliferation of missiles of all types, all able to carry a chemical or biological weapon of mass destruction. This requires defensive weapons to protect against such weapons, which in turn requires accurate target characterization. In the 1990s, significant effort was devoted to determine the signatures of smaller tactical missiles for platform missile warning systems and targeting systems. Ironically, one of the most challenging features of characterization of a threat signature is determining the reflectivity and transparency of the plume. The most difficult of these is the liquid-fueled rocket, as the emissions may consist primarily of water and other hot gases. These appear nearly transparent in many bands, especially in the visible wavelengths. The space shuttle, for example, produces a large opaque plume from its solid rocket motors, but the plumes from the hydrogen-oxygen engines are nearly transparent in the visible. In fact, using visible wavelengths, one can see directly into the engine housings (during launch) on the three engines built into the orbiter. Although much less intense than the infrared signature, the visible and UV signatures of plumes are of interest because of the availability of more mature hardware technologies and smaller diffraction, and because the plume is more confined in spatial extent and is located in a predictable place with respect to the rocket’s hardbody. Available computing power has become sufficient to spur various governments and companies toward developing computer codes that allow system engineers to estimate the magnitude of the signatures in a given spectral bandpass. Some of the most frequently used ones include the Joint Army, Navy, NASA, Air Force (JANNAF) plume radiance code (PLURAD), the Standard Plume Flow Field (SPF-2), and Standardized IR Radiation Model (SIRRIM). These codes were created to estimate signatures from tactical missiles based on complex chemistry and flow fields. The Composite High Altitude Radiance Model (CHARM) is a complex model intended to estimate large rocket plume signatures and shapes by assessing the chemistry of the rocket fuels used and their interaction with the rarefied atmosphere. The signature code craze is not limited to the plumes, as complicated codes have been developed to predict laser cross section, reflectivity, and hardbody signatures throughout the spectrum [e.g., the Spectral Infrared Thermal Signatures (SPIRITS) and the Optical Signature Code (OSC)]. As expected, there has been considerable interest in the reflectivity of target surfaces, because this determines the amount of laser tracker radiation that can be expected to come back to a receiver, determines the visible-band signature, and affects the IR signature. The emissivity can be estimated by subtracting the reflectivity from 1. For the reader interested in more details, the Infrared and Electro-Optical Handbook is a good starting point. It contains information on almost every topic related to targets and backgrounds. Other sources that should not be overlooked are the older versions (similar compilations) called The Infrared Handbook and the older Handbook of Military Infrared Technology. In fact, the older books cover a number of topics more completely than the newer eight-volume set. Look for the older versions in used bookstores and in the offices of older employees. Red, blue, and green cover versions were produced. Be wary, though, because the earlier versions had some errors. 
If possible, try to find a colleague who has a copy of the extensive errata sheet that has been developed over the years. Many specific signature handbooks are available from governments and corporations that are active in the field, and these can provide valuable insight. For up-to-date detailed measurements, explanations of phenomena, and code development, do not overlook the frequent publications of the Military Sensing Symposium, IEEE and SPIE conferences and proceedings, as well as the IRIA web site (http://www.iriacenter.org).
BIDIRECTIONAL REFLECTANCE DISTRIBUTION FUNCTION Nicodemus1 suggested the bidirectional reflectance distribution function (BRDF) as

BRDF = differential radiance/differential irradiance ≈ (dPs/dΩs)/(Pi cos θs) ≈ (Ps/Ωs)/(Pi cos θs)

where Ps = power of the scattered light, generally at a given angle (Generally, the angle is fixed for each plot of a BRDF.)
Ωs = solid angle into which the light is scattered (or the instrument’s receiver solid angle)
Pi = power of the incident light
θs = angle (from normal to the sample surface) at which the light is scattered (or the angle at which the receiver is positioned) (Generally the BRDF is plotted as a function of this angle and has units of 1/solid angle.)
Discussion In the 1950s, it became apparent that the total integrated reflection from a surface did not describe the correct property for critical applications such as baffle design, high-quality optics, illuminated targets, or targets sensed via reflected light (see Fig. 17.1). Nicodemus1 suggested the BRDF, which has been widely used and modified for more exacting tasks. Basically, it defines the reflectance at any angle resulting from an input source at some other angle (see Fig. 17.2). BRDF provides more information than total integrated scatter or simple reflectance or emissivity, as it defines the reflectivity of the surface at all possible combinations of incidence and reflected angles. It can be integrated to achieve integrated scatter and reflectance and, after subtracting from unity, the emissivity according to Kirchhoff’s law for opaque surfaces. BRDF is becoming increasingly important in target phenomenology and frequently is used for satellite remote sensing applications (e.g., the BRDF of a mangrove swamp as opposed to urban development or open sea). As active sensing increases in popularity, the BRDF will see more importance as a target parameter. A perfectly specular source (e.g., a perfectly flat, perfectly reflective mirror) would have a function that is zero at all points, except its Snell reflection angle, at which point it would be a delta function. Figure 17.3 shows the BRDF of a (low-quality) gold mirror at 10.3 µm. Note that the BRDF varies by a factor of 1 million from 15 to 60° when the incident beam is at 60°. A perfect Lambertian source has a BRDF of ρ/π (with units of 1/sr), where ρ is the reflectivity of the surface. Figure 17.4 shows a good Lambertian black surface at 10.3 µm. It is exposed to incident beams at –10° and –60°, and the BRDF varies less than about two orders of magnitude across the plotted angles. BRDFs are frequently used to select low-reflectivity coatings with given specular/Lambertian characteristics for cold shields and baffles.
FIGURE 17.1 Illustration of surface scatter. (From http://ciks.cbt.nist.gov/appearance/.)
FIGURE 17.2 Geometry for BRDF measurements. (From http://ciks.cbt.nist.gov/appearance/.)
FIGURE 17.3 Example of BRDF of a specular surface. This measurement used 10.3-µm infrared radiation, and the incident beam is plotted for 10° (diamonds) and 60° (triangles). (Courtesy of The Research Triangle Institute.)
A material’s BRDF is almost always measured at room temperature, yet it is applied to cryogenic surfaces. One of the authors (Miller) has taken numerous measurements over decades and often finds that the cryogenic reflectivity of the coating is higher than calculated from the room-temperature measurements. It is surmised that one cause of this is that the reflectivity (emissivity/scatter) of black surfaces is almost always measured and quoted at room temperature, because the lab environment is the only convenient and affordable venue in which to take such measurements. Yet many of the Lambertian and absorptive properties of a surface depend on the surface morphology, which can be a function of temperature. Generally, the surface should be rough and cone-like at a scale greater than the wavelength to trap the photons. When cooled, most surfaces contract.
FIGURE 17.4 Example of BRDF of a Lambertian surface, RTI/OS black paint. The BRDF with a 10° incidence is the lower line (diamonds), and the measured BRDF with a 60° incidence is the upper line (triangles). (Courtesy of The Research Triangle Institute.)
A given structure that is very “black” at room temperature because of its surface morphology will contract when cooled to cryogenic temperatures. This surface morphology change can result in the surface being more specular and/or reflective at the wavelength of interest. The user of BRDFs should be cautious that some measurements include cosine corrections while others do not. Additionally, the angles are often defined differently.
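For readers who want to manipulate measured scatter data, the defining ratio and the Lambertian limit are captured in the short sketch below; the example powers and angles are arbitrary, not measurements.

```python
import math

def brdf(scattered_power_W, receiver_solid_angle_sr, incident_power_W, scatter_angle_deg):
    """BRDF ~ (Ps / Omega_s) / (Pi * cos(theta_s)), in 1/sr, per the definition above."""
    theta = math.radians(scatter_angle_deg)
    return (scattered_power_W / receiver_solid_angle_sr) / (incident_power_W * math.cos(theta))

def lambertian_brdf(reflectivity):
    """An ideal Lambertian surface has a constant BRDF of rho/pi (1/sr)."""
    return reflectivity / math.pi

print(round(lambertian_brdf(0.05), 4))   # ~0.0159 1/sr for a 5 percent reflective black surface
print(brdf(1e-6, 1e-3, 1.0, 30.0))       # example measurement geometry with invented numbers
```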
References 1. F. Nicodemus, et al., Geometric Considerations and Nomenclature for Reflectance, NBS (now NIST) Monograph 160, 1977. 2. J. Stover, Optical Scattering, SPIE Press, Bellingham, WA, pp. 19–22, 1995. 3. http://www.iriacenter.org, 2003. 4. J. Conant and M. LeCompte, “Signature Prediction and Modelling,” in Vol. 4, Emerging Systems and Technologies, S. Robinson, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 318–321, 1993. 5. W. Wolfe, “Radiation Theory,” Chap. 1, The Infrared Handbook, W. Wolfe and G. Zissis, Eds., ERIM, Ann Arbor, MI, pp. 1-30 to 1-31, 1978. 6. W. Wolfe, “Optical Materials,” Chap. 7, The Infrared Handbook, W. Wolfe and G. Zissis, Eds., ERIM, Ann Arbor, MI, pp. 7-78 to 7-79, 1978.
CAUSES OF WHITE PIGMENT’S COLOR An appearance of white can be achieved with small pieces of transparent or translucent material as long as they are large enough (as compared with the wavelength of light) to allow for multiple refractions and reflections. This is the case with milk, sugar, snow, beaten egg whites, clouds, and so on.
Discussion It is shown in a number of texts that scatter from particles depends on the relation of the sizes of the particles and the wavelength of light. Rayleigh showed that, for particles about one-tenth of the wavelength of the light, scattering goes as 1/λ⁴ and, for larger particles, the
scattering goes as 1/λ². This is the root of the famous “blue sky” question that Rayleigh answered by showing that the scattering occurs in molecules that are small with respect to the wavelength of light. Hence, when looking straight up at blue sky, shorter wavelengths are scattered more effectively, so we see scattered sunlight predominantly in the blue part of the spectrum. Particles approaching or exceeding the wavelength of light scatter according to Mie theory, which is not easily captured in a simple equation. One of the authors (Friedman) has also had the experience of using powdered quartz as a means of inducing turbidity into very clear and clean waters. The quartz was added to provide a measurable and repeatable turbidity. Although this work was never published, all of the ocean optics researchers at that time employed this method to create a cheap and widely available source of calibration. Titanium oxide is a visually transparent material occurring in the form of small particles that provide a white pigment for some paints. One of the authors (Miller) knows of an instance of an optical engineer taking some titanium oxide particles from the U.S.A. into Canada (for a test). The Canadian customs demanded to examine the container, as it apparently looked like an illegal drug. While examining it, they spilled some on their dark uniforms and the table. When they tried to brush it off, it just stuck to clothing and tablecloth, causing permanent white streaks. It made a permanent mess of their uniforms and any other object laid on the inspection table for some time thereafter. Since the appearance of “whiteness” depends on matching the size of the scattering material to the wavelength of light, paints that include this type of material do not work over wide ranges of wavelength. Infrared reflectivity may be considerably different from that in the visible regime. In addition, one must be certain that the materials added into the clear matrix have a different refractive index; otherwise, the material will look uniformly clear. Small glass beads, organized into sheets and reflectively coated on the rear half, can increase the apparent reflection coefficient in the direction of viewing by factors from 100 to 1500 over that of white paint. They are actually acting as inefficient retroreflectors and are widely sold for bicycle and automobile applications. This rule offers an easy approach for creating high-reflectivity surfaces without resorting to exotic approaches such as retroreflectors.
Reference 1. E. Hecht, Optics, Addison-Wesley, Reading, MA, p. 114, 1990.
CHLOROPHYLL ABSORPTANCE Healthy plants tend to have strong absorptance near 0.68, 1.4, and 2.0 µm. Camouflage and distressed plants show less absorptance. In extreme cases, the absorptance can approach zero at these wavelengths.
Discussion This is based on the spectra of water and chlorophyll. Plants that rely on photosynthesis must have water and chlorophyll to be healthy. The strong visible absorption of chlorophyll ends at about 0.7 µm, and a high transmission band extends to about 1.3 µm. By making observations in these short-wave infrared (SWIR) bandpasses, the health of plants can be determined. Many diseases, and the onset of autumnal changes, can first be detected by observing the content of chlorophyll and water. The high spectral absorption coefficient of water within healthy tissue produces deep reflectance and transmittance minima near 1.4 and 2.0 µm. Since healthy tissue containing active chlorophyll must also contain some water to permit photosynthesis, the
concurrent appearance of a chlorophyll absorption band near 0.68 µm and the water absorption bands near 1.4 and 2.0 µm is generally expected. Frequently the change of leaf spectra resulting from plant stress is first made manifest by disruption of the photosynthetic process. The disruption is caused by the destruction of chlorophyll before water has been completely lost by the leaf. Consequently, the water absorption bands may still be present in the leaf spectra after the leaf is dead.1
The reader should also be aware that healthy plants also absorb deep in the blue and produce fluorescence in the near IR that contributes to the signature that is detected. Hence, if you are attempting to measure the reflectance of leaves in situ, be sure to include the influence of fluorescence that will be detected by your spectrometer.
References 1. W. Wolfe and G. Zissis, The Infrared Handbook, ERIM, Ann Arbor, MI, and the Office of Naval Research, Washington, DC, pp. 3-129 to 3-142, 1978. 2. K. Seyrafi and S. Hovanessian, Introduction to Electro-Optical Imaging and Tracking Systems, Artech House, Norwood, MA, p. 31, 1993. 3. W. Porter and H. Enmark, A System Overview of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), http://www.iriacenter.org, 2003. 4. See the many vegetation spectra found at http://www.iriacenter.org, 2003.
EMISSIVITY APPROXIMATIONS If you don’t know the emissivity of an object, assume 0.2 for metals and 0.8 for anything else. Assume 0.9 for anything that is claimed to be “black.”
Discussion It is important to know an object’s emissivity when doing calculations related to target signatures. The emissivity can cause an order of magnitude change in radiant emittance. Generally, metals are of low emissivity, with optically smooth polished surfaces having an emissivity of 0.01 or less. For most metal objects, the emissivity is closer to 0.2 or 0.3. Emissivity tends to be 0.8 (or at least between 0.6 and 0.9) for most nonmetallic objects. In addition, if the sensor is looking into a cavity, such as a jet or rocket engine or person’s mouth, the emissivity will tend to approach unity no matter what material is being viewed. A rule in Chap. 14, “Radiometry,” describes the role of geometry in defining the effective emissivity of a closed space. Emissivities can vary by wavelength, and this can be a very useful property. Several “thermal” paints are “white” and of relatively low emissivity in the visible but of high emissivity in the infrared. When applied to a sunlit object (e.g., a satellite), these will reduce its temperature, as the solar wavelengths will be efficiently reflected, and the thermal emission in the IR will be high as well. The emissivity of an object is also a function of surface morphology and coating. At the approximate size of the wavelength, a rough surface with cavities will have higher emissivity than the same material with a smooth surface. Figure 17.5 is a scanning electron microscope (SEM) image of RTI/OS black, which is a modified paint. The surface has extreme roughness at a scale of a few microns, making it very black from the visible through LWIR. See also Fig. 17.6. The above is useful only for a quick estimate of an object’s emissivity when little is known about it. Appendix A has a table of emissivities for common materials. All of these tables are approximate infrared emissivities and should be used with caution, as emissivity varies with temperature, wavelength, and surface roughness.
FIGURE 17.5 A rough surface morphology on the scale of the wavelength can create a black surface as shown in this SEM of RTI/OS black paint. (Courtesy of The Research Triangle Institute.)
FIGURE 17.6 A discrete Fourier transform of the SEM of Figure 17.5. Strength of the darkness denotes an increase in power. This shows that the majority of the structure is concentrated at a scale of 10 µm or larger, which indicates that the surface should be black at those wavelengths (which it is). (Courtesy of The Research Triangle Institute.)
There are several emissivity libraries on the Internet, including excellent ones found at the following sites:
■ http://www.iriacenter.com/backgrnds.nsf/Emissivities?OpenPag, 2003.
■ http://www.x20.org/library/thermal/emissivity.htm, 2003.
■ http://www.electro-optical.com/bb_rad/emissivity/matlemisivty.htm, 2003.
THE HAGEN–RUBENS RELATIONSHIP FOR THE REFLECTIVITY OF METALS The reflectivity of metals can be estimated by

R ≈ 100 – 3.7(ρ/λ)^0.5

where R = reflectivity (in percent)
ρ = resistivity of the metal in microhm meters (e.g., 0.02 µΩ·m for copper)
λ = wavelength (in µm)
Discussion The amount of light reflection of a given (solid) object depends on the exterior surface material, roughness, and surface coating. The reflectivity of a metal is related to the complex index of refraction of the surface coating. This can be estimated for a given wavelength by the metal’s absolute magnetic permeability and electrical conductivity. With some substitution and reasonable assumptions, the above rule can also be expressed as

R = 100 – 100(2/c)[4πν/(µσ)]^0.5

where R = reflectivity (in percent notation)
ν = frequency of the light radiation
σ = electrical conductivity
c = speed of light
µ = absolute magnetic permeability
Schwartz et al.2 give another variant of the relationship as

R(ω) = 1 – (2ω/πσdc)^1/2

where σdc = DC conductivity
ω = frequency of the light
This rule applies to wavelengths from the visible through IR. This rule can be used for a first-cut quick estimate of reflectivity. It also provides an estimate of the way the reflectivity of materials changes with wavelength. This can be useful in estimating the target signature from reflected sunlight or laser illumination if the reflectivity is known at one wavelength.
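A sketch of the second form above, evaluated in SI units, is shown below; the copper conductivity is a handbook value, and the 10-µm wavelength is only an example.

```python
import math

MU0 = 4.0 * math.pi * 1e-7   # vacuum permeability, H/m
C = 2.998e8                  # speed of light, m/s

def hagen_rubens_reflectivity(wavelength_um, conductivity_S_per_m, mu=MU0):
    """Fractional reflectivity from the second form above:
    R = 1 - (2/c) * sqrt(4*pi*nu / (mu*sigma)), with SI inputs."""
    nu = C / (wavelength_um * 1e-6)
    return 1.0 - (2.0 / C) * math.sqrt(4.0 * math.pi * nu / (mu * conductivity_S_per_m))

print(round(hagen_rubens_reflectivity(10.0, 5.96e7), 3))   # copper at 10 µm: ~0.985
```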
References 1. D. Fisher, Rules of Thumb for Scientists and Engineers, Gulf Publishing, Houston, TX, 1988. 2. A. Schwartz et al., “On-Chain Electrodynamics of Metallic (TMTSF)2X Salts: Observation of Tomonaga-Luttinger Liquid Response,” Physical Review B, 58(3), pp. 1261–1271, July 15, 1998. 3. H. Lee et al., “Optical Properties of a Nd0.7Sr0.3MnO3 Single Crystal,” Physical Review B, 60(8), pp. 5251–5157, August 15, 1999-II.
HUMAN BODY SIGNATURE 1. The surface of the human body emits about 500 W/m² and has a radiative transfer to an ambient environment (of 23°C) of about 130 W. 2. In the 3- to 5-µm bandpass, it radiates into π sr approximately 7.2 W/m². 3. In the 8- to 12-µm bandpass, it radiates into π sr approximately 131 W/m². 4. The peak wavelength of a human’s radiant emittance is about 9.5 µm.
Discussion The human body emits heat by a variety of means, including infrared emission, evaporation of surface moisture (sweat), and breathing. Generally, the body has a very high emissivity in the thermal infrared (0.97 or more) and a surface area of around 2 m². Skin is quite antireflective (black and very Lambertian) as discussed in an associated rule. The surface temperature of a human is less than the often-quoted internal body temperature of 37°C. The surface temperature is a complex function of metabolism, state of activity, and health, but one can assume it to be between 30 and 38°C for a person at rest, with 32 to 34°C as a nominal average. The large difference between the above heat transfer and total emittance is a result of radiative input from the ambient environment, which is generally relatively close to that of the body (e.g., a 23°C room is only 3.5 percent colder than the surface of the body). Many web sites and other references confirm the radiative transfer for a relatively hairless, naked human in a 23°C environment. They show that the human body tends to transfer about as much heat as a light bulb (net), although we radiate as much as a hair dryer (total). An adult male’s basal metabolism generates about 90 W at rest, and more when active. When more than 90 W is transferred from the body, we feel cold; when less, we feel hot. This is why it feels colder on the ski lifts than the slopes. When skiing, the muscles and brain are working hard, and more heat is generated as opposed to when sitting on a lift. The brain consumes about 20 W, which heats it up, which is why we lose so much heat through our heads and need hats in the winter. If the exterior temperature is the same as that of the body, there is no net heat transfer, yet we still radiate. If the exterior temperature is colder, then the body loses net heat through radiation according to
4
εAσ( T B – T A ) where ε = emissivity A = area σ = Stephan–Boltzmann constant TB = body’s surface temperature (in kelvins) TA = ambient temperature (also in kelvins)
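A minimal numerical check of this expression, using assumed nominal values (emissivity 0.97, 2 m2 of area, a 33°C skin surface, and a 23°C room), lands close to the roughly 130-W radiative transfer quoted above.

STEFAN_BOLTZMANN = 5.670e-8   # W/(m^2 K^4)

def net_radiative_loss_w(emissivity, area_m2, t_body_k, t_ambient_k):
    # Net power radiated by the body surface into surroundings at t_ambient_k
    return emissivity * area_m2 * STEFAN_BOLTZMANN * (t_body_k**4 - t_ambient_k**4)

# Assumed nominal inputs for a resting, unclothed adult
print(round(net_radiative_loss_w(0.97, 2.0, 273.15 + 33.0, 273.15 + 23.0)))   # ~120 W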
Body heat is regulated by the hypothalamus. When we radiate much more heat than we produce, the hypothalamus triggers mechanisms to produce more heat. These include shivering to activate muscles; vasoconstriction to decrease the flow of blood (and thus heat) to the skin; and secretion of norepinephrine, epinephrine, and thyroxine to increase heat production. If the exterior temperature is hotter, mechanisms in addition to radiation help cool the body. When the body’s surface temperature reaches about 37°C, perspiration results. The body’s heat production remains about constant, so the only way we can transfer heat is through evaporation of perspiration. In dry air, this evaporation occurs much more quickly than in humid air, so more heat is removed per unit of time. This explains the phenomenon of “dry heat”: 38°C in Tucson is more pleasant than 26°C in Orlando. Note that these results assume relatively hairless, naked humans. Clothing and thick hair (such as on many people’s heads, the author’s excluded) reduce the radiation. Also, many animals have different body and skin temperatures, some have more hair, and some don’t sweat.
References
1. http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/bodrad.html, 2003.
2. http://www.space.ualberta.ca/~igor/phys_224/outline_2.pdf, 2003.
3. http://web.media.mit.edu/~testarne/TR328/node2.html, 2003.
4. http://www.shef.ac.uk/~phys/teaching/phy001/unit7.html, 2003.
5. http://www.tiscali.co.uk/reference/encyclopaedia/hutchinson/m0006045.html, 2003.
6. R. Hudson, Jr., Infrared Systems Engineering, John Wiley & Sons, New York, p. 103, 1969.
IR SKIN CHARACTERISTICS Beyond about 1.4 µm, human skin has high emissivity and is very Lambertian.
Discussion
Regardless of the visible characteristics of skin, extreme absorption of light occurs at wavelengths beyond about 1.3 µm, implying a high emissivity and, hence, a bright thermal emission. Moreover, as a result of the surface morphology (many little pits, hairs, and imperfections), skin is extremely Lambertian. In fact, living skin tends to be “blacker” than all but the very best coatings and surface treatments in the MWIR and LWIR. The data presented here (from Miller) do indicate a slight increase in backscatter reflectivity when observing at angles of low incidence, but this is still tiny (being less than a factor of 4) as compared to angles near normal. To illustrate this result, Fig. 17.7 shows the bidirectional reflectance distribution function (BRDF, see the associated rule for a definition) for live human skin taken in the MWIR (4.46 µm) and the LWIR (10.3 µm). The figure shows the forward scatter BRDF in MWIR and LWIR at an incident beam of minus 20°. That is, the beam is incident at –20° from the normal, so the specular in-plane forward scatter peak would appear at 20° on this plot. This reflection peak (following Snell’s law) is present in the LWIR data but is extremely small. For example, bare aluminum has a specular peak about five orders of magnitude above the low points, and a mirror has even larger peaks. The MWIR data are extraordinarily Lambertian. The integrated BRDF data indicate an LWIR emissivity of about 0.98 and a MWIR emissivity of over 0.99. Additional data from multiple human subjects (of both sexes) and relatively flat, hairless portions of the body support these assertions. All of these data have good signal-to-noise ratios (e.g., 8 to 100) and were repeatable to within about a factor of 2.
FIGURE 17.7 Forward scatter BRDF from human skin, data averaged from two adult males and two females.
The inflections and slight curves should not be taken too seriously, as it is impossible to find perfectly flat skin, so there is some error induced by the angularity of skin and potential minor movements by the person during the measurement. The point is that skin has very high emissivity and is very Lambertian in the infrared bands; thus it is very low in reflectivity. Do you need a quick blackbody? Then tape a temperature sensor on someone’s hand and view the hand. Recent work in human evolution indicates that about 1.6 million years ago, Homo ergaster exhibited an increased number of sweat glands that kept his brain from overheating.1 This could also explain why humans have such high emissivity in the thermal IR. Having high emissivity in the infrared improves the natural ability of the body to radiate heat, which we do quite well. When really hot, we also sweat to cool by means of evaporating water.
Reference 1. N. Jablonski and G. Chaplin, “Skin Deep,” Scientific American, pp. 74–81, October 2002.
JET PLUME PHENOMENOLOGY RULES
1. The radiance (W/m2/sr) of a plume from a jet aircraft engine at 35,000 ft is about one-half of what it is at sea level.
2. It is better to observe an airplane’s plume in the region of the CO2 emission band at 4.3 µm than in the water band at 2.7 µm.
3. The extent of a plume from an airplane is roughly equal to the length of the airplane.
4. A turbojet engine can be considered a graybody with an emissivity of 0.9, a temperature equal to the exit gas temperature, and an area equal to that of the exhaust nozzle.
5. For a subsonic aircraft, the exhaust temperature after expansion is approximately equal to 0.85 multiplied by the exhaust gas temperature (in the tailpipe) in kelvins.
6. For constant engine settings, plumes are larger at higher altitudes, where the static atmospheric pressure is lower.
Discussion
A jet engine burns fuel with the atmosphere’s oxygen and produces thrust with strong emissions in the water and CO2 bands. Pressure and temperature broadening of the plume emission bands will cause emissions just short and long of the atmospheric absorption band. Generally, the radiance from a turbofan is less than that from a turbojet. Also, in the LWIR, the heat of the jet engine cavity usually produces a much larger signature than the plume. These rules are based on empirical observations for a number of types of aircraft engines and assume
■ The aircraft is in normal operation; that is, the engine is not set for “afterburning.”
■ The aircraft uses a classical turbojet engine design.
Additionally, these assertions are very bandpass dependent.
References
1. R. Hudson, Jr., Infrared Systems Engineering, John Wiley & Sons, New York, pp. 86–90, 1969.
2. R. Barger and N. Melson, “Comparison of Jet Plume Shape Predictions and Plume Influence on Sonic Boom Signature,” NASA Technical Paper 3172, March 1992.
LAMBERTIAN VS. SPECULAR No surface is perfectly Lambertian or specular. Generally, a surface is considered Lambertian if its bidirectional reflectance distribution function (BRDF) peak reflection at angles near normal incidence is less than one order of magnitude above the average. Conversely, a surface is generally considered specular if its peak is four orders (or more) of magnitude above its average.
Discussion
A perfectly Lambertian surface emits equally over 2π steradians. A perfectly specular surface emits in an infinitesimally small angle determined by Snell’s law. Nothing is perfect in nature, so surfaces are a combination of the two. Note that these definitions do not have anything to do with total reflectance. Surfaces exist that are very low in reflectance yet still very specular, and vice versa. For example, gloss black paint has an overall reflectance that is quite low, yet it is specular. An automobile with a high-quality black finish will provide a very nice reflective image. Paper is the opposite; it is designed to be of high reflectivity but is also intended to be Lambertian (diffuse). Most readers find that the glossy paper used in some books and magazines is annoying, because one might be distracted by the reflection of the light source. The characteristics that you desire in a target, background, or hardware surface treatment must be carefully analyzed and should be determined by statistical ray traces. Figure 17.8 illustrates the difference between a notional specular (or mirror-like) surface and a notional Lambertian surface. If an incident beam encounters the surfaces at a 45° angle from normal, the Lambertian surface will have about the same level of reflectance at all observing angles. The specular surface will generally have a lower reflectance at all angles except near the Snell reflection angle, where it will be many orders of magnitude greater. This effect had considerable relevance in World War II. The British air forces discovered that their aircraft were less likely to be seen by ground spotters using searchlights when they painted the bottom of their aircraft with highly specular black absorptive paint
FIGURE 17.8 A specular surface has a strong reflection at the Snell angle from an incident beam, whereas a Lambertian one does not. For the Lambertian, the same amount of energy may be reflected, but at many more angles.
as opposed to the more intuitive choice, diffuse absorptive paint. In retrospect, the reason is clear. With specular paint, only spotters that happened to be at exactly the Snell angle would catch a glimpse of the light reflected from the bottom of the aircraft. In the diffuse, or Lambertian case, the light from any illuminating source was scattered into a hemisphere but dimly. The brighter the illuminating source, the more likely that the diffuse reflection will be detected.
LASER CROSS SECTION The laser cross section of a target is about 10 percent of the target area projected toward the laser.
Discussion The laser cross section of objects with complex surface shapes is an important field of study for many types of applications. The classic concern is the detectability of military vehicles, as some targeting technologies use lasers to determine range and generate signals for pointing control. The effective laser cross section of a target is generally much less than the projected area. For example, consider a spherical target (with radius r) with a diffuse (Lambertian) surface. The projected area of the sphere is πr2 but the radiant intensity is the incident irradiance times the reflectivity and divided by π. Because the reflectivity of most targets is less than 80 percent and greater than 20 percent, the effective cross section is between 0.2/ π and 0.8/π times the physical cross section. The first value is 0.06, and the latter value is 0.25. On average, the effective cross section from a radiometric perspective is on the order of 10 percent, and more for high-cross-section targets and less for stealth targets. Moreover, in military applications, it is quite clear that the enemy is going to use materials to suppress the reflectivity of its vehicles. The U.S. Air Force maintains a dedicated test
range to measure the properties of targets, paints, and other factors that determine the size of the cross section that will be encountered in the battlefield. The above details assume that the surface is a Lambertian reflector of average reflectivity. Try a rigorous calculation of laser cross section (or a model designed for such purposes) before designing a system. This rule is handy for quick thought problems and what-ifs when other information is lacking.
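A short sketch of the radiometric bookkeeping behind this rule follows; the 30 percent reflectivity and the 1-m2 projected area are assumed example inputs for illustration.

import math

def effective_cross_section_m2(reflectivity, projected_area_m2):
    # Effective cross section in the sense used here: reflectivity/pi times the projected area
    return (reflectivity / math.pi) * projected_area_m2

def lambertian_radiant_intensity_w_sr(irradiance_w_m2, reflectivity, projected_area_m2):
    # Radiant intensity of a diffuse target: reflected power spread over pi steradians
    return irradiance_w_m2 * reflectivity * projected_area_m2 / math.pi

rho, area = 0.3, 1.0   # assumed reflectivity and projected area
print(f"{effective_cross_section_m2(rho, area):.2f} m^2")                 # ~0.10, i.e., ~10% of the area
print(f"{lambertian_radiant_intensity_w_sr(100.0, rho, area):.1f} W/sr")  # for 100 W/m^2 of laser irradiance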
MORE PLUME RULES
1. A plume’s brightness in just about any band varies approximately linearly with the thrust.
2. The diameter of the plume is approximately

D ≈ (F/P)^1/2

where D = diameter of the plume
F = thrust in units that agree with P
P = ambient pressure in units that agree with F
Discussion
The plume from a rocket or jet usually (depending on the bandpass and aspect viewing angle) contains most of the usable signature that will be seen by a sensor. Within a band, for a given resolution at the same altitude, the signature varies (can be scaled) as the thrust is increased, as described in another rule in this chapter. The signature varies in a complex fashion but, when all is said and done, it usually is pretty close to linear. The diameter of the plume can be estimated by momentum balancing the plume pressure against the effective atmospheric pressure. Most of the signature will be contained within this diameter. This is based on approximation of flow-field calculations and inspired observations. The rule was devised for rockets; it may be applied to jets, small missiles, and other vehicles with caution. It assumes that the exhaust velocity greatly exceeds vehicle velocity, which is not the case when the rocket has been thrusting for some time and is moving at great speed.
PLUME THRUST SCALING
One can scale the signature of a jet or missile’s plume to that of another by the size of the thrust, or

I1 = I2 (N1/N2)^x

where I1 = in-band radiant intensity of plume 1
I2 = in-band radiant intensity of plume 2
x = a constant depending on spectral bandpass [The constant, x, is usually between 0.7 and 2. Assume 1.0 (linear) if you don’t have specific data for the engine type and fuel.]
N1, N2 = thrust in newtons for the engines producing the plumes
Discussion
Signatures from small tactical missiles are typically from 100 to 10,000 W/sr/µm, depending on bandpass and viewing aspect angle. ICBM and payload-orbiting rockets1 typically range from 10^5 to 10^7 W/sr/µm. The in-band radiant intensity of a missile is proportional to the rate of fuel combustion, and that is proportional to the thrust of the motor. Therefore, the signature (within a defined band) tends to scale nearly linearly with thrust. Do not use this rule to scale across different spectral bands, as missile plumes are strongly spectrally selective emitters. A slight change in bandpass prevents accurate scaling. Also, scale only similar fuels, motors, and motor geometries, and only for the same altitudes. Additionally, Wilmot1 gives a scaling law for the viewing angle variations in observed signatures:

Iθ = I90 sin(θ + φ)

where Iθ = radiant intensity of the missile when observed at angle θ
I90 = intensity at the beam viewing angle (sideways, 90° from its velocity vector)
θ = angle between the velocity vector (frequently the same as the axis of the plume) and the observer
φ = offset angle, a small correction whose value depends on the geometry of the missile and plume [This may compensate for the difference between the velocity vector and the plume axis (if not aligned). This is an apparent effect depending on the viewing geometry.]
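Both scaling relations are easy to apply numerically; the sketch below uses placeholder thrusts, an exponent of 1.0, and a zero offset angle purely as assumed example inputs.

import math

def scale_plume_intensity(i_ref_w_sr, thrust_new_n, thrust_ref_n, x=1.0):
    # I1 = I2 * (N1/N2)^x, with x between roughly 0.7 and 2 depending on bandpass
    return i_ref_w_sr * (thrust_new_n / thrust_ref_n) ** x

def aspect_scaled_intensity(i_beam_w_sr, theta_deg, phi_deg=0.0):
    # I(theta) = I90 * sin(theta + phi), the viewing-angle correction quoted above
    return i_beam_w_sr * math.sin(math.radians(theta_deg + phi_deg))

# Assumed example: a 5000 W/sr plume at 20 kN scaled to a 60-kN motor,
# then viewed 30 degrees off the velocity vector
i_scaled = scale_plume_intensity(5000.0, 60e3, 20e3, x=1.0)
print(round(i_scaled))                                  # ~15,000 W/sr
print(round(aspect_scaled_intensity(i_scaled, 30.0)))   # ~7,500 W/sr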
References
1. D. Wilmot et al., “Warning Systems,” in Vol. 7, Countermeasure Systems, D. Pollock, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, pp. 19–21, 1993.
2. R. Peters and J. Nichols, “Rocket Plume Image Sequence Enhancement Using 3D Operators,” IEEE Transactions on Aerospace and Electronic Systems, 33(2), April 1997.
3. E. Beiting and R. Klingberg, “K2 Titan IV Stratospheric Plume Dispersion,” Aerospace Report TR-97 (1306)-1, SMC-TR-97-01, January 10, 1997.
ROCKET PLUME RULES
1. The size of the plume increases in diameter with altitude, and the intrinsic shock structures expand greatly with altitude. For large rockets, the plume can eventually envelop the entire rocket at the highest altitudes, just before burnout.
2. A minimum in infrared intensity is observed not far from the time that the missile velocity and exhaust velocity are the same in magnitude. This minimum is generally observed at missile altitudes from 70 to 90 km, in many cases coincidentally close to the time of staging.
3. A solid rocket plume has a temperature of about 2000°C but may have a low emissivity, depending on its density. An example can be found by looking at the plumes of the Space Shuttle solid rocket boosters. The plumes contain aluminum oxide and are very white, indicating a low emissivity in the visible bandpass.
Discussion
These rules result from the diminishing effects of the atmosphere as the rocket ascends combined with empirical observations of current rocketry. Although data suggest these rules, they should be cautiously applied. A number of phenomena cause these rules to be approximations only. For example, the “high-altitude trough” causes the signature of a rocket to get brighter with altitude but then diminish for a range of higher altitudes. This is the result of reduced afterburning of fuels that are not consumed in the engine as tends to occur at lower altitudes. The reduced afterburning results from reduced oxygen content in the atmosphere. As the rocket continues to accelerate, brightness returns to the plume, because the unburned fuel, although in a low-oxygen environment, then encounters the available oxygen with enough speed to stimulate burning. These rules are useful in predicting the signatures that will be available for tracking during rocket flight. They allow estimation of the performance of sensors by providing the necessary target spectral characteristics.
References
1. I. Spiro and M. Schlessinger, Infrared Technology Fundamentals, Marcel Dekker, New York, pp. 60–62, 1989.
2. http://code8200.nrl.navy.mil/lace.html, 2002.
3. R. Peters and J. Nichols, “Rocket Plume Image Sequence Enhancement Using 3D Operators,” IEEE Transactions on Aerospace and Electronic Systems, 33(2), April 1997.
4. E. Beiting and R. Klingberg, “K2 Titan IV Stratospheric Plume Dispersion,” Aerospace Report TR-97 (1306-1), SMC-TR-97-01, January 10, 1997.
5. E. Beiting, “Stratospheric Plume Dispersion: Measurements from STS and Titan Solid Rocket Motor Exhaust,” Aerospace Report TR-99(1306-1), SMC-TR-99-24, April 20, 1999.
SOLAR REFLECTION ALWAYS ADDS TO SIGNATURE The Sun is bright. When present, solar reflection always adds signature to a target, regardless of the bandpass. Specific gains depend on the conditions, sensor, and bandpass.
Discussion
Imaging systems detect either reflected light, emitted light, or both. Increases in signature from solar reflection are usually great for wavelengths less than ≈3 µm and inconsequential beyond ≈5 µm (see associated rule in Chap. 4, “Backgrounds”). However, beyond 5 µm, the target may absorb solar irradiation, causing an increase in temperature. Solar reflection is usually measurable but usually not significant between these two wavebands. Total solar irradiance at the ground (insolation), CS, in W/m2, can be expressed as

CS = 1353A [1 + 0.0338 cos(2π(n – 3)/365)]

where A = fraction of solar radiation transmitted by the atmosphere (Outside the atmosphere, A is 1. At zero altitude, A varies from essentially 0 in bad weather to about 0.81 for very clear conditions. Reference 1 provides some empirical
relationships between solar radiation and solar angle for different cloud types that define A.)
n = Julian day
The amount of solar irradiation on a horizontal plane is the sum of direct solar illumination from the disk of the Sun (Sn) and the illumination from diffuse sky caused by scattering (D). For clear sky conditions, the relation between the total irradiance and the direct illumination is given by1

CS = Sn sin(ϑs) + D

and

Sn = [3CS – CS sin(ϑs)]/[2 sin(ϑs)]

where ϑs = elevation angle of the Sun
Atmospheric effects must be considered if operating within the atmosphere. In the visible and UV spectral regions, objects are typically viewed by solar reflection only. In the infrared, the object’s own thermal emission is normally used to provide the signal to detect the target. However, in the shortwave and midwave infrared, reflection of radiation emitted by the Sun (and sometimes the Earth or Moon) can contribute significantly to a cold object’s signature. The contribution may be enough to allow smaller optics or a less-sensitive (and cheaper) focal plane. For example, consider a 1-m2 satellite at a temperature of 300 K, a reflectivity of 0.3 (Lambertian), and an emissivity of 0.7. It is desired to observe this target with an 8- to 12-µm, a 3- to 5-µm, and a visible bandpass. Assume that the observation is done by another satellite in Earth orbit with no background. By using the Planck equation and solar emission tables from the IR Handbook, Table 17.1 can be generated. This does not consider background effects and the potential heating of the target (the latter also contributing to its emitted signature).

TABLE 17.1 Radiant Intensity as a Function of Bandpass (Does Not Consider Heating by the Sun)
                                              0.4- to 0.6-µm band   3- to 5-µm band   8- to 12-µm band
Thermal radiant intensity (W/sr)              Essentially 0         1.5               28
Solar reflection (expressed in W/sr)          35                    2.2               0.7
Total (W/sr)                                  35                    3.7               29
Percent of signature contribution by the Sun  100                   60                2
One can see that the solar contribution is dominant in the visible. In the IR bands, it contributes significantly to the signature of the MWIR band while being only a minor contributor to the LWIR band. Nevertheless, it does contribute something to every band.
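The insolation expression above is straightforward to evaluate; in the sketch below, the atmospheric transmission of 0.75 and the day numbers are assumed example inputs.

import math

def insolation_w_m2(julian_day, atmos_fraction):
    # C_S = 1353 * A * [1 + 0.0338 * cos(2*pi*(n - 3)/365)]
    return 1353.0 * atmos_fraction * (1.0 + 0.0338 * math.cos(2.0 * math.pi * (julian_day - 3) / 365.0))

print(round(insolation_w_m2(3, 0.75)))     # early January (Earth near perihelion): ~1050 W/m^2
print(round(insolation_w_m2(172, 0.75)))   # late June: ~980 W/m^2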
Reference 1. P. Jacobs, Thermal Infrared Characterization of Ground Targets and Backgrounds, SPIE Press, Bellingham, WA, pp. 34–37, 1996.
TEMPERATURE AS A FUNCTION OF AERODYNAMIC HEATING
The stagnation temperature caused by aerodynamic heating can be estimated as1

T = Tamb [1 + r ((γ – 1)/2) M^2]

where T = stagnation temperature in kelvins
Tamb = ambient temperature of the air
r = recovery factor (usually between 0.8 and 0.9); for laminar flow, use r = 0.85, and for turbulent flow, use1 r = 0.89
γ = ratio of the specific heats of air at constant pressure to that at constant volume (usually 1.4)
M = Mach number
With some assumptions, the first equation can be further simplified for high-altitude flight as

T = 217(1 + 0.164M^2)
Discussion This is a dual-use rule in that it can be used (a) to estimate the temperature of a high-speed target in the atmosphere and (b) to estimate the temperatures that will be encountered in designing sensor components, such as windows, that will be used in various types of airborne sensors. This provides a basic piece of information about the design of such sensors, given that elevated window (or dome) temperatures may result in emissions in the detection band of the sensor via photon noise, and these may need to be considered. The temperature and the emissivity of the window material determine how much background radiation flux (and therefore photon noise) the window adds to the sensor. The equation gives the stagnation temperature of the air at the surface of the object when moving directly against the air. The actual temperature of the object will be somewhat lower. For instance, the temperature of a dome will fall off rapidly as the position of interest moves away from the center of the dome. This is accounted for by r, the recovery factor. Hudson suggests using 0.82 for laminar flow and 0.87 for turbulent flow. The first equation applies for Mach numbers less than about 6. For higher Mach numbers, particularly above 8, Gilbert2 suggests that the Tamb term should be divided by approximately 2.
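A minimal sketch of both forms of the estimate is given below; the Mach numbers and the 217-K ambient temperature are example inputs.

def stagnation_temperature_k(t_ambient_k, mach, r=0.85, gamma=1.4):
    # T = T_amb * [1 + r*((gamma - 1)/2)*M^2]; r ~ 0.85 laminar, ~0.89 turbulent
    return t_ambient_k * (1.0 + r * ((gamma - 1.0) / 2.0) * mach**2)

def high_altitude_stagnation_k(mach):
    # Simplified high-altitude form, T = 217*(1 + 0.164*M^2)
    return 217.0 * (1.0 + 0.164 * mach**2)

for m in (0.9, 2.0, 4.0):   # example Mach numbers
    print(m, round(stagnation_temperature_k(217.0, m)), round(high_altitude_stagnation_k(m)))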
References
1. J. Accetta, “Infrared Search and Track Systems,” in Vol. 5, Passive Electro-Optical Systems, S. Campana, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 223, 1993.
2. K. Gilbert et al., “Aerodynamic Effects,” in Vol. 2, Atmospheric Propagation of Radiation, F. Smith, Ed., of The Infrared and Electro-Optical Systems Handbook, J. Accetta and D. Shumaker, Exec. Eds., ERIM, Ann Arbor, MI, and SPIE Press, Bellingham, WA, p. 241, 1993.
3. R. D. Hudson, Jr., Infrared Systems Engineering, John Wiley & Sons, New York, p. 101, 1969.
4. S. Maitra, “Aerodynamic Heating of Ballistic Missile Including the Effects of Gravity,” Sadhana, Vol. 25, pp. 463–473, October 2000.
5. R. Quinn and L. Gong, “Real Time Aerodynamic Heating and Surface Temperature Calculations,” NASA Technical Memorandum 4222, August 1990.
Chapter 18
Visible and Television Sensors
This chapter contains rules relating to sensor systems and detectors operating in the visible portion of the electromagnetic spectrum. This spectral slice has proven to be the technically easiest to implement (thanks to nature’s gifts of phosphors for displays and silicon for detectors), has provided the easiest images to interpret (thanks to our Sun peaking in the visible wavelengths and humans evolving their only imaging sense in this spectrum), and has been able to address the largest market (again, thanks to human familiarity with imaging in this spectrum).

After still photography was developed and popularized in the nineteenth century, the idea of moving images was ignited in the minds of many scientists and engineers. While many worked on technology to produce moving photographic images (surprisingly still present in the modern cinema), some early pioneers realized that these moving images would reach a wider audience if electronically acquired and disseminated. Paul Nipkow developed a rotating-disk mechanically scanned television system as early as 1884, and the first CRT appeared as early as 1897. Ever since the dawn of the first moving pictures, the desire to include sound with electro-optical images was paramount. By the time moving film pictures gained popularity, ex-Idaho potato farmer Philo Farnsworth was hot on the trail of all-electrical television while others were still trying to perfect mechanical television—both with no market success.

The world’s first public demonstration of a television system (it was of a mechanical architecture) was presented on January 23, 1926, by John Logie Baird of England. Baird later went on to license the Farnsworth electronic television and receive a British Broadcasting Corporation (BBC) contract. However, technical difficulties and an inopportune fire resulted in his loss of the follow-on BBC contract to EMI. Zworykin, at RCA, filed several patents in the 1920s, 1930s, and 1940s relating to electronic televisions. Within a few years of Baird’s 1926 demonstration, televisions were available to consumers in the U.S. and England. However, they didn’t catch on, because there was little to watch, and they were very expensive. (Not much has changed, as these are the very barriers of HDTV at the time of this writing.) In 1935, a U.S. court affirmed that Farnsworth had the controlling patents on television. This led to RCA licensing Farnsworth’s patents in 1939 (the first time RCA paid royalties rather than collecting them). Farnsworth also envisioned cable television as early as 1937 in an agreement between his company and AT&T.1 In these early days, AT&T was acutely
interested in visible cameras and displays for a picture-phone that, like mechanical television, also never received widespread consumer use. Color television was promoted by CBS as early as 1940, using 343 lines and 120 fields per second.2 Heated wars on standards erupted, delaying the acceptance of color for almost three decades. After the Supreme Court intervened in 1951, CBS actually broadcast color for a few months and then abandoned it, because no one could watch it on a black-and-white TV. Color displays reached maturity in the late 1960s, when a broadcast standard that allowed black-and-white receivers to display the images was developed.

Back in the 1940s, mechanical television gave way to the electronic television pioneered by Farnsworth. The work of the National Television System Committee (NTSC) laid the necessary foundations that made monochrome television practical in the U.S., and its 1941 standards (subsequently adopted by the FCC) are still used today. The reader is referred to the introduction of Chap. 7, “Displays,” for a related discussion of the broadcast and display standards of NTSC and phase alternate line (PAL).

In the 1950s, the Academy of Television Arts and Sciences decided to give out an award akin to the Oscar. They named it the “IMMY” after RCA’s Image Orthicon Camera (frequently called by that name), based on the Farnsworth patents that they previously licensed. This was later changed to “Emmy” to reflect the female nature of the statue.5 Perhaps it is time to rename it the “Dee” (for the CCD).

The era before 1935 can be called the “unsuccessful age of mechanical television.” One can assume that the era from about 1935 until 200X(?) will be considered the successful age of analog television. We are on the precipice of a fundamental change in television to digital architecture and higher resolution. We can’t say how successful or long this age will last, but it is likely to be replaced by holographic, super-interactive, or some other form of visible home electronic imaging that is now hard to imagine.

A little history on recording is noteworthy. Ever since 1928, when mechanical television broadcast President Hoover’s acceptance speech, the recording of video became paramount in many video engineers’ minds. Kinescope was used to record video on film, and that became the mainstay of the television recording industry until about 1960. Charles Ginsberg led the research team at Ampex in developing the first practical videotape recorder in the early 1950s. The first ones sold in 1956 for $50,000, and they achieved a modicum of success in the 1960s with studios and industry. Sony introduced the videocassette recorder in 1971 and subsequently popularized the Betamax in the mid-1970s. Also in the late 1970s, Matsushita (parent company of JVC) developed the VHS format, which had slightly reduced quality compared to Betamax but could hold two (and later more) hours of video on a single cassette, allowing a typical Hollywood film to be recorded on a single tape. The VHS standard proliferated in the home entertainment and security markets. However, currently, VHS suffers from the fact that it is an all-analog architecture. Digital is all important, and many relevant standards evolved in the 1990s. Currently, MPEG2 is a generic standard allowing for up to 1.5 megabits per second as well as SMPTE292 (which defines several standards including a 60-Hz interlaced 1920 × 1080 digital video).
The camera recording the early television images was the vacuum tube-based vidicon. These devices employed coatings of phosphorous materials and produced analog signals. These cameras are bulky and fragile, require high voltages, and are difficult to calibrate. However, they were producible, manufacturable, stable, and provided adequate resolution with low-cost 1940s technology.

The charge-coupling principle was devised on October 19, 1969, by Willard Boyle and George Smith and was first described in a 1970 publication by them.3,4 The charge-coupled device (CCD) enabled solid-state video imaging. Strictly speaking, the CCD is not a detector but a readout architecture (Fig. 18.1). By a happy coincidence, both the visible wavelength detector and the readout electronics can be made from silicon devices in a single chip.
FIGURE 18.1 Typical front-side charge-coupled device architecture. (From www.Chem.vt.edu/chem-ed/optics.)
Photons are converted to electrons and holes in the depletion region of the bulk silicon. The charge migrates to a potential well in the CCD structure. The CCD moves this photogenerated charge in a “bucket brigade” from one well in a unit cell to the next by clocking the well potentials, then to wells in the adjacent unit cells, then to a shift register of CCDs that continue the transfer in the other dimension to the output lead pin. Classic CCDs use a high-resistivity n-type silicon substrate with a three-phase, triple-polysilicon gate and buried channels. An alternative technology soon developed to avoid this bucket brigade—the charge injection device or CID. These devices had some market appeal in the 1980s and early 1990s for spectrometers, trackers, and security systems. However, like Betamax, they failed in the marketplace. The CID’s market presence was largely defeated by the CCD’s lower cost resulting from larger, more universal markets. Figure 18.2 plots the acceptance of the CCD for the European Southern Observatory (an organization managing several observatories, see www.ESO.org). In the 1980s, the CCD became accepted as the visible focal plane of choice for most scientific imaging, and subsequently military imaging. It then became ubiquitous in professional television cameras and, eventually, consumer camcorders in the 1990s.
FIGURE 18.2 The demography of optical sensitive area devoted to observation at the European Southern Observatory, 1973–2000. Data were collected and plotted by G. Monnet (ESO). (From D. Groom, Recent Progress on CCDs for Astronomical Imaging, Proc. SPIE, Vol. 4008, March 2000.)
TABLE 18.1 Examples of Advanced CCDs*

First light  Camera       Format       Pixel size    Packing fraction  Format (chips)               Manufacturer or part number  Telescope
1998         CFH12K       12k × 8k     15 µm         98%               12 × (2k × 4k)               MIT/LL CCID20                CFHT
1999         Suprime-Cam  10k × 8k     15 µm         96.5%             10 × (2k × 4k)               SITe + MIT/LL†               SUBARU
1999         SDSS         12k × 10k    24 µm         ≈43%              30 × (2k × 2k)               SITe                         Apache Pt
1999         NOAO         8k × 8k      15 µm         98%               8 × (2k × 4k)                SITe                         CTIO 4-m
2000         DEIMOS       8k × 8k      15 µm         97%               8 × (2k × 4k)                MIT/LL CCID20                Keck
2000         MAGNUM       4k × 8k      15 µm         96%               4 × (2k × 4k)                Hamamatsu                    2 m, Haleakala
2000         WFI          8k × 8k      15 µm         95.6%             8 × (2k × 4k)                —                            MPG/ESO
2001         UW           —            —             —                 20 × (2k × 4k)               —                            ARC 3.5m
2002         OmegaCAM     16k × 16k    —             ≥80%              36 × (2k × 4k)               —                            VST
2002         MegaPrime    >16k × 18k   13–15 µm      >90%              ≥36 × (2k × 4k)              EEV CCD42-90                 CFHT
2002         Megacam      18k × 18k    13.5 µm       >90%              36 × (2k × 4.5k)             —                            SAO/MMT
2004‡        DMT          Annulus      13 µm         —                 1300 × (1k × 1k)             —                            DMT 8-m
2004‡        WFHRI_1§     36k × 36k    5 µm          —                 4 × (30 × 30) × (600 × 600)  MIT/LL                       ≈25 × 2.5 m
2006‡        SNAPsat      ≈10^9 pix    15 µm         83%               ≈250 × (2k × 2k)             LBNL**                       Satellite
2010‡        GAIA         ≈10^9 pix    9 µm × 27 µm  86%               ≈240 CCDs                    —                            ESA satellite

*Source: D. Groom, “Recent Progress on CCDs for Astronomical Imaging,” Proc. SPIE, Vol. 4008, Optical and IR Telescope Instrumentation and Detectors, March 2000.
†Presently 4 SITe ST-002A and 4 MIT/LL CCID-20. Will add two more MIT/LL to make a full array.
‡Proposed.
§This is for the focal plane in one of ≈25 telescopes in the WFHRI array. Each array consists of four chips, each a 30 × 30 array of 600 × 600 OTCCDs.
**Commercial foundry licensed by LBNL.
In the 1990s, the stalwart CCD saw developments through back-illumination and the incorporation of gain for high sensitivity and other simple signal processing. The market pull of digital video for the Internet, HDTV, and e-cinema spurred advancements regarding on-chip digitization, smaller pixels, and larger formats. Table 18.1 lists some advanced technology CCDs planned for astronomical applications. At the time of this writing, the CCD has become the ubiquitous visible sensor for almost every application, but it has a new-technology competitor. Newer CMOS active-pixel sensors (APSs) and focal planes have significant advantages over CCDs for high-definition imaging and in being able to incorporate complex image processing into the focal plane, plus they have increased radiation hardness. Their origins go back to 1968,6 and they are becoming widely used for scientific, surveillance, and military applications as well as HDTV cameras (Fig. 18.3). CMOS APS devices have a different architecture, allowing the device to have parallel access (like the CID), which allows one detector or any area on the chip to be selected for readout. The signal from the pixel is the difference between the potential on the photodiode before and after the photodiode is reset. These two potentials are stored at the bottom of the column capacitors. The voltages on the capacitors are differentially read out to produce a voltage proportional to the photocharge.7
The only limit in the programmability of the device is that the bandwidth of the readout (pixels per second) cannot be exceeded. If just a few pixels are chosen, they can be read at a higher rate than if the entire chip is read out. CMOS APSs tend to exhibit relatively large fixed pattern noise (FPN) as compared to CCDs; this is the result of threshold voltage and capacitance variations in the pixels. However, CMOS APSs can outperform the CCD for large formats, as CCDs exhibit other difficulties at large pixel counts. At present, security cameras use either CCDs or CMOS APSs, with the CCD having the largest market but APSs rapidly growing due to their on-focal-plane image-processing flexibility and better performance for large formats. These will likely replace the CCD for almost all applications and avoid suffering the fate of the CID. APSs are also attractive because they can be manufactured in the same foundry that makes memory chips or other common silicon devices, as they don’t require the overlapping polysilicon layers and nonstandard processing modules inherent to the CCD. Thus, the old CCD demands more “touch time” and can’t enjoy the production efficiency gains that have been shown by CMOS APS visible focal plane arrays.
FIGURE 18.3 Active pixel CMOS HDTV chip. (Courtesy of Rockwell Scientific.)
The reader is cautioned that many commercial CCDs and APSs perform an “on-chip” “pseudo-resolution” whereby a row of pixels (row 1) is added to the row below (row 2) to produce a TV line. Then the next line is composed of row 2 and the one below it (row 3), and so on. This makes determining resolution and sampling frequency difficult. Williams8 cautions that lines of resolution is a rather confusing term in the video and television world. This type of metric survives from the early days of analog television. It is poorly understood, and it is inconsistently measured and reported by manufacturers. But we’re stuck with it until all video is digital, at which time we might just possibly change the convention and start reporting resolution in terms of straight pixel counts (as is done in the infrared community). There are some common misconceptions. Lines of resolution is not the same as the number of pixels (either horizontal or vertical) found on a camera’s CCD, or on a digital monitor or other display such as a video projector, and it is not the same as the number of scanning lines used in an analog camera or television system such as PAL, NTSC, or SECAM, and so forth. Additionally, it is important to note that both NTSC and PAL systems are fundamentally analog in nature. Even if you digitize a PAL or NTSC signal from a digital CCD, you are digitizing an analog data stream with all its inherent limitations and noises, and you are typically limited to about 6 or 7 bits of actual useful data. Images may originate with a modern CCD, which can produce 12 to 14 bits of real data, but you’ll never truly recover all the lost and compressed information from the sloppy analog signal. Regardless of the solid-state digital advantages of silicon detectors, in this chapter, we also include some rules for image intensifiers (commonly called I2), photomultiplier tubes (PMTs), and microchannel plates (MCPs). All are serious scientific and military visible technologies and are still seeing wide use for niche markets. There are several books on semiconductor physics (such as by Sze9, listed below) that provide detailed technical discussions of the CCD. SPIE and IEEE proceedings and journals also have numerous papers concerning new research and engineering in visible imaging technology.
References
1. E. Schwartz, The Last Lone Inventor, HarperCollins, New York, p. 242, 2002.
2. www.Tvhistory.tv.hmtl, 2002.
3. J. Janesick, Scientific Charge Coupled Devices, SPIE Press, Bellingham, WA, p. 3, 2001.
4. W. Boyle and G. Smith, “Charge Coupled Semiconductor Devices,” Bell Systems Technical Journal, 49(587), 1970.
5. E. Schwartz, The Last Lone Inventor, HarperCollins, New York, p. 291, 2002.
6. P. Noble, “Self-Scanned Silicon Image Detector Arrays,” IEEE Transactions on Electron Devices, Vol. 15, pp. 202–209, December 1968.
7. S. Miyatake et al., “Transversal-Readout Architecture for CMOS Active Pixel Image Sensors,” IEEE Transactions on Electron Devices, 50(1), pp. 121–128, January 2003.
8. Private communication with George Williams, 2003.
9. S. Sze, Physics of Semiconductor Devices, John Wiley & Sons, New York, pp. 407–426, 1981.
10. WWW.Williamson-labs.com/ntsc-fink.htm, 2002.
11. http://inventors.about.com/library/inventors/blvideo.htm, 2002.
AIRY DISK DIAMETER APPROXIMATES f/# (FOR VISIBLE SYSTEMS) The diameter of the Airy disk (in micrometers) in the visible wavelengths is approximately equal to the f/# of the lens.
Discussion
This is based on diffraction theory and plugging in the appropriate values. However, as detailed below, this is valid only near visible wavelengths of 0.5 µm. The linear size of the first Airy dark ring is, in radius,

R = 1.22 fλ/D

where f = focal length of the optical system
λ = wavelength
D = aperture diameter

For the visible spectrum, λ is about 0.4 to 0.7 µm, and f/D is a small-angle approximation for the f/# of the telescope. Thus, the size of the first Airy ring is, in diameter, (2 × 1.22 × 0.5 × f/#) µm, which is nearly equal to the numerical value of the f/#, as the multiplication of all the numerals approximately cancels.
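The arithmetic is simple enough to sketch directly; the 0.5-µm wavelength below is the assumed mid-visible value used by the rule.

def airy_disk_diameter_um(f_number, wavelength_um=0.5):
    # Diameter of the first Airy dark ring: 2 * 1.22 * lambda * (f/#)
    return 2.0 * 1.22 * wavelength_um * f_number

for fno in (2, 4, 8):
    print(fno, round(airy_disk_diameter_um(fno), 1))   # ~2.4, 4.9, 9.8 um -- roughly the f/# itself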
CCD SIZE Like plumbing pipe, the actual active size of a visible sensor array is only roughly approximate to its nominal (literally, “named”) size. Although the formats vary with time and from one supplier to another, the user can assume the following crude approximations:
Format     Horizontal (mm)   Vertical (mm)   Diagonal (mm)
1/6 inch   2.5               1.8             3.1
1/4 inch   3.6               2.7             4.5
1/3 inch   4.8               3.6             6
1/2 inch   6.4               4.8             8
2/3 inch   8.9               6.6             11
Discussion
Unfortunately, the “format size” of a visible sensor (be it a CCD, CID, or CMOS active pixel sensor) has little relationship to its actual size. Historically, formats were defined based on ancient vidicons and are meant to be exchangeable with such video tubes (although the authors can’t imagine anyone doing that today). The size roughly approximates the diagonal dimensions of the active area of the chip. Moreover, the actual chip’s imaging area may be substantially smaller than the chip, as there frequently is nonactive area. Some manufacturers include a microlens for every pixel, resulting in a fill factor approaching 1. The above table gives the reader general sizes of the imaging area of the chip for popular formats. Unfortunately, these will vary slightly from manufacturer to manufacturer. There is no universal standard, but the above guidelines give the user a good place to start.
CHARGE TRANSFER EFFICIENCY RULES
1. Charge transfer efficiency (CTE) usually improves as the temperature is lowered.
2. CTE decreases as accumulated total dose (in rads) increases.
3. CTE is generally around 0.997 for commercial devices and as high as 0.999999 for scientific devices.
Discussion A charge-coupled device (CCD) operates by transferring the charge in one pixel across a row through the other pixels and eventually to a shift register. A large CCD may transfer some of the charge several thousand times before it reaches an amplifier. High efficiency of the transfer is critical to prevent shadowing effects and reduced SNR. Several things can happen to the lost electrons. They can be recombined with a hole, reducing signal, or they can be left behind and end up as signal in the neighboring pixel. As a CCD is cooled, its transfer efficiency usually increases, and its inherent noise decreases. This is why most very large and high-sensitivity CCDs operate at reduced temperatures. As CCDs are exposed to nuclear radiation, their performance decreases. There is a total dose deleterious effect. Insulating oxides break down, and shorting may occur. Additionally, the wells tend to fill up with noise-generated electrons, and the charge transfer efficiency is reduced. CMOS structures do not have this problem but are otherwise adversely affected when exposed to radiation. The effects are nonlinear and vary along the shift register, so care should be exercised in using these rules. Generally, there is a decrease in MTF that varies along the array.
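The practical impact of these CTE values is easiest to see numerically; the transfer count below is an assumed example for a large-format device.

def charge_retained(cte, transfers):
    # Fraction of a pixel's charge that survives the given number of transfers
    return cte ** transfers

for cte in (0.997, 0.99999, 0.999999):   # commercial to scientific grade
    print(cte, f"{charge_retained(cte, 2000):.4f}")   # after ~2000 transfers

A commercial-grade CTE of 0.997 is clearly adequate only for short transfer paths; a multi-thousand-pixel path needs the five- or six-nines figure.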
CMOS DEPLETION SCALING According to Williams,1 CMOS manufacturing processes are being driven below submicron scale, and the depletion region in CMOS imagers scales as the square root of the process voltage and the doping concentration. This has the effect of reducing the (long wavelength) red response of these imagers.
Discussion
Reference 2 states that the absorption of photons in silicon can be modeled by

I(λ,x) = I0(λ)e^(–α(λ)x)

where I(λ,x) = photon flux of wavelength λ at depth x in the silicon
α = wavelength-dependent absorption coefficient

The reader will note that this is a form of Beer’s law (described elsewhere in this book), and the absorption coefficient (α) decreases as wavelength increases. Thus, on average, shorter wavelengths generate electron-hole pairs closer to the surface of the silicon than do red wavelengths, and the depth of the p-n junction influences the spectral response. With the above in mind, as CMOS processes shrink in feature size according to Moore’s law, and process voltages decrease, the CMOS active pixel sensor (APS) imager red response is reduced. The resultant reduction of oxide thickness lowers the threshold voltage, which must be compensated for by increasing the diffusion doping in the channel, drain, and source. As the rule indicates, this is difficult, as it increases only as the square root of
the process voltage and the doping concentration (not a very strong function). Migration from about 1- to 0.25-µm photolithography results in a need to increase the CMOS doping concentration by an order of magnitude to maintain the quantum efficiency. However, as the photons must be collected within a depletion region, and as red photons are absorbed between 10 and 100 µm deep in the silicon, the red response is reduced. Typically, the CMOS process limits this depletion region to 1 to 3 µm, limiting spectral coverage to less than 0.7 µm. Full depletion of a standard low-resistivity silicon substrate is not technically feasible. Therefore, the technical developments for expanding the wavelength sensitivity of scientific silicon detector arrays have focused on high-resistivity substrates. Some MOS developments are of the deep-depletion type. In these devices, partial depletion of the substrate is achieved to depths of typically 40 to 80 µm. Such devices must still be thinned to 40 to 50 µm to eliminate the free region between the depletion layer and the backside. Thinning unfortunately undermines the long wavelength sensitivity. Figure 18.4 illustrates the great change in absorption lengths across the spectrum of silicon. Below about 400 nm, Beer’s law breaks down as surface effects dominate. Above about 900 nm, transparency dominates. Don Groom (the author of Ref. 3 and originator of the figure) likes to point out that this is the most important figure in his CCD/CMOS talks. He adds that the dashed curves approximate the theory, and the solid curves are experimental.
FIGURE 18.4 Absorption length in the depletion region increases for longer (red) wavelengths. The dashed curves are calculated from the phenomenological fits by Rajkanan et al. Absorption length of light in silicon is represented by the solid curve. Except at wavelengths approaching the bandgap cutoff at 1100 nm, essentially all absorbed photons produce electron-hole pairs. The sensitive region of a conventional silicon detector is a 20-micron-thick epitaxial layer, while in high-resistivity silicon the fully depleted 300-micron substrate may be active. (From Ref. 3.)
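To make the wavelength dependence concrete, the sketch below applies the exponential absorption law to a shallow depletion region. The absorption lengths are rough, assumed room-temperature values for silicon chosen for illustration; they are not read from Fig. 18.4.

import math

def fraction_absorbed(depletion_depth_um, absorption_length_um):
    # 1 - exp(-x/L), from I(x) = I0*exp(-alpha*x) with L = 1/alpha
    return 1.0 - math.exp(-depletion_depth_um / absorption_length_um)

# Assumed, approximate absorption lengths in silicon
absorption_length_um = {"blue (~450 nm)": 0.4, "green (~550 nm)": 1.5,
                        "red (~650 nm)": 3.5, "near-IR (~850 nm)": 15.0}

for band, length in absorption_length_um.items():
    print(band, f"{fraction_absorbed(3.0, length):.2f}")   # 3-um depletion depth

With a 3-µm depletion depth, essentially all the blue light is collected but only roughly half of the red and a small fraction of the near-IR, which is the loss this rule describes.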
References 1. Private communications with George Williams, 2003.
2. K. Findlater et al., “A CMOS Image Sensor with a Double-Junction Active Pixel,” IEEE Transactions on Electron Devices, 50(1), pp. 32–42, January 2003.
3. D. Groom, “Recent Progress on CCDs for Astronomical Imaging,” Proc. SPIE, Vol. 4008, Optical and IR Telescope Instrumentation and Detectors, pp. 48–70, March 2000.
4. K. Rajkanan, R. Singh, and J. Shewchun, “Absorption Coefficient of Silicon for Solar Cell Calculations,” Solid-State Electronics, Vol. 22, pp. 793–795, 1979.
5. S. Holland et al., “Fully Depleted, Back-Illuminated Charge-Coupled Devices Fabricated on High-Resistivity Silicon,” IEEE Transactions on Electron Devices, 50(1), pp. 225–238, January 2003.
CORRELATED DOUBLE SAMPLING Correlated double sampling (CDS) is a method employed to improve the signal-to-noise ratio of integrating image sensors. By subtracting a pixel’s dark or reference output level from the actual light-induced signal, static fixed pattern noise and several types of temporal noise are effectively removed from the sensor’s output.
Discussion
In an optical sensor, the photo charge is generally collected in a capacitor. The signal amplitude is read as the voltage on that capacitor (V = Q/C). With the CDS procedure, the signal voltage Vs = Qs/C is compared with the “dark,” “empty,” or “reset” level voltage, Vr = Qr/C, that is obtained when all charges of C have been channeled off to a fixed potential. Thus, for each pixel, the final output is V = Vs – Vr = (Qs – Qr)/C. Spatial and temporal noises that are common to Vr and Vs disappear from the result. Thus, the following noises almost disappear:
■ kTC noise (or reset noise) of the photodiode’s capacitance, on the condition that this capacitance is not reset in between measuring Vs and Vr. If the capacitor is reset in between the two sampling instants, their noises are uncorrelated, and kTC noise persists. (Sometimes this method of readout is called double sampling, DS, in contrast to CDS. Removal of kTC noise is the main reason most people employ CDS.)
■ 1/f noise
■ Dark level drifts
But the following noise sources are not mitigated, and might even be promoted, by CDS:
■ Second-order effects resulting from pixel gain nonuniformity or nonlinearity are not compensated.
■ Uncorrelated temporal white noise originating from before the differencing operation, such as broadband amplifier noise, is multiplied by a factor of 1.4 by the differencing operation.
■ All of the downstream noise sources, such as electromagnetic interference (EMI), digitization, system noise, discretization noise, and so on are not affected.
■ Low-frequency MOSFET noise (1/f noise, flicker noise) is reduced only by a factor that is the logarithm of the associated reduction in bandwidth, typically a factor of not more than 1 to 3. In the literature, the reduction of 1/f noise is typically overestimated or not recognized as such, as the 1/f noise after CDS or DS appears to be “white,” which is the result of aliasing effects.
■ Signal noise, such as optical shot noise, is in principle not affected by CDS.
CDS was developed by McCann and White of Westinghouse in the 1970s to reduce the reset noise (kTC).1 When the reset switch is operated, there is a residual charge uncertainty that is an inherent noise of the CCD architecture. The rms value of this noise charge is (kTC)^1/2, where k is Boltzmann’s constant, T is the temperature, and C is the capacitance.
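A small numeric sketch of that kTC term follows; the 5-fF sense-node capacitance is an assumed example value.

import math

K_BOLTZMANN = 1.381e-23   # J/K
Q_ELECTRON = 1.602e-19    # C

def ktc_noise_electrons(capacitance_f, temperature_k=300.0):
    # RMS reset (kTC) noise charge, sqrt(kTC), expressed in electrons
    return math.sqrt(K_BOLTZMANN * temperature_k * capacitance_f) / Q_ELECTRON

print(round(ktc_noise_electrons(5e-15)))          # ~28 e- rms at room temperature
print(round(ktc_noise_electrons(5e-15, 200.0)))   # cooling helps only as sqrt(T): ~23 e-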
References
1. J. Hall, “Arrays and Charged Coupled Devices,” Applied Optics and Optical Engineering, R. Shannon and J. Wyant, Eds., Academic Press, New York, pp. 373–375, 1980.
2. Private communications with George Williams, 2003.
3. WWW.CCD.com, 2003.
DOMINATION OF SPURIOUS CHARGE FOR CCDS As CCDs are clocked faster, spurious charge noise dominates over charge transfer inefficiencies.
Discussion
This is becoming more dominant especially with HDTV because of the large numbers of pixels requiring rapid clocking rates and the fact that spurious charge increases linearly with the number of transfers. Although an academic curiosity in the past, this noise source is becoming a serious issue with imagers as pixel numbers increase. Spurious charge (mechanism explained below) increases exponentially with the leading edge of the clock rise and clock swing (the change in timing in the intricate CCD clock cycle). The faster the change in clock rise, the more spurious charge results. Assuming the gate voltages are fixed, “wave shaping” the gate clocks and allowing approximately five time constants on the clock overlaps between phases of the CCD readout process will reduce spurious charge (as it allows the holes to return to the channel stops under a lower potential). When CCDs are clocked into inversion, minority carriers (holes) migrate from the channel stops and collect beneath the gate. This results in “pinning” the surface to substrate potential (they are both at the same potential). This process occurs very quickly in CCDs (on the order of a few tens of nanoseconds). Some of the holes become trapped in the Si-SiO2 interface. When the clock is switched to the noninverting state to transfer charge, the trapped holes are accelerated out of the Si-SiO2 interface. To make matters worse, some holes are released with sufficient energy to create additional electron-hole pairs by colliding with silicon atoms (called impact ionization). All of these contribute to spurious noise. Fast-moving, high-amplitude clocks increase the amount of impact ionization caused by the electric fields involved. It is important to note that spurious charge is generated only on the leading edge of the drive clock transition, when the phase assumes a noninverting state. Experiments have shown that the falling edge has no effect on spurious charge. Furthermore, and unfortunately, impact ionization has been shown to increase at low temperatures. In short, spurious charge increases exponentially as the clock rise time is shortened and the clock swing is increased; slowing the clock edges gives the holes time to return to the channel stops under a lower potential.
References
1. Private communication with George Williams, 2003.
2. J. Janesick, Scientific Charge-Coupled Devices, SPIE Press, Bellingham, WA, pp. 649–654, 2001.
EQUIVALENT ISO SPEED OF A SENSOR

ISO ≈ 0.8/Em

Assume Em to be the noise floor (see below) of the sensor that you are using.
Discussion
Williams1 points out that, for film, Em is officially defined as the intersection of the density/exposure curve and the base fog and is given in lux. Assuming that the “base fog” is roughly equivalent to the noise floor of a visible sensor, one can substitute accordingly. To paraphrase Ref. 2, assuming that a signal-to-noise equivalent of 3 is required for raw detection, then a signal-to-noise equal to 10 dB (or an SNR of 3) is the threshold sensitivity of a visible sensor. This roughly corresponds to a flux of 3 × 10^–4 lux (assuming a 1/30-sec exposure and appropriate spectral weighting). A readout noise of five electrons rms is assumed, and it is assumed that the detectors exhibit no fixed pattern noise or charge transfer inefficiency. Converting units, we find that an equivalent base fog of 1 × 10^–5 lux-seconds (which describes the conditions above) is determined for a back-illuminated sensor. An equivalent ISO speed of between 49,000 and 107,000 is determined for high-end CCDs. The reader can compare this with typical high-speed film, which is about 400. Of course, this applies only to a cooled CCD with CDS. Incidentally, the ISO numbers of film (e.g., 200, 400, and 1000) indicate how efficiently the film reacts to light. The higher the number, the quicker the film will form an image at a given light level. This effect is roughly linear between time and ISO rating, as a film with an ISO 400 rating reacts twice as quickly to the same light as ISO 200 film, and so on.3
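The conversion itself is a one-liner; the sketch below simply applies the rule to the noise-floor figure quoted in the discussion and, as a sanity check, inverts it for ISO 400 film.

def equivalent_iso(noise_floor_lux_seconds):
    # ISO ~ 0.8 / Em
    return 0.8 / noise_floor_lux_seconds

print(round(equivalent_iso(1e-5)))        # ~80,000 for the back-illuminated CCD example above
print(round(equivalent_iso(0.8 / 400)))   # recovers 400 for film with Em = 0.002 lux-seconds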
References
1. Private communications with George Williams, 2003.
2. G. Williams, H. Marsh, and M. Hinds, “Back-Illuminated CCD Imagers for High Information Content Digital Photography,” Proc. SPIE, 1998.
3. http://photographytips.com/page.cfm/268.
HOBBS’ CCD NOISES
1. A commercial-grade CCD operating at room temperature has a dark current of around 100 electrons per pixel in 1/30 sec integration time.
2. A good uncooled CCD camera has a readout noise of about 30 electrons per readout.
3. A cooled CCD can have noise as low as five electrons per pixel with correlated double sampling.
4. A higher-grade cooled CCD has a dark current of around one electron per second per pixel and an rms readout noise of about five electrons.
5. A scientific-grade, astronomical, cooled multiphased pinned CCD can have dark current below one electron per pixel per second, but the noise is bandwidth dependent, and the bandwidth is smaller for most typical astronomical designs.
Discussion The basic CCD is a linear array of MOS (metal-oxide semiconductor) diodes, which acts as an analog memory and shift register. Electrons are moved across a line of potential wells
by synchronously varying the potential in the wells associated with each detector location. For example, as one well goes low and its neighbor goes positive, the electrons migrate to the more positive well. This occurs for the next well, and next, and so on. This moving of charge was frequently called a bucket brigade, as it is analogous to the volunteer fire-fighting technique of moving water in a straight line. As in the bucket brigade, charge sometimes slops out and is lost, leading to a charge transfer efficiency of less than 1. Two-dimensional arrays are composed of a series of these linear CCD shift registers reading out the image in one dimension (e.g., just rows or columns) to another (nonphotoactive) linear CCD, which then acts like a shift register in the other dimension. The above rules are based on the state of the art for low-noise MOS CCD imagers. Much of the noise in a CCD is thermally generated, so cooling reduces the noise (see associated rules in this chapter). These rules should approximately apply to the dark current and readout noise of CMOS APS imagers as well. On a per-pixel basis, CMOS imagers tend to be noisier but, with special designs and extra cooling, they approach the performance of CCDs, and their performance may even be better for large pixel counts. The CMOS products are also becoming popular, as they can be manufactured on a standard semiconductor device or memory production line. Correlated double sampling eliminates some of the thermal fluctuations of the reset voltages. This is done simply by sampling the output before and after each readout and subtracting one from the other (see associated rule in this chapter). The multiphase pinning mentioned above acts to reduce or eliminate surface states, resulting in exquisitely low dark current. Surface states cause charge to collect at the surface or metal-semiconductor interfaces. The surface energy level associated with the surface state specifies the level below which all surface states must be filled for charge neutrality at the surface.4 High surface states result in higher noise and higher potentials.
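As a small illustration of why the charge transfer efficiency (CTE) mentioned above matters more as arrays grow, the fraction of a pixel's charge that survives the bucket brigade falls off as CTE raised to the number of transfers; the CTE values and transfer count below are assumptions for illustration only:

```python
# Fraction of charge surviving n transfers is CTE**n; a corner pixel of a
# roughly 2000 x 1000 array undergoes on the order of 3000 transfers.

def surviving_fraction(cte: float, transfers: int) -> float:
    return cte ** transfers

for cte in (0.99999, 0.999999):
    print(f"CTE = {cte}: {surviving_fraction(cte, 3000):.4f} of the charge remains")
# 0.99999 -> ~0.97; 0.999999 -> ~0.997
```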
References 1. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, pp. 39 and 109, 2000. 2. www-inst.eecs.Berkeley.edu, 2003. 3. www.ccd.com, 2003. 4. S. Sze, Physics of Semiconductor Devices, John Wiley & Sons, New York, pp. 407–426, 1981.
IMAGE INTENSIFIER RESOLUTION 1. Modern image intensifiers (I2) can produce images with resolution of about 60 line pairs per millimeter (lp/mm).1 2. Also, by approximate generations,2
Generation | Approximate years of production | Multichannel plate pitch | Nyquist limit
1 | Mid 1970s to early 1980s | 14 to 15 µm | 33 to 36 lp/mm
2 | Mid 1980s to early 1990s | 10 to 12 µm | 42 to 50 lp/mm
3 | Mid 1990s to mid 2000s | 6 µm | 83 lp/mm
3. Bender2 also states that, throughout these periods, the overall resolution has consistently tracked the MCP’s Nyquist limit at about 80 percent of the Nyquist.
Discussion Image intensifiers come in several architectures, generally called generations, or Gen 1, Gen 2, Gen 3, and so on. Gen 1, as described in Reference 3, “refers to image intensifiers that do not use a microchannel plate (MCP) and where the gain is usually no greater than 100 times.” Gen 2 devices employ MCPs for electron multiplication. “Types using a single-stage MCP have a gain of about 10,000, while types using a 3-stage MCPs offer a much higher gain of more than 10 million.”3 Third-generation devices use semiconductor photocathodes (e.g., GaAs) and come in filmed and unfilmed types. The “film” is an ion barrier that stops ions from flying back into the photocathode. The unfilmed types are more resistant to some types of damage and are more sensitive. An 18-mm image intensifier assembly can produce 2160 resolution elements, or 1080 cycles across its diameter, and an 11-mm unit can produce 1320 resolution elements or 660 cycles. Generally, image intensifiers are wide-angle devices (10 to 60°), and they require an optic with an f/# of less than 4 or 5 to provide sufficient signal-to-noise to practically accomplish this level of resolution. This rule assumes that the intensifier is the limiting resolution factor. Obviously, if the system in which it is employed has jitter or optical resolution less than the above, the potential resolution will never be achieved. Early image intensifiers also had significantly less resolution, perhaps as low as 15 to 20 lp/mm. They also suffered from much higher levels of blooming and lower scene dynamic range. Although these problems are still fundamental to image intensifiers, they are greatly improved in newer-generation devices.
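The 18-mm and 11-mm examples above follow directly from the 60-lp/mm figure; a short sketch of that arithmetic (the tube diameters are simply those quoted in the discussion):

```python
# Cycles across the tube = (lp/mm) * diameter in mm; each cycle (line pair)
# spans two resolution elements.

def intensifier_resolution(lp_per_mm: float, diameter_mm: float):
    cycles = lp_per_mm * diameter_mm
    return cycles, 2 * cycles

for d in (18.0, 11.0):
    cycles, elements = intensifier_resolution(60.0, d)
    print(f"{d:.0f}-mm tube: {cycles:.0f} cycles, {elements:.0f} resolution elements")
```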
References 1. J. Hall, “Characterization and Calibration of Signal-Generating Image Sensors,” ElectroOptical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, p. 7-4, 2000. 2. E. Bender, “Present Image-Intensifier Tube Structures,” Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, 2000, p. 5-33. 3. Hamamatsu Corp., “Image Intensifiers,” available at www.hamamatsu.com, 2003. 4. Private communications with George Williams, 2003. 5. R. Jung, “Image sensor technology for beam instrumentation,” available at www.slac. stanford.edu/pubs/confproc/biw98/jung.pdf, 2003. 6. http://usa.hamamatsu.com, 2003. 7. www.ITTnv.com, 2003.
INCREASE IN INTENSIFIER PHOTOCATHODE EBI WITH TEMPERATURE The equivalent background input (EBI) of a Gen 3 image intensifier (I2) generally doubles for every 2.5 to 5°C rise in photocathode temperature.
Discussion EBI rises with temperature and can become the dominant noise source, ultimately limiting I2 sensor low-light sensitivity for high f/# systems. When used in military applications, image intensified sensors must be capable of operating across a wide ambient temperature range, typically from –40 to +55°C. I2 CCD sensors are sometimes embedded in larger systems, often surrounded by other electronics and mechanical housings, further exacerbating the temperature increase of the photocathode. Photocathode temperature rises of +15°C above ambient are not uncommon and must be accounted for during initial system
design, modeling, and operation. As the temperature rises, the thermal “dark current” noise associated with the photocathode microchannel plate may become the dominant noise level that limits sensor performance in terms of minimum detectable signal level and measurable scene dynamic range. The detrimental impact on system performance generally occurs in very low-light conditions, as that is where the EBI becomes dominant. Furthermore, high-f/# systems are more susceptible to EBI levels, as the transmitted scene illumination (measured at the photocathode image surface) is lower than with faster (low-f/#) systems. High-performance I2 CCD sensors can employ thermoelectric cooling (TEC) devices to keep the temperature at a desired level.
LOW-BACKGROUND NE∆Q APPROXIMATION
A high-quality CCD and good-quality telescope will result in a noise equivalent photon flux density (NE∆Q) of

NE∆Q ≈ (3/Ω) [ (Nd + FBΩ)/ti + Nr²/ti² ]^0.5

where Ω = solid angle of the detector pixel (in steradians)
Nd = dark current noise in electrons per second
FB = background flux in photons per second per steradian (assume roughly 3.3 × 10^11 for low backgrounds)
ti = integration time
Nr = readout noise in electrons
Discussion The above equation assumes that the combined in-band quantum efficiency of the optics and detector is 0.5, which is achievable but quite good. A high-quality astronomical visible detector has a dark current of about one electron per second per pixel and a readout noise of less than five electrons per readout. CMOS devices tend to have a higher noise value but don’t suffer from transfer inefficiency, which may be an issue for large HDTV CCD arrays.
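A minimal sketch of the approximation as reconstructed above; the pixel solid angle and integration time in the example are assumptions chosen only to exercise the expression:

```python
import math

def ne_delta_q(omega_sr, dark_e_per_s, background_photons_per_s_sr,
               t_int_s, read_noise_e):
    # Shot-noise terms (dark current plus background) and the read-noise term
    # are combined in quadrature, then scaled by 3/Omega per the rule.
    variance = (dark_e_per_s + background_photons_per_s_sr * omega_sr) / t_int_s \
               + (read_noise_e / t_int_s) ** 2
    return (3.0 / omega_sr) * math.sqrt(variance)

# Example: 10-urad square pixel (1e-10 sr), 1 e-/s dark current, low
# background (3.3e11 photons/s/sr), 1-s integration, 5 e- read noise.
print(f"{ne_delta_q(1e-10, 1.0, 3.3e11, 1.0, 5.0):.2e} photons/s/sr")
```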
References 1. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, pp. 38–39, 2000.
MICROCHANNEL PLATE NOISE FIGURE AND NOISE FACTOR
1. A microchannel plate (MCP) noise figure (Nf) can be defined as follows:1

Nf = 1.03 (Sp)^0.5 / SNR
where Nf = noise figure (not factor)
Sp = sensitivity of the photocathode (generally in terms of microamperes per lumen)
SNR = tube signal-to-noise ratio

2. The noise figure for a channel electron multiplier is1,2

Nf = (1/η)^0.5 (2 + 1/d)^0.5

where η = effective quantum efficiency (photoelectron detection efficiency)
d = first strike yield in electrons; if not known, assume a number between 3 and 4

3. The noise figure (Nf) in decibels is related to the noise factor (Fn) by3

Nf = 20 log10(Fn) = 20 log10(1/√Pf) = 10 log10(1/Pf)

where Pf = fill factor, or the fraction of the microchannel plate surface area that collects photoelectrons
Fn = noise factor, or the factor by which the noise appears to increase as a result of the fill factor being less than unity
Discussion A microchannel plate is an array of curved, hollow, tube-like channels coated with a material that provides electron amplification (typically, an amplification of two or three electrons per bounce). A microchannel plate can provide a two-dimensional intensified image. Microchannel plates need to be used with a photoemissive surface to provide the initial photoelectron that enters the curved tube and becomes amplified. The major contribution to noise from the microchannel plate is a result of its amplification process. A noise figure is generally defined as the input SNR divided by the output SNR. In optical and electronics applications, it is common to use the power SNR. An exception is the astronomical community, which usually uses electrical current SNR, which is the square root of the power SNR. That choice seems to apply here as well. MCPs can yield a per-pixel SNR dominated by the background and quantum efficiency such that SNR ≤ ηN, where η = quantum efficiency of the photocathode and N = average number of incident photons (Ref. 4). For imaging applications, the nonuniformity can also limit the SNR, and the reader is urged to consider the scene SNR as described in a rule in Chap. 16, “Systems.”
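The three expressions above can be exercised with a few representative numbers; the photocathode sensitivity, tube SNR, quantum efficiency, first-strike yield, and fill factor below are illustrative assumptions only, and the formulas follow the reconstructions given in this rule:

```python
import math

def nf_from_tube(sensitivity_uA_per_lm, tube_snr):
    return 1.03 * math.sqrt(sensitivity_uA_per_lm) / tube_snr

def nf_channel_multiplier(eta, first_strike_yield):
    return math.sqrt((1.0 / eta) * (2.0 + 1.0 / first_strike_yield))

def nf_db_from_fill_factor(fill_factor):
    return 10.0 * math.log10(1.0 / fill_factor)

print(nf_from_tube(1800.0, 21.0))        # ~2.1 for a high-sensitivity tube
print(nf_channel_multiplier(0.3, 3.5))   # ~2.8
print(nf_db_from_fill_factor(0.6))       # ~2.2 dB
```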
References 1. E. Bender, “Present Image-Intensifier Tube Structures,” Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 5–32. 2. P. Csorba, Image Tubes, Sams, Indianapolis, IN, 1985. 3. E. Dereniak and D. Crowe, Optical Radiation Detectors, John Wiley & Sons, New York, pp. 124–126, 1984.
4. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, and SPIE Press, Bellingham, WA, pp. 98–101, 2000. 5. http://www.laidback.org/~daveg/academic/labreports/rep3/PMT.html, 2003.
NOISE AS A FUNCTION OF TEMPERATURE The dark current in visible silicon detector arrays doubles for every 8 or 9°C increase in temperature.
Discussion This is why scientific and high-performance visible imagers typically have a thermoelectric cooler to reduce the operating temperature by 20 to 40°C. As the detector’s temperature is increased, the dark noise increases. This rule is dependent on the state of the art (which changes) and assumes normal silicon focal planes. Additionally, scaling should be limited to ±40°C about the normal operating temperature. Clearly, a temperature is reached for any material in which additional cooling provides no additional system-level sensitivity. When a CCD (or an avalanche photodiode) is cooled, the noise decreases, causing its overall sensitivity to improve. Reducing dark noise by a factor of 2 will lead to an increase in sensitivity of √2. Other benefits from additional cooling may be increased uniformity, longer wavelength response, and the ability to integrate longer. However, eventually, a temperature will be reached at which further cooling provides minimal gains as the total noise becomes dominated by background noise, spurious noise, or other sources. Additionally, multiplexer and bias circuitry may fail if operated at temperatures colder than their design limit, and carrier freeze-out will occur at liquid nitrogen temperatures, reducing SNR. Some noise sources (e.g., spurious noise) actually increase with reduced temperature. Finally, sensitivity versus temperature is not linear. It is a curve, so don’t overuse this rule.
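A sketch of the doubling rule, using an assumed 8.5°C doubling interval and the room-temperature dark current from the Hobbs rule earlier in this chapter; the same form applies to the intensifier EBI rule above with its 2.5 to 5°C interval:

```python
# Dark current roughly doubles for every 8-9 deg C of warming (silicon),
# so a temperature change delta-T scales it by 2**(delta_T / doubling_interval).

def scaled_dark_current(i_ref_e, t_ref_c, t_c, doubling_interval_c=8.5):
    return i_ref_e * 2.0 ** ((t_c - t_ref_c) / doubling_interval_c)

# Cooling from 20 C to -20 C with a thermoelectric cooler,
# starting from ~100 e-/pixel per 1/30-s frame at room temperature:
print(scaled_dark_current(100.0, 20.0, -20.0))   # ~4 e-/pixel per frame
```

Remember the caveat above: the scaling should not be pushed more than a few tens of degrees from the normal operating point.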
NOISE EQUATIONS FOR CMOS APSS AND CCDS
1. The noise from a CMOS APS focal plane array (in terms of electrons) can be calculated as follows:

Qn = [ HRt/q + Il t/q + (En √(∆fn) Cs/q)² + (In √(∆fn) Cs/(q gm))² + (FPN)² ]^0.5     (1)

where Qn = rms noise charge in number of electrons per sample at the output
H = irradiance on the FPA in watts per square meter
R = responsivity in amperes per watt from each pixel element
t = frame time
q = electronic charge in coulombs
Il = leakage current in amperes from each element
En = noise voltage density of the on-chip source follower transistor in volts per root hertz, referred to the diode node
∆fn = effective noise bandwidth of the source follower and following op-amp together, in hertz
Cs = capacitance of the diode node to ground in farads
In = noise current density of the off-chip op-amp in amperes per root hertz
gm = transconductance of the source follower transistor in amperes per volt
FPN = residual fixed pattern noise (in electrons)

2. The noise from a CCD is a modification of the above, as follows:

Qn = [ HRt/q + Il t/q + (En √(∆fn) Cs/q)² + (In √(∆fn) Cs/(q gm))² + (Nt)² ]^0.5     (2)

where Nt = transfer noise (especially important for surface-channel devices, and negligible for the more common buried-channel devices)
Discussion These equations represent a root sum of squares (RSS) of the major noise sources for CCD and APS visible focal plane arrays. Equation (1) assumes either that correlated double sampling (CDS) is not employed or that, if it is employed, it is imperfect; the FPN term allows for the residual leakage. FPN here is the residual spatial fixed pattern noise remaining after the application of whatever algorithm is employed to mitigate it. Equation (2) assumes correlated double sampling to remove the thermal fluctuation of the reset voltage (reset noise, sometimes called kTC noise) and any fixed pattern noise. In a surface-channel CCD, the signal is moved along the surface and is limited by the effects of interface traps that add a transfer noise and reduce the transfer efficiency. In a buried-channel CCD, the charges are confined to a channel below the surface, increasing transfer efficiency and eliminating interface trapping. For buried-channel CCDs, transfer noise (Nt) is not an issue, and that term disappears. Likewise, that term is not an issue for CMOS APS, as they require only one charge transfer to read out the signal. Typical commercial CCDs have a few hundred noise electrons per sample, whereas high-grade thermoelectrically cooled devices can have noise below a couple of tens of electrons per second. Advanced and expensive scientific visible detector arrays exhibit noise of a few electrons per second. Reference 1 points out that settling time can become a concern for large-format HDTV CCDs, and correlated double sampling cannot occur and still meet the required data rates. Fixed pattern noise is a critical noise source for CMOS APS FPAs. Although significant improvements in this noise source have been made in the 2000s, it can be a dominant noise source for some applications. Reference 2 states, There are two types of FPN for CMOS APSs. One originates from the pixel-to-pixel variation in dark current and source follower threshold voltage and the other from column to column variation in column readout structures. The former may become invisible in the future due to process improvements.
Additionally, Ref. 3 cautions that, although correlated double sampling can reduce FPN noise in CMOS, There always remains some residue to this FPN, which can be important if no care is taken. To lower it, it is important to model and quantify FPN as a function of the design parameters, which includes the layout level and technology matching parameters given by the foundry.
The interested reader is also referred to the other related rules in this book relating to correlated double sampling, RSS of noise sources, and the noise bandwidth.
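A minimal sketch of Eq. (1) as reconstructed above (for the CCD form of Eq. (2), substitute the transfer-noise term for the fixed-pattern term); the operating values in the example call are assumptions only, and the pixel photocurrent stands in for the HR product:

```python
import math

Q = 1.602e-19  # electronic charge, coulombs

def fpa_noise_electrons(signal_current_a, leakage_current_a, frame_time_s,
                        en_v_per_rthz, in_a_per_rthz, noise_bw_hz,
                        node_cap_f, gm_a_per_v, pattern_noise_e):
    shot_signal = signal_current_a * frame_time_s / Q     # HRt/q, in electrons
    shot_dark = leakage_current_a * frame_time_s / Q      # Il*t/q, in electrons
    amp_e = en_v_per_rthz * math.sqrt(noise_bw_hz) * node_cap_f / Q
    opamp_e = in_a_per_rthz * math.sqrt(noise_bw_hz) * node_cap_f / (Q * gm_a_per_v)
    return math.sqrt(shot_signal + shot_dark + amp_e**2 + opamp_e**2
                     + pattern_noise_e**2)

# 1 fA photocurrent, 0.1 fA leakage, 1/30-s frame, 30 nV/rt-Hz follower,
# 1 pA/rt-Hz op-amp, 5-MHz bandwidth, 10-fF node, 100-uS gm, 10 e- FPN:
print(fpa_noise_electrons(1e-15, 1e-16, 1/30, 30e-9, 1e-12, 5e6,
                          10e-15, 1e-4, 10.0))            # ~19 e- rms
```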
References 1. J. Hall, “Characterization and Calibration of Signal-Generating Image Sensors,” ElectroOptical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, Bellingham, WA, pp. 7-55 to 7-57, 2000. 2. S. Miyatake, et al., “Transversal-Readout Architecture for CMOS Active Pixel Image Sensors,” IEEE Transactions on Electron Devices, 50(1), pp. 121–129, January 2003. 3. A. Afzalian and D. Flandre, “Modelling of the Bulk versus SOI CMOS Performances for the Optimal Design of APS Circuits in Low-Power Low-Voltage,” IEEE Transactions on Electron Devices, 50(1), pp. 106–110, January 2003. 4. J. Hall, “Arrays and Charge-Coupled Devices,” Applied Optics And Optical Engineering, Vol. 8, R. Shannon and J. Wyant, Eds., Academic Press, New York, pp. 377–379, 1980. 5. S. Sze, Physics of Semiconductor Devices, John Wiley & Sons, New York, pp. 420–421, 1981.
PHOTOMULTIPLIER TUBE POWER SUPPLY NOISE To keep the gain of the tube stable to 1 percent, control the power supply voltage to 0.03 percent at 1000 V.
Discussion Photomultiplier tubes (see Fig. 18.5) are fast and sensitive. Stability is sometimes an issue, as the dynode sensitivity is a function of the applied voltage. The absolute voltages are typically high (e.g., –500 to 2000 V). As Hobbs points out, this is usually “provided by a powerful high-voltage supply and a multi-tap voltage divider made of high-value resistors.”2 A 0.03 percent change in voltage at 1000 V is a 0.3-V change. This voltage change produces a corresponding change in gain, which appears as noise.
FIGURE 18.5 Typical PMT architecture. (From www.chem.vt.edu/chem-ed/optics.)
References 1. http://www.chem.vt.edu/chem-ed/optics/detector/pmt.html, 2003.
2. P. Hobbs, Building Electro-Optical Systems: Making it All Work, John Wiley & Sons, New York, pp. 98–101, 2000. 3. http://www.laidback.org/~daveg/academic/labreports/rep3/PMT.html, 2003. 4. E. Dereniak and D. Crowe, Optical Radiation Detectors, John Wiley & Sons, New York pp. 116–121, 1984. 5. C. Soodak, Application Manual for Photomultiplier Tubes, American Instrument Co., 1972.
P-WELL CCDS ARE HARDER THAN N-TYPE P-well CCDs have better radiation performance than do n-type devices, and CMOS can be quite hard.
Discussion According to Williams,1 conventional n-channel CCDs have phosphorus-doped buried channels and suffer from the generation of phosphorus-vacancy (P-V) electron traps that degrade charge transfer efficiency. The dominant hole trap expected after proton irradiation of a p-channel CCD is the divacancy. Divacancy formation is considered to be less favorable in a p-channel CCD as compared to P-V formation in an n-channel CCD. In addition, the energy level of the divacancy, 0.21 eV above the valence band, is not likely to yield efficient dark current generation sites as compared to P-V sites, located closer to the middle of the bandgap (0.42 to 0.46 eV below the conduction band edge). CCDs have been shown to perform without significant degradation when exposed to about 4 krad of ionizing radiation and approximately 50,000 energetic neutrons (typically 0.5 to 5 MeV).2 If properly designed, CMOS APS sensors can withstand quite high doses [e.g., 63 MeV proton radiation to an equivalent total dose of 1.3 Mrad (Si)].3 Moreover, Ref. 3 indicates that typically the front gate threshold shifts are very small for large dosages, and the “weak link” with respect to radiation hardness is the back gate. However, some common sensor system electronic components can fail before the CCD or CMOS focal planes, especially when tested at relatively high dose rates. “For example, the DSP56001 chip fails below 3 krad (Si) if tested with a dose rate of 100 rad/sec, whereas it operates successfully to 15 to 20 krad (Si) if tested with a dose rate of 100 rad/sec. In all cases, recovery (annealing) occurred, and no permanent damage was observed.”4
References 1. Private communications with George Williams, 2003. 2. K. Klaasen et al., “Operations and Calibration of the Solid-State Imaging System during the Galileo Extended Mission at Jupiter,” Optical Engineering, 42(2), pp. 494–509, February 2003. 3. Y. Li et al., “The Operation of 0.35 µm Partially Depleted SOI CMOS Technology in Extreme Environments,” Solid State Electronics, Vol. 47, pp. 1111–1115, 2003. 4. G. Eppeldauer, “Temperature Monitored/Controlled Silicon Photodiodes for Standardization,” Proc. SPIE, Vol. 1479, Surveillance Technologies, 1991. 5. S. Holland, et al., “Fully Depleted, Back-Illuminated Charge-Coupled Devices Fabricated on High-Resistivity Silicon,” IEEE Transactions On Electron Devices, 50(1), pp. 225–238, January 2003. 6. J. Bogaerts et al., “Total Dose and Displacement Damage Effects in a Radiation-Hardened CMOS APS,” IEEE Transactions On Electron Devices, 50(1), pp. 84–90, January 2003. 7. Y. Li et al., “Proton Radiation Effects in 0.35 µm Partially Depleted SOI MOSFETs Fabrication on UNIBOND,” IEEE Transactions Nuclear Science, 49(6), pp. 2930–2936, 2002.
RICHARDSON’S EQUATION FOR PHOTOCATHODE THERMIONIC CURRENT
Richardson’s equation gives the photocathode thermionic current as

it = Ad S T² e^(–Φ0/kT)

where Ad = photocathode area
T = temperature
Φ0 = photocathode work function
k = Boltzmann’s constant
S = a constant equal to

S = 4πmqk²/h³

where m = mass of the electron
q = charge of the electron
h = Planck’s constant
Discussion Thermionic emission is the spontaneous emission of an electron as a result of random thermal energy. The higher the temperature, the more emission occurs, because the random thermal energy more frequently exceeds the work function needed to eject an electron. This can be the dominant contributor to dark noise in a photomultiplier tube or microchannel plate. Reference 1 states that the T2 term indicates that “cooling the PMT will reduce dark current and therefore increase the linear dynamic range at the small-signal end. Cooling a PMT to about –40°C (233 K) will often reduce the thermionic contribution below the other sources of dark current.” Although developed for photomultiplier tubes, this is useful for any photocathode component. This equation predicts that the current will increase by about a factor of 2 for every 4 to 6°C increase in temperature (at room temperatures); this is a contributor to the “Increase in Intensifier Photocathode EBI with Temperature” rule (p. 362).
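A short sketch of the equation above; the photocathode area and work function in the example are assumptions chosen only to show the temperature scaling near room temperature:

```python
import math

K_BOLTZMANN = 1.381e-23   # J/K
Q_ELECTRON = 1.602e-19    # C
M_ELECTRON = 9.109e-31    # kg
H_PLANCK = 6.626e-34      # J*s

# Richardson's constant S = 4*pi*m*q*k^2 / h^3 (~1.2e6 A m^-2 K^-2)
S = 4 * math.pi * M_ELECTRON * Q_ELECTRON * K_BOLTZMANN**2 / H_PLANCK**3

def thermionic_current(area_m2, temp_k, work_function_ev):
    phi_j = work_function_ev * Q_ELECTRON
    return area_m2 * S * temp_k**2 * math.exp(-phi_j / (K_BOLTZMANN * temp_k))

# Ratio for a 5 deg C rise near room temperature with a 1.1-eV work function:
print(thermionic_current(1e-4, 300.0, 1.1) /
      thermionic_current(1e-4, 295.0, 1.1))   # ~2, matching the 4-6 deg C doubling
```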
References 1. E. Dereniak and D. Crowe, Optical Radiation Detectors, John Wiley & Sons, New York, pp. 118–119, 1984. 2. http://dept.physics.upenn.edu/balloon/phototube.html, 2003. 3. http://bilbo.bio.purdue.edu/~baker/courses/595R/equat_em.pdf, 2003.
SILICON QUANTUM EFFICIENCY In-band quantum efficiency is typically 30 percent for conventional front-illuminated devices and up to 80 percent for thinned back-illuminated devices (Fig. 18.6).
FIGURE 18.6 CCD quantum efficiency. (From Ref. 1.)
Discussion As a material, silicon tends to have a very high quantum efficiency for visible photons, although many can be lost as a result of reflection if a suitable antireflective coating isn’t applied. At room temperature, intrinsic silicon is a semiconductor with a bandgap of 1.12 eV. A 1.12-eV bandgap makes the production of electron-hole pairs possible for absorbed photons with wavelengths less than about 1.1 µm. Intrinsic photodetectors do not require doping for detection and produce carriers (electrons and/or holes) when a photon is absorbed by band-to-band transitions. Typical visible detector arrays do not have doping to alter the bandgap and operate as intrinsic devices. Silicon can be doped to reduce the bandgap (allowing detection far into the longwave infrared). This heavily doped silicon is an extrinsic material and produces carriers by transitions involving forbidden gap energy levels. Quantum efficiency is affected by many factors. First, the photons need to be absorbed into the material and not reflected or transmitted through the active region. Detectors usually have antireflective coatings applied to them to maximize the absorptance. Then, the photons need to survive to the depletion region to generate useful electron-hole pairs. When the photon energy generates a carrier, the carrier must migrate to the collection well and be captured and held until it is read out. All of these effects occur with some inefficiencies that reduce the total effective quantum efficiency. Typical visible silicon detectors (whether CCD, CID, or APS) tend to have quantum efficiencies of around 30 to 40 percent. Back-illumination architectures and thin material can boost this number to over 80 percent. Often, with CCDs and APS devices, the product of the fill factor and quantum efficiency is quoted; unfortunately, this is not always clear in data sheets or web sites. The addition of on-focal-plane electronics, such as antiblooming circuits, reduces the active area and thus the product of the fill factor and the quantum efficiency. However, antiblooming does not reduce quantum efficiency (which is a function of material, antireflection coatings, and
capture efficiency). Antiblooming can reduce fill factors to 70 percent of that of a non-antiblooming CCD. The CCD with antiblooming will then need to integrate twice as long for the same sensitivity. For the reader’s convenience, we provide the following references that include additional plots and information on this topic.
References 1. G. Williams, H. Marsh, and M. Hind,. “Back-Illuminated CCD Imagers for High Information Content Digital Photography,” Proc. SPIE, Vol. 3302, Digital Solid State Cameras: Design and Applications, 1998. 2. S. Holland et al., “Fully Depleted, Back-Illuminated Charge-Coupled Devices Fabricated on High-Resistivity Silicon,” IEEE Transactions on Electron Devices, 50(1), pp. 225–238, January 2003. 3. J. Tower et al., “Large Format Backside Illuminated CCD Imager for Space Surveillance,” IEEE Transactions On Electron Devices, 50(1), pp. 218–224, January 2003.
WILLIAMS’ LINES OF RESOLUTION PER MEGAHERTZ
From Ref. 1,

Lines of resolution per megahertz = 2Tl/A

where A = frame aspect ratio in decimal format (a 4:3 ratio = 1.33)
Tl = active CCD line time in microseconds
Discussion This is based on simple math, right out of the NTSC/ITU standards, and allows one to quickly estimate the resolution as a function of electronic speed. It can have significant system effects. Lines of resolution is a technical parameter that has been in use since the introduction of analog television. The measurement of lines of resolution attempts to give a comparative value to enable the evaluation of one television or video system against another in terms of overall resolution. Note that the reference here to system and overall indicates that this measurement refers to a complete video or television system. This includes everything employed to display the image, including the lens, camera, video tape (if used), and all the electronics that make the entire system work. This number (horizontal or vertical) indicates the overall resolution of a complete television or video system. There are two types of this measurement: (1) lines of horizontal resolution, also known as LoHR, and (2) lines of vertical resolution, or LoVR. However, it is much more common to see the term TVL (for TV lines). Note that this is different from the simple display lines (e.g., HTVL) referred to in Chap. 7, and the reader will find similar rules relating to analog displays in that chapter. There are some common misconception pitfalls. Lines of resolution is not the same as the number of pixels (either horizontal or vertical) found on a camera’s CCD, on a digital monitor, or on other displays such as a video projector. It is also not the same as the number of scanning lines used in an analog camera or television system such as PAL, NTSC, SECAM, and so on. Lines of resolution refers to the limit of visually resolvable lines per picture height (e.g., TVL/ph = TV lines per picture height). In other words, it is measured by counting the number of horizontal or vertical black and white lines that can be distinguished on an area
that is as wide as the picture is high. The idea is to make this measurement independent of the aspect ratio. If the system has a horizontal resolution of, for example, 750 lines, then the whole system (lens + camera + tape + electronics) can provide 375 perceptible black lines and 375 white perceptible spaces in between (375 + 375 = 750 lines). In either case, if you add any more lines per picture height, then you can’t reliably resolve the lines and spaces in a distinguishable manner, and the system has reached its limit of resolving detail. Lines of horizontal resolution applies to not only cameras but also to television displays, to signal formats such as those produced by a DVD player, and so forth. Therefore, when people talk about lines of resolution but don’t specify if they are horizontal or vertical lines, you need to be cautious. If a manufacturer doesn’t make the reference clear, then you can assume them to be horizontal numbers, because these are always larger numbers, so they sound more impressive. The reader should also see the related rule on number of display lines in Chap. 7, “Displays.”
Example For a format of 4:3 and a line time of 53.3 µsec, there are 79.96 lines per MHz. For a 16:9 format, there are 59.95 lines per MHz.
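A small sketch of the rule as reconstructed above, reproducing the two example cases:

```python
# Lines of resolution per megahertz = 2*Tl/A, with Tl the active line time
# in microseconds and A the aspect ratio in decimal form.

def lines_per_mhz(active_line_time_us: float, aspect_ratio: float) -> float:
    return 2.0 * active_line_time_us / aspect_ratio

print(lines_per_mhz(53.3, 4.0 / 3.0))    # ~80 lines/MHz
print(lines_per_mhz(53.3, 16.0 / 9.0))   # ~60 lines/MHz
```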
References 1. Private communications with George Williams, 2003.
Appendix A Tables of Useful Values and Conversions The following idiosyncratic collection of tables and values represents frequently needed approximate information and conversion factors for the EO practitioner. This eclectic collection consist of constants, conversions, and definitions categorized by the authors’ whims. Although sometimes included, the authors are not encouraging the use of deprecated and archaic units. Such units are included here only as a help with their translation to more widely accepted SI units. TABLE A.1 Angular Measurement Arcseconds
4.848 microradians 2.77 × 10–4 degrees
Degree
0.01745 radians 17.5 milliradians 60 arcminutes 3600 arcseconds 17,452 microradians
Arcminute
0.0167 degrees 2.909 × 10–4 radians
Radian
0.159 of the circumference 57.296 degrees 3438 arcminutes 2.06 × 105 seconds
Steradian
0.08 of total solid angle
Rads/sec
0.159 revolutions per second 9.55 revolutions per minute 57.3 degrees per second
RPM
6 degrees per second 0.0167 revolutions per second 0.105 radians per second
RPS
21,600 degrees per minute
TABLE A.2 Area Measurements Square centimeter
1.076 × 10–3 square foot 0.155 square inch 1 × 10–4 square meter
Square mil
645 square micrometers
Square inch
6.452 square centimeters
Square foot
939 square centimeters 0.092 square meters
TABLE A.3 Astronomy Astronomical unit (mean Earth–Sun distance)
1.496 × 108 kilometers 93 million miles
Light year
9.46 × 1015 meters 9.46 × 1012 kilometers 5.88 × 1012 miles
Parsec
3.26 light years 3.09 × 1013 kilometers
Effective solar temperature
5900 K
Solar constant
1350 to 1390 W/m2 (mean above atmosphere)
Irradiance of a zeroth magnitude star
3.1 × 10–13 W/cm2
Astronomical visual absorption
≈0.2 mag per airmass
TABLE A.4 Astronomical Bands (Given Flux Is Associated with a Star of Visual Magnitude Zero)
Band abbreviation | Center wavelength (µm) | Bandwidth (µm) | Flux (photons/m2 µm sec) | Flux (W/cm2/µm) | Jansky (W/cm2/Hz)
U | 0.365 | 0.068 | 7.90 × 10^10 | 4.30 × 10^–12 | 1910.6
B | 0.44 | 0.098 | 1.60 × 10^11 | 7.23 × 10^–12 | 4664.7
V | 0.55 | 0.089 | 9.6 × 10^10 | 3.47 × 10^–12 | 3498.5
R | 0.7 | 0.22 | 6.20 × 10^10 | 1.76 × 10^–12 | 2875.7
I | 0.88 | 0.24 | 4.90 × 10^10 | 1.11 × 10^–12 | 2857.1
J | 1.25 | 0.26 | 2.02 × 10^10 | 3.21 × 10^–13 | 1673.1
H | 1.65 | 0.29 | 9.56 × 10^9 | 1.15 × 10^–13 | 1045.2
K | 2.2 | 0.41 | 4.53 × 10^9 | 4.09 × 10^–14 | 660.3
L | 3.4 | 0.57 | 1.17 × 10^9 | 6.84 × 10^–15 | 263.6
M | 5 | 0.45 | 5.06 × 10^8 | 2.01 × 10^–15 | 167.6
N | 10.4 | 5.19 | 5.07 × 10^7 | 9.69 × 10^–17 | 34.9
Q | 20.1 | 7.8 | 7.26 × 10^6 | 7.18 × 10^–18 | 9.7
TABLE A.5 Atmospherics Absorption of CH4
Bands centered at 3.31, 6.5, and 7.6 µm
Absorption of CO2
1.35 to 1.5 µm 1.8 to 2.0 ≈4.2 to 4.6 µm ≈14 to 16 µm
Absorption of H2O
1.35 to 1.5 µm 1.8 to 2.0 ≈2.7 to 3.0 µm 5.6 to 7.1 µm (with the main absorption in ≈6.1 to 6.5 µm) And some minor narrow bands centered at 0.94, 1.1, 1.38, and 3.2 µm
Absorption of NO2
3.9 µm 4.5 µm 7.7 µm 17.1 µm And various bands in the UV
Absorption of ozone
≈0.15 to 0.3 (peak at ≈0.26 µm)
Atmospheric pressure
101,325 N/m2 101 kPa 760 mm of Hg at sea level
Density of air @ STP
1.29 × 10^–3 g/cc 1.29 kg/m3
Troposphere altitude (nominal) 0 to ≈11 km (depends on season and latitude) Stratosphere (nominal)
11 to 24 km (Some misguided folks define the stratosphere to include the mesosphere.)
Mesosphere (nominal)
24 to 80 km
Thermosphere (nominal)
80 to ≈7000 km
Pressure of std. atmosphere
1.01 × 105 nt/m2 14.7 psi
TABLE A.6 CCD Size*
Format | Approximate dimensions (mm)† | Approximate unit cell size for nominal 768 × 480 array (µm) | Approximate well size (electrons)
1.5-inch | 19 × 14 | 25 × 30 | >500,000
1-inch | 12.8 × 9.6 | 16.7 × 20 | 330,000
2/3-inch | 8.8 × 6.6 | 11.4 × 13.8 | 160,000
1/2-inch | 6.4 × 4.8 | 8.3 × 10 | 80,000
1/3-inch | 4.8 × 3.6 | 6.3 × 7.5 | 40,000
1/4-inch | 3.65 × 2.74 | 4.8 × 5.5 | <30,000
1/6-inch | 2.5 × 1.8 | 3.3 × 3.8 | <20,000
*Approximate and representative values; these vary from vendor to vendor.
†Note that no dimension (even diagonal) matches the format, and various vendors will modify these by 25 percent or so.
TABLE A.7 Colors
Color
Approximate wavelength (µm)
Violet
0.35 to 0.45
Blue
0.45 to 0.50
Cyan
0.48 to 0.50
Green
0.49 to 0.56
Yellow
0.55 to 0.59
Red
0.60 to 0.75
TABLE A.8 Cryogens Boiling point of air
85–88 K
Boiling point of argon
–185.7°C 87.3 K
Boiling point of Freon®-14 CF4
145 K
Boiling point of helium
4.2 K (unpumped) –272.2°C ≈0.7 K (pumped)
Boiling point of hydrogen
20.3 K –253°C
Boiling point of neon
27.1 K
Boiling point of nitrogen
77.2 K –196°C
Boiling point of oxygen
90.2 K –218.4°C
Freezing point of water
273.16 K
Room temperature
≈70°F ≈25°C ≈300 K
Sublimating dry ice
Any temperature above 195 K
Gas constant
8.32 joules/mole K 1.98 cal/mole K
Avogadro’s number
6.02 × 1023 molecules per mole
Heat of fusion of water (0°C)
79.7 cal/g
TABLE A.9 Digitization
No. bits | Ratio | dB
1 | 2 | 3
2 | 4 | 6
3 | 8 | 9
4 | 16 | 12
5 | 32 | 15
6 | 64 | 18
7 | 128 | 21
8 | 256 | 24
9 | 512 | 27
10 | 1024 | 30
11 | 2048 | 33
12 | 4096 | 36
13 | 8192 | 39
14 | 16,384 | 42
15 | 32,768 | 45
16 | 65,536 | 48
17 | 131,072 | 51
18 | 262,144 | 54
19 | 524,288 | 57
20 | 1,048,576 | 60
21 | 2,097,152 | 63
TABLE A.10 Density Measurements Density of water (4°C, 760 mm Hg)
1.0 g/cc 1000.0 kg/m3 62.43 lb/ft3 0.036 lb/in3
Grams per cubic centimeter
0.036 lb/in3 62.4 lb/ft3
Pound per cubic inch
27.7 g/cm3
TABLE A.11 Earth
Escape velocity
11.19 km/sec
Gravitational acceleration
9.81 m/sec2 32.2 ft/sec2
Mass
≈6 × 10^27 grams
Mean ocean depth
3800 m
Radius
6371 km (mean) 3960 miles 6378 km (equatorial) 6357 km (polar)
Surface area
5.1 × 1018 cm2
Land area
1.5 × 1018 cm2
Ocean area
3.6 × 1018 cm2
Volume
1.083 × 10^27 cm3
Velocity of Earth in orbit
≈30 km/sec
TABLE A.12 Electromagnetic Spectrum (µm)
Gamma rays
< 0.001
X-rays
0.001 to 0.02
Ultraviolet
≈0.02 to ≈0.4
Visible
≈0.4 to ≈0.75
Near infrared
≈0.75 to ≈1.2
Shortwave infrared
≈1.2 to 3
Midwave infrared
≈3 to 6
Longwave infrared
≈6 to 14
Far infrared
≈14 to 100
Submillimeter (or T-rays)
≈100 to 1000
Radio frequency
>1000
TABLE A.13 Emissivities of Common Materials (Approx. Values for the Infrared)
Aluminum (black anodized) | 0.6 to 0.85
Aluminum (oxidized) | 0.11 to 0.19
Aluminum (polished) | 0.02
Aluminized Mylar® | 0.03 to 0.05
Asphalt | 0.97
Brick | 0.93
Concrete (rough) | 0.94
Dolomite lime | 0.41
Gold | 0.01 to 0.14
Graphite | 0.98
Ice | 0.95 to 0.98
Nickel (electroplated) | 0.05
Nickel (oxidated) | 0.31 to 0.46
Oak | 0.9
Oil | 0.82
Paper | 0.91 to 0.95
Plaster | 0.91
Plywood | 0.96
Rust (iron) | 0.69
Sand | 0.8 to 0.93
Silver (polished) | 0.03
Skin | 0.95 to 0.98
Snow | 0.85
Soil (wet) | 0.95
Soil (dry) | 0.92
Soil (frozen) | 0.93
Stainless steel | 0.16 to 0.45
Titanium oxide white paint | 0.88 to 0.94
Water | 0.96

TABLE A.14 Energy
No. photons in a watt | 5.03 × 10^18 × λ (with λ in µm)
BTU | 252 calories; 1055 joules; 0.29 watt-hours
Calorie | 4.184 joules
Energy of one eV | 1.602 × 10^–19 J
TABLE A.14 Energy (Continued) Energy per photon
1.98 × 10–19/λ W sec (with λ in µm)
Erg
1 × 10–7 joules 2.78 × 10–11 watt-hours
Joule
9.48 × 10^–4 BTU 0.2388 calories 1 × 10^7 erg 1 × 10^7 dyne cm 1 watt-second 1 volt-coulomb 0.738 foot-pounds 2.78 × 10^–4 watt-hours 1 newton-meter 3.73 × 10^–7 HP hr
Megajoule
1 × 10^6 joules 2.4 × 10^5 cal a Cadillac traveling 55 mph
Kilogram
9 × 10^16 joules
TABLE A.15 Greek Alphabet Alpha Beta Gamma Delta Epsilon Zeta Eta Theta
α β γ δ ε ζ η θ
Α Β Γ ∆ Ε Ζ Η Θ
Iota Kappa Lambda Mu Nu Xi Omicron Pi
ι κ λ µ ν ξ ο π
Ι Κ Λ Μ Ν Ξ Ο Π
Rho Sigma Tau Upsilon Phi Chi Psi Omega
ρ σ τ υ φ χ ψ ω
Ρ Σ Τ Υ Φ Χ Ψ Ω
TABLE A.16 Illuminance and Luminance Levels (Typical Representative Values in the Visible Bandpass)
Source | Approximate W/m2 (strongly depends on bandpass) | Approximate Cd/m2 (nit) | Approximate lux (lumens/m2)
Surface of the solar disk | 6 × 10^7 | 1,500,000,000 | 5 × 10^10
Bright sunlight | 1000 | N/A | 100,000
Overcast sky | 1 to 100 | 3000 | 10,000
Fluorescent lamp surface | Strongly depends on bandpass | 10,000 | 3000
CRT display | Strongly depends on bandpass | 60 to 150 | 20 to 50
Twilight | 1 × 10^–2 | N/A | 10
Full Moon | 2 × 10^–4 | 2500 | 0.2
Starlight | 1 × 10^–6 | N/A | 0.001
Overcast night sky | 5 × 10^–8 | N/A | 0.00005
TABLE A.17 Laser Lines (Selected) Alexandrite
0.72 to 0.8 µm
Argon
0.51 µm
CO
5.0 to 7.0 µm
CO2
9.2 to 11 µm 10.6 µm
DF
3.8 to 4.0 µm
Doubled Nd:Yag
0.53 µm
Dy:CaF
2.35 µm
Er:Yag
1.64 µm
Erbium
1.54 µm (the popular eyesafe wavelength)
GaAs
0.9 µm
H2O laser
28 µm, submillimeters
HeNe
0.6328 µm 0.5944 to 0.6143 µm 1.152 µm 3.391 µm
HF
2.6 to 3.0 µm
Kr
0.35 µm
Nd: Yag
1.0645 µm
Ne Laser
0.3324 µm 0.5401 µm
Nitrogen laser
0.33 µm
Ruby
0.69 µm
Xe Laser
0.46 to 0.63 µm 2.03 µm 3.51 µm 5.56 µm 9.0 µm
TABLE A.18 Length Centimeter
Kilometer
Meter
0.3937 inch 104 micrometer 394 mils 3281 ft 0.54 nautical mile 0.621 statute mile 1094 yards Distance traveled by light in vacuum in 1/299,792,458 seconds 1 × 1010 angstroms 3.28 feet 39.37 inch 1 × 109 nanometers 1.094 yards
TABLE A.18 Length (Continued) Mil
0.001 inch 25.4 micrometers 0.0254 millimeters
Inch
2.54 centimeters
Mile (nautical)
1852 meters 1.15 statute miles
Mile (statute)
5280 feet 63,360 inches 160,934 centimeters 1.609 kilometers 1609 meters
TABLE A.19 Materials (Fundamental Mechanical Properties)
Material Aluminum 6061-T6
ν, α, thermal Κ, thermal Cp, specific ρ, specific E, Young’s heat mass modulus Poisson’s expansion conductivity (GPa) ratio (ppm/K) (watts/m-K) (joules/kg-K) (kg/m3) 22.5–22.7
156–167
Astrositall™
0.05
1.19
710
Beralcast 191
12.0
190
1422
Beralcast 363
12.7–15.3
108
1360
2160
205
0.2
Beryllium
11.3–11.4
160–216*
1820–1925
1850
280–303
0.07–0.08
3.2
1.13–1.2
800–1047
2230
63–68
0.2
172
670
2950
364
0.14
0.05, 0.2
10, 35
712
1780, 1800
90–520
0.3–0.4
2650
235
Borosilicate
CERAFORM™ 2.44–3.38 SiC CFRP CSiC
879–897
2700
68–70
0.33
2460
92
0.24
2160
205
0.2
2.6
135
660
CVD silicon carbide
2.4–4
330
550–730†
Fused silica
0.55
1.38
750
2190–2202
74.5
0.17
Invar
3050–3210 400–465 0.16–0.25†
1
10
500–515
8130
145
0.3
Pyrex®
3.3
1.13
753
2230
63
0.2
Pyrolitic graphite
2.44
172
670
2950
364
0.14
Silicon
2.6
137–156
700–710
2300
112,193
0.28,0.42
Steel 13/4
23
227
879
7800
193–215
0.34
0.01–0.03
1.31
766
2210
67
0.17
0.05
1.46–1.64
821
2530
91–92
0.24
ULE™ Zerodur® *Much
higher (about 365) at cryogenic temperatures. value depends on the preparation method. Values separated by commas are individual estimates. Those separated by dashes show a range of values derived from a number of sources. †The
TABLE A.20 Materials (Derived Mechanical Properties)
Mechanical bending resistance [E/ρg(1 – ν2)]
Specific stiffness (E/ρ) (a.k.a. inertia loading parameter)
Resonant frequency Sqrt(E/ρ)
ρ/E
ρ3/E
Sqrt(ρ3/E)
Self deflection for equal mass (E/ρ3)
Measures of deflection (derived from Pacquin)
Aluminum 6061-T6
2884
25
0.159
40
289
17
3
Astrositall™
4,049
37
0.193
27
162
13
6
Beralcast 191
10,088
95
0.308
11
49
7
20
Beralcast 363
10,088
95
0.308
11
49
7
20
Beryllium
16,820
164
0.405
6
21
5
48
3,241
30
0.175
33
163
13
6
12,843
123
0.351
8
71
8
14
CFRP
6,631
58
0.242
17
56
7
18
CSiC
9,049
89
0.298
11
79
9
13
CVD silicon carbide
15,801
145
0.298
7
71
8
14
Fused silica
3,555
34
0.381
30
143
12
7
Invar
2,000
18
0.184
56
3706
61
0
Pyrex
3,003
28
0.134
35
176
13
6
12,843
123
0.168
8
71
8
14
Silicon
7,057
57
0.351
18
93
10
11
Steel 13/4
2,958
26
0.239
39
2373
49
0
ULE™
3,186
30
0.160
33
161
13
6
Zerodur®
3,937
36
0.174
28
176
13
6
Borosilicate Ceraform SiC
Pyrolitic graphite
Source: R. Pacquin, “Advanced Materials: An Overview,” in Advanced Materials for Optics and Precision Structures, Critical Review 67, SPIE Press, Bellingham, WA, pp. 3–18, 1997.
Thermal stress (K/α − E)
0.07
13.63
23.80
258
2.28
0.19
5.15
15.83
77
1.45
1.06
0.42
2.40
7.06
34
52
5.74
2.98
0.19
5.32
18.95
62
0.48
2831
0.02
0.02
6.61
0.15
0.35
5
Ceraform SiC
87.02
14
25.66
38.30
0.03
35.66
70.49
193
CFRP
27.31
5
18.38
25.81
0.01
136.55 175.00
1666
CSiC
77.19
19
12.20
18.49
0.03
29.69
51.92
220
CVD silicon carbide
151.18
7
64.08
94.23
0.02
62.99 137.50
295
Fused silica
0.84
398
0.19
0.25
0.66
1.52
2.51
33
Invar
2.46
100
1.45
2.90
0.41
2.46
10.00
68
Pyrex
0.67
2920
0.02
0.03
4.90
0.20
0.34
5
Pyrolitic graphite
87.02
14
25.66
38.30
0.03
35.66
70.49
193
Silicon
91.93
17
7.40
10.57
0.03
35.09
56.49
431
Steel 13/4
33.11
101
1.97
2.25
0.69
1.44
9.87
49
ULE™
0.77
22
2.93
3.82
0.04
25.79
43.67
651
Zerodur®
0.79
30
3.02
3.68
0.06
15.79
32.80
356
68.95
134
0.68
42
2.19
3.08
Beralcast 191
61.86
63
3.25
Beralcast 363
36.76
141
Beryllium
60.65
Astrositall™
Borosilicate
0.56
0.33
Thermal insensitivity coefficient (δ/a)
109
0.50
Dynamic thermostability (EK/αCp)
7.42
Aluminum 6061-T6
Steady-state thermal distortion (α/Κ)
3.06
Thermal diffusivity δ = K/Cp – ρ
Thermal gradients (K/α)
Transient thermal distortion (α/δ)
Steady state thermostability (EK/α)/1000
TABLE A.21 Materials (Derived Thermal Parameters)
TABLE A.22 Miscellaneous False alarm probability (in white noise for a point source) at a given SNR with a Pd of 0.99
SNR of 5, Pfa ≈ 0.01 SNR of 6, Pfa ≈ 5 × 10–4 SNR of 7, Pfa ≈ 1 × 10–5 SNR of 9, Pfa ≈ 1 × 10–10
Gaussian probability that a value will not exceed
1 sigma: 68.3% 2 sigma: 95.4% 3 sigma: 99.7%
Index of refraction of water
1.344 @ 0°C 1.342 @ 30°C 1.337 @ 60 °C
Man-month (average)
163 hr
Man-week (average)
37.5 hr
Man-year
≈2000 hr
Peak of the human eye’s response
≈0.4 to 0.65 µm
Water heat of fusion
80 g-cal
Volume of one mole of gas at STP
22.4 liters
TABLE A.23 Numerical Constants e 2.718281828459045 π 3.141592653589793 TABLE A.24 Optics Amount of energy in circular diffraction pattern
84% in center disk an additional 7.1% in first bright ring an additional 2.8% in second bright ring an additional 1.5% in third bright ring
Optical density [= log10(transmission)]
0 = 1.0 opacity and 100% transmission 0.5 = 3.2 opacity and 32% transmission 1.0 = 10.0 opacity and 10% transmission 1.5 = 32 opacity and 3.2% transmission 2.0 = 100 opacity and 1% transmission 3.0 = 1000 opacity and 0.1% transmission 4.0 = 10,000 opacity and 0.01% transmission 5.0= 100,000 opacity and 0.001% transmission 6.0 =1,000,000 opacity and 0.0001% transmission
Refractive Index of Ge
≈4.0 (watch out for wavelength dependence)
Refractive Index of Glass
≈1.5 to 2.0
Refractive Index of Quartz
≈1.3 to 1.5
Refractive Index of Si
≈3.4
Refractive Index of ZnS
≈2.5
TABLE A.25 Typical Optics Manufacturing Tolerances*
Ar (average R)
Commercial quality
Precision optics
Typical manufacturing limits
MgF2 R < 1.5%
R < 0.5%
R < 0.1%
Aspheric profile (µm) Bevels (max. face width in mm @ 45°) Center thickness (mm)
±10
±1
±0.1
1.0 mm
0.5 mm
No bevel
±0.050
±0.010
±0.150
Diameter (mm)
+0.00/–0.1
Glass quality (Nd)
+0.000/–0.025 +0.000/–0.010
±0.001
±0.0005
Melt controlled
2
0.5
0.1
Irregularity (no. of fringes) Radius
±0.2%
±0.1%
±0.025%
SAG (mm)
±0.050
±0.025
±0.010
Scratch-dig
80–50
60–40
10–5
Wedge lens (edge thickness difference, mm)
0.050
0.010
0.002
±3
±0.5
±0.1
Wedge prism (total included angle, arcmin) *Source: courtesy
of Optimax Systems, Inc.
TABLE A.26 Photonic Candelas per square foot 3.38 × 10–3 lamberts Foot-candle
1 lumen per square foot 10.76 lumens per square meter 10.76 lux
Lambert
0.318 lamberts per square centimeter 295.7 candelas per square foot
Lumens per square foot
10.76 lux
Lux
1 lumen per square meter 1 × 10–4 photons
TABLE A.27 Physical Constants Atomic mass unit
1.657 × 10–24 g
Avogadro’s number
6.022 × 1023 molecules/mole
Boltzmann’s constant
1.3806 × 10–23 W sec/degree 1.3806 × 10–23 J/K
Charge of an electron
1.602 × 10–19 coulombs
Gravitational constant
6.67 × 10–11 Nm2/kg2
Permeability of free space 12.566 × 10–7 henrys/meter Permittivity of free space
8.854 × 10–12 farads/meter
TABLE A.27 Physical Constants (Continued) Planck’s constant
6.626 × 10–34 W sec2 or 6.626 × 10–34 J sec
Mass of a neutron
1.675 × 10–27 kg
Mass of a proton
1.673 × 10–27 kg
Mass of an electron
9.109 × 10–31 kg
Speed of light
2.99792458 × 108 m/sec 2.99793 × 1010 cm/sec 2.99793 × 1014 µm/sec
TABLE A.28 Pressure Dynes per square centimeter 1.02 × 10–3 gram (force)/cm2 1.45 × 10–5 pounds per square inch One atmosphere
1.01325 bars 1.013 × 106 dynes per square centimeter 760 Torr 1033 gram (force)/cm2 760 mm Hg
PSI
6895 pascals 0.068 atmospheres 51.71 Torr 51.71 mm Hg
Torr
133.32 pascals 0.00133 bar 1 mm Hg
TABLE A.29 Radiometric Blackbody constant
2897.9 µm K for peak in watts 3669 µm K for peak in photons
c1 (radiation constant)
3.7412 × 10–16 W cm2 4.993 × 10–24 J/m
c2 (radiation constant)
1.4388 cm degree
Stefan–Boltzmann constant
5.6705 × 10–12 W cm–2 deg–4 1.354 × 10–12 cal cm–2 deg–4 sec–1
TABLE A.30 Temperature 0°C
273 K 32°F
0K
–273.16°C –459.7°F
A difference of 1°C
1K 1.8°F
TABLE A.31 Time Hour
3600 seconds 0.04167 days 5.95 × 10–3 weeks
Day (mean = 24 hours)
1440 minutes 86400 seconds
Month (average)
Year
30.44 days 730.5 hours 2.63 × 106 seconds 4.348 weeks 365.256 days (sidereal) 8766 hours 525,960 minutes 3.16 million seconds 52.18 weeks
TABLE A.32 Velocity Mach 1 345 meters per second (defined as velocity of sound in air at STP) 34,500 centimeters per second 1132 feet per second 771 miles per hour Meter per second
2.24 miles per hour
Miles per hour
88 feet per minute 1.467 feet/second 1.609 kilometers per hour 0.869 knots 26.822 meters per minute 0.44 meters per second
Velocity of sound in air
usually ≈330 to 350 meters per second usually ≈1080 to 1150 feet per second
Velocity of sound in water
1470 meters per second 4823 feet per second
TABLE A.33 Video Formats (Approximate and for Broadcast)
Format | No. pixels (horz. × vert.) | Frame rate (Hz) | Interlace
QCIF | 176 × 144 | 29.97 | 0
CIF | 352 × 288 | 29.97 | 0
SIF-625 | 352 × 288 | 25 | 0
NTSC | 525 lines/frame, 480 active lines of video data (e.g., 640 × 480 gives the 4:3 ratio, so the implied pixel count is 640 × 480) | 59.94 | 2:1
PAL | Same as NTSC | 50 | 2:1
CCIR | 750 × 576 (a monochrome version of PAL) | 50 | 2:1
HDTV1 | 1280 × 720 | 59.94 | 0
HDTV2 | 1920 × 1080 | ≈50 or 60 | 2:1
TABLE A.34 Video Timing Format
Line time (microseconds)
EIA 170
63.492
NTSC
63.555
PAL
64.0
SECAM
64.0
TABLE A.35 Volume Cubic centimeter 1 × 10–6 cubic meters 1000 cubic millimeters or 0.061 cubic inches 2.64 × 10–4 gallons 3.53 × 10–5 cubic feet Cubic meter
61,024 cubic inches 1.307 cubic yards 1 × 106 cubic centimeters
Liter
1000 cubic centimeters 0.0353 cubic feet 61.02 cubic inches 0.001 cubic meters 0.264 gallons (U.S.A.)
TABLE A.36 Weight Gram
2.2 × 10–3 pounds 0.001 kilograms
Ounce 28.35 grams Pound 454 grams
Glossary
This glossary has been developed specifically to aid the reader of this volume in interpreting unfamiliar terms. The definitions are intended to have a practical, rather than formal, presentation. That is, they may be specifically oriented toward the use of the word in the electro optics sciences, rather than a more general definition that would be found in a generic encyclopedia or dictionary. µrad µW aberrated
absorptance
acousto-optic tunable filter
adaptive optics
ADC afocal
airmass
Microradians, millionths of a radian. Microwatts, millionths of a watt. Defines a range of degradations in image quality. Aberrations include, but are not limited to, field curvature, distortion, coma, chromatic aberration, and astigmatism. The property of a material that acts to reduce the amount of radiation traversing through a section of the material. It is measured as the fraction of radiation absorbed. Generally, the bulk absorptance of radiation by a material follows Beer’s law. A special type of filter that works by exploiting the properties of its material; namely, that the index of refraction depends on strain within the material. Optical subsystems with the ability to change the wavefront in real time. They are generally used to compensate for atmospheric turbulence. Analog-to-digital converter (or conversion). Describes an optical system that does not form a focus, such as often employed in laser beam expanders. Such optics accept or project a light field and produce an unfocused beam. A term frequently used by astronomers, referring to the path length in the atmosphere through which the telescope looks. When a telescope is pointed straight up, it 389
Airy disk
albedo algal Allard’s law angstrom (Å)
anomalous trichromats antireflection coating APC APD apodization APOMA apostilb
apparent elastic limit APS arcsecond
ASIC
aspheric athermal ATP
is looking through an airmass of 1. The airmass increases as the telescope is pointed toward the horizon, as a function of the cosine of the zenith angle. The distribution of radiation at the ideal focus of a circular aperture in an optical system when dominated by diffraction effects (e.g., no aberrations). The central spot formed by about 84 percent of the total energy. The total visible-band reflection and scatter from a surface, measured over all possible scattering angles. An adjective describing the properties of unicellular plants that can have a significant impact on the optics of the ocean. An equation that relates the night time visibility of a target to the properties of the intervening atmosphere. A unit of length, equal to 10–10 m, used to define wavelength (although it has been generally replaced by micrometers or nanometers). A form of color blindness in which the subject detects three colors, but one of them is misaligned in spectral sensitivity as compared with those associated with normal vision. A coating intended to reduce the amount of radiation reflected from a surface of an optical element. Armored personal carrier. Avalanche photodiode. The modification of the transmission properties of an aperture to suppress unwanted optical aberrations or effects. American Precision Optics Manufacturers Association (www. apoma.org). A unit of measure of light in the system that takes account of the response of the human eye. An apostilb is equal to 1 lumen/m2 for a perfectly diffuse (Lambertian) surface. The strain level at which the stress-strain relationship of a material deviates from linear. This is usually used to describe an arbitrary point on the stress-strain curve for materials that do not exhibit the linear relationship anywhere. Active pixel sensor. A measure of angle, usually employed by the astronomical community. An arcsecond is about 4.6 microradians and is computed as 1/60 of an arcminute (which is 1/3600 of a circle). Thus, an arcsecond is 1/60 of 1/3600 of 2π radians. Application-specific integrated circuit. These are custom made chips “hardwired” to do specific functions. They provide the lightest weight and lowest power processing but usually have little or no reprogrammability. Not having a spherical shape. Commonly used as a term for optics that are not conic sections. Insensitive to changes in temperature, as in optical designs that stay in focus regardless of the operating temperature. Acquisition, tracking, and pointing. A field of study focused on sampling theory, control theory, and servo systems. This is especially critical in search-and-rescue and weapon systems. Con-
avalanche photodiode
azimuthal
background
backscatter
baffle bandgap
bandpass
bathymetry Benham’s disk Bessel, Friedrich Wilhelm binary optics
blackbody
391
versely, this usually is not a concern for navigation sensors or general cameras. ATP systems are employed to detect targets, control the line of sight to follow the motions of the target, and to aim a device (e.g., another sensor, laser, or weapon) at the target. The latter must take into account the speed of the device aimed at the target. A semiconductor detector device that exploits a phenomenon of photoemission and self-amplification. When used as part of the coordinate system of a pointing or tracking system, for ground based systems like astronomical telescopes, azimuth usually refers to rotation of the plane that is parallel to the Earth. For many systems (e.g., aircraft and spacecraft), the definition is somewhat arbitrary but, once chosen, must be used by all members of the design team. Azimuth is the complement of elevation and usually is orthogonal to elevation in pointing coordinates. The part of a sensed scene that is not the item of interest (e.g., the ocean is the background for a system trying to detect swimming shipwreck survivors, and paper is the background for the words that you are reading). Most of what you see is background. That part of light impinging on a surface that is not absorbed, transmitted, or specularly reflected. May be used to describe the part of the light scattered from a surface exposed to laser light, with the proviso that the backscattered light is that which is scattered toward the illuminating source. A shield that prevents unwanted light from impinging on the focal plane. Cold shields and sunshades are forms of baffles. The energy bandgap in a semiconductor material is the amount of energy needed to sufficiently interact with the lattice to generate free carriers, which are the source of the detected electrical signal. For photovoltaic and photoconductive devices, the bandgap defines the longest wavelength to which the detector is sensitive. The range of wavelengths over which a system functions. The term is also used in an electronic sense to define the range of electrical frequencies that are accommodated by an electrical system. The measurement of water depth by some means. A spinning disk used to illustrate some features of color vision. A prominent mathematician whose work has been recognized by assigning his name to a function that is frequently used in defining the electromagnetic properties of circular apertures, such as in diffraction theory. Optics that utilize diffraction to alter a light ray and are made from a photolithography (mask-and-etch) process. The photolithographic process results in the optical curve being approximated in a series of steps. This staircase approximation to a curve has the number of steps being a power of 2 (e.g., 2 steps, 4 steps, 16 steps, and so on), hence the term binary. A blackbody has the properties of complete absorptance; it emits exactly as described by Planck’s law with an emissivity of 1.
392
Glossary
BLIP
blowdown bolometer BQ
BRDF BSDF Cassegrain
CCD
CDR
CdS
CDS centroid
charge skimming Chretien
CID
(1) Background limited in performance. A "BLIP" system is one in which the noises are dominated by the photon noise of the background. In such cases, reducing the noise of the detector will not provide increased system sensitivity. (2) Background-limited infrared photodetector. A cooling system, usually employing a highly pressurized gas that is expanded through an orifice to generate cooling. See also Joule–Thomson. Any of a class of detectors that rely on temperature change in the detector to indicate the presence of radiation—usually long wavelengths such as infrared, T-rays, and MMW. Beam quality. A measure of the ability of a laser beam expander to form a beam of ideal quality (generally assumed to be that of a diffraction-limited beam). Exact definitions depend on the application. Bidirectional reflectance distribution function, generally used for opaque surfaces. Bidirectional scatter distribution function, generally used for transparent media. Any of a class of two-mirror telescopes in which the primary mirror is concave and parabolic, and the secondary mirror is convex and hyperbolic. These systems exhibit no spherical aberration. Charge-coupled device. A particular implementation of detector and/or readout technology composed of an array of capacitors. Charge is accumulated on a capacitor and shift-registered along rows and columns to provide a readout. These are frequently used in commercial video cameras and are the most common visible arrays. However, the dominance of the CCD is being challenged by active-pixel CMOS imagers. Critical design review, a meeting in which most of the detail design is reviewed. Generally, about 80 percent of the drawings are complete at this review, and all critical components are selected. Cadmium sulfide. A detector material still in use but whose application in advanced EO systems has been nearly eliminated by advancements in other detector materials. Correlated double sampling. The resultant "center of mass" or other characterization of a distribution of light falling on an array of detectors. In the context of this book, it usually refers to light on a focal plane, often used to compute the location of a target to higher resolution than can be achieved by merely using a single detector. A technique to remove a portion of the noise-generated photoelectrons from a multiplexer. It allows longer integration times. Used generally with the name Ritchey, as in Ritchey–Chretien. Often used to describe an optical implementation of a Cassegrain two-mirror telescope that has the property of having no spherical aberration or coma. This is achieved by using aspheric surfaces for both the primary and secondary mirrors. Charge injection device.
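The centroid definition above is essentially an algorithm. A minimal Python sketch (the array values are invented for illustration) computes the intensity-weighted center of mass of a blur spot on a small patch of detectors, giving a position estimate to subpixel precision.

```python
def centroid(image):
    """Intensity-weighted center of mass of a 2-D list of detector outputs.
    Returns (row, column) in pixel units, generally to subpixel precision."""
    total = 0.0
    row_sum = 0.0
    col_sum = 0.0
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            total += value
            row_sum += r * value
            col_sum += c * value
    return row_sum / total, col_sum / total

# A small blur spot straddling pixel boundaries (made-up numbers)
spot = [[0, 1,  2, 0],
        [1, 8, 10, 1],
        [0, 3,  4, 0]]
print(centroid(spot))   # about (1.13, 1.57): between detector centers
```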
CIE
clutter
CMOS
Cn2
coefficient of performance coefficient of thermal expansion cold body
cold shield
common module
confocal correlated double sampling CRT cryocooler
cryostat CTE cube corner
Commission Internationale de l’Eclairage, or the International Commission on Illumination, a standards-generating group concerned with color and illumination. The non-Gaussian spatial (and perhaps temporal) intensity variations in the background. Clutter is hard to filter out and usually looks like what you are trying to detect. It is an old radar term adopted by the EO community. Complementary metal oxide semiconductor, a common architecture for integrated circuits. Sometimes also used (incorrectly) to define a CMOS read out structure that may or may not be an “active pixel” device. The index of refraction structure constant, pronounced “see n squared.” This parameter is widely used to describe the impact of atmospheric turbulence on the propagation of light in the atmosphere. A unitless figure of merit used by refrigeration engineers. Generally, it is the delivered cooling capacity in watts divided by the electrical input in watts. The relative amount by which a material will expand or contract with temperature changes. The coefficient of expansion is a function of temperature. A target that is not thrusting or expelling any hot gas. When launching, the Space Shuttle is not a cold body; when orbiting, it is. A cold baffle (usually near an internal reimaging plane and the FPA). Its function is to limit the acceptance angle of the FPA to that of the final F-cone, thereby reducing the contribution of unwanted warm radiation and reflections from the surrounding housing to the noise. A series of electro-optical standardized subsystems (including detectors and dewars) sponsored by the U.S. Army. These form the basis of the vast majority of IR systems manufactured for the U.S. prior to 1990. Over 100,000 systems were made based on this concept. The property of two or more optical components having the same focal point. Commonly used in describing certain types of laser resonators. A technique to reduce certain types of noises, accomplished by sampling the output before and after each readout and subtracting one from the other. This is frequently employed “on chip” in visible sensors. Cathode ray tube, an old type of display that you still might find in your television. A refrigerator for providing cooling to cryogenic temperatures (approximately <200 K). These include a host of mechanical cycles, gas expansion systems, Peltier effect cooling systems, magnetic cooling systems, sorption systems, and others. The cold finger expansion assembly of a any cryocooler. Charge transfer efficiency or coefficient of thermal expansion. A retroreflector, frequently misnamed a “corner cube.”
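Because the coefficient of thermal expansion recurs throughout this glossary, a one-line numerical illustration may be useful. The sketch below is added here and is not from the original text; the aluminum CTE of roughly 23 × 10⁻⁶/K is a typical handbook value assumed only for the example.

```python
def thermal_growth(length_m, cte_per_k, delta_t_k):
    """Change in length of a part with coefficient of thermal expansion cte_per_k
    when its temperature changes by delta_t_k (linear approximation)."""
    return length_m * cte_per_k * delta_t_k

# A 100 mm aluminum spacer (assumed CTE ~ 23e-6 / K) warming by 10 K grows by ~23 um.
print(thermal_growth(0.100, 23e-6, 10.0))   # 2.3e-05 m
```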
D*
This is the abbreviation for specific detectivity, pronounced "deestar." This is a figure of merit for detectors, defined as D* = √(Ad ∆f)/NEP
Dall
DAR DAS density deuteranopia dewar
die diffuse attenuation digitizer dispersion distributed aperture
Dolbear doping
downwelling
where Ad is the area of the detector, ∆f is the electrical noise bandwidth, and NEP is the noise equivalent power. D* normalizes most detector noises and can be a powerful figure of merit. Unfortunately, D* depends on the measurement procedures, and important noise mechanisms (such as 1/f noise and changes in the optical bandwidth) frequently are omitted. Often expressed in units of jones (cm·√Hz/watt), after a pioneer in the development of EO technology. A jones is cm·√Hz/watt. Usually used with Kirkham, such as in describing the Dall–Kirkham implementation of the two-mirror Cassegrain optical system that uses an aspheric primary and a spherical secondary mirror. Display aspect ratio. Detector angular subtense. Mass per given unit volume, often given in units of g/cm³ or kg/m³. A form of color blindness in which green sensitivity is suppressed. A double-walled vacuum vessel providing thermal isolation. In the context of this book, it is the evacuated mechanical container that contains the cold train (usually cold finger, detector, cold shield, and cold filter). The term is also used to indicate the storage device (dewar) for liquid cryogens (such storage vessels are sometimes referred to as cryostats). An integrated circuit as discriminated on the wafer. A measure of the propagation of light in an absorptive and scattering medium such as water; it takes into account the presence of multiple scattering. The chipset or board that performs the complete analog-to-digital conversion. The change in the index of refraction as a function of wavelength. A system that has more than one aperture feeding light or information to a common processor. An example would be five television cameras scattered around a security installation, each feeding a centralized processor that automatically identifies targets. The name most commonly associated with estimating temperature by counting how frequently crickets chirp. The process of deliberately introducing impurities into the solid-state structure of a material to alter its electrical characteristics. Generally, these are small quantities, measured in parts per million or billion. In the context of this book, a directional definition commonly used in describing the flow of electromagnetic radiation in the
DRI dynode dyschromatopic EFL EMI emissivity
ensquared EO erf ERIM etalon ETD etendue
euphotic
exitance f/# FAA
Fabry, Marie Paul Auguste Charles false-alarm rate
Faraday rotation
ocean and atmosphere, with “down” referring to a direction toward the center of the Earth. Detection, recognition, identification. A positively charged secondary emission electrode used in photomultiplier tubes to increase the number of electrons. A property of color blindness in which some colors, but not others, are seen. Effective focal length; the resultant focal length of a system of lenses or mirrors that are working in conjunction. Electromagnetic interference. The unitless ratio of a surface’s exitance to that of a perfect blackbody. A true blackbody has an emissivity of 1, and gray bodies have emissivities of less than 1. A shiny object (e.g., bare metal) has low emissivity. Usually used to describe the amount of energy focused onto a square area (such as a detector). Electro-optics. The field of study that integrates electronics and optics to create systems. The error function, defined in any book on mathematical physics or engineering physics. The Environmental Research Institute of Michigan, now largely replaced by Veridian. A type of interferometer in which high spectral resolution can be obtained. The Fabry–Perot interferometer is such a device. Edge thickness difference. The product of an optical system’s solid angle field of view and its effective collective area. The product of these two parameters is conserved and is approximately equal to wavelength squared. See the detailed rule on this subject. Usually used in conjunction with the word “zone.” Refers to the part of the ocean where light plays a role in its biology. This is commonly taken to be the range of depths at which the intensity of sunlight is at least 1 percent of the value just under the surface. Flux density emitted. This term has now been replaced by areance. Pronounced “f-number.” It is commonly defined, using a small angle approximation, to be the ratio of an optical system’s effective focal length to its effective aperture. Federal Aviation Administration, a branch of the U.S. government that is responsible for aviation safety, control, and administration. An optical scientist of about 100 years ago, whose name is most often seen with that of Perot, as in the Fabry–Perot interferometer. The frequency at which a system generates false alarms. This is related to the probability of detection and the amount of noise and clutter. Change in the plane of polarization of light radiation in a material as the result of the presence of a magnetic field.
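Two of the definitions above, f/# and etendue, reduce to simple arithmetic. The Python sketch below (illustrative values only, added here as a sketch under small-angle assumptions) computes the f-number as effective focal length over effective aperture and the etendue as collecting area times the solid-angle field of view.

```python
import math

def f_number(efl_m, aperture_diam_m):
    """f/# using the small-angle definition given above: EFL / effective aperture."""
    return efl_m / aperture_diam_m

def etendue(aperture_diam_m, full_fov_rad):
    """Collecting area times solid-angle field of view (small-angle, circular FOV)."""
    area = math.pi * (aperture_diam_m / 2.0) ** 2
    solid_angle = math.pi * (full_fov_rad / 2.0) ** 2
    return area * solid_angle

# Example: 500 mm focal length, 100 mm aperture, 1 degree full field of view
print(f_number(0.500, 0.100))                    # 5.0
print(etendue(0.100, math.radians(1.0)))         # ~1.9e-06 m^2*sr
```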
field of view
figure Fizeau, Armand Hippolyte Louis FLIR
focal length focus FOR
Foucault, (Jean Bernard) Léon
Fourier, (Jean Baptiste) Joseph, Baron FOV fovea FPA FPN frame differencing
Fried, David
Fried parameter
FWHM
The angular extent for which a system can detect light. For imaging systems, it is the angular field that light can be imaged. This can be defined in half angle (the angle from the center to the edge) and full angle (the angle from edge to edge, or twice the half angle). The reader is also cautioned that both half and full angle fields of view can be defined as the horizontal, vertical, diagonal, or circular directions. The general overall shape of an optic (e.g., a parabola, sphere, and so on). A prominent, early optical scientist whose name is associated with certain types of interferometers. Forward-looking infrared. A class of infrared sensors often used in defense and surveillance applications. A successful infrared imaging company also takes this as its name and can be found at www.flir.com. The distance from the principal plane to the focal plane. The act of forming the best possible image. Field of regard. The field over which a sensor can be operated. This differs from field of view in that the latter is the field that can be observed without moving the instantaneous line of sight. The FOR commonly describes the field that can be covered using gimbals or steering mirrors in the system. An important scientist who contributed to the study of both mechanics and optics, in the latter case especially on the problem of determining the speed of light. The knife-edge test is named after him. A prominent scientist and mathematician who gives his name to any number of important fields of current study. In electro-optics, his name appears associated with the field of Fourier optics, the Fourier transform, and so forth. Field of view. A confined area within the retina in which cone density is high and no tissue lies between the cones and the lens. Focal plane array (or assembly). An integrated electronic package that includes detectors and multiplexing readout electronics. Fixed pattern noise. An image processing technique that compares one frame of a given scene to one captured some time later (usually by subtraction). This is a very effective way of reducing static clutter and detecting moving targets or any other changes to the scene, such as the emergence of new targets. A prominent participant in the development of the modern theory of radiation propagation in turbulent media. His name is given to the Fried parameter, a measure of the optical properties of the atmosphere. A measure of the distorting effect of the atmosphere as the result of light propagating over a path in which turbulence is present. Named for David Fried, a prominent optical scientist whose work defined the field described above. Full width, half maximum. A common measure of the width of a distribution of some function. For example, in laser physics, this
GaAs gamma
GHz graybody Greenwood frequency GSD Hartmann
HDTV HeNe
Herzberger formula HgCdTe HUD I2 IEEE IFOV
illuminance in-band index of refraction infrared InGaAs InSb insolation integral Stirling
term is often used to define the lateral dimension of a beam by determining the diameter that best meets the criterion of being at the half-power point of the beam. Gallium arsenide. A semiconductor material used in electronics, optics, and detectors. In relation to displays, it is a numerical value representing the display brightness as a function of signal. It is generally nonlinear. Gigahertz = 10⁹ hertz. A surface or object that has a radiant areance with the same distribution as a blackbody, but with lower intensity. A measure of the rate at which the turbulence-induced properties of the atmosphere are changing. Ground sample distance (usually the geometric mean ground sample distance). An optical researcher from the turn of the twentieth century who contributed the concept of a multiple-aperture lens system that is used in optical testing and in wavefront sensing. See also Shack–Hartmann sensor. High-definition television. Helium neon, as in HeNe laser. HeNe lasers are common, low cost, and commercially available. They typically have a wavelength of 0.6328 µm. A method for estimating the index of refraction of transparent materials. Mercury-cadmium-telluride, an alloy that provides photoconductive and photovoltaic conversion of light from about 1 to 22 µm. Head-up display. Image intensifier. Institute of Electrical and Electronics Engineers (www.ieee.org). Instantaneous field of view. The field of view of a single pixel in a multi-detector array. Flux that is incident on a surface; same as radiant areance. Having the property of being detectable by a particular electro-optic system. That is, the radiation is said to be within the wavelength limits of the system. A property of all transparent or semitransparent materials, defined by the speed of light in the material and, in turn, defining the effect of the material on the deviation of light entering from a medium of different index. The portion of the electromagnetic radiation spectrum that extends in wavelength from beyond the visible (≈0.8 µm) to the submillimeter or T-ray regime (≈100 µm). Indium gallium arsenide; a shortwave infrared detector material. Indium antimonide; a common mid-infrared detector material. Pronounced "ins bee." The amount of radiation imposed on a surface by the Sun. A Stirling cooler of an architecture that has the expander directly connected to the compressor.
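The gamma definition above notes that display brightness is generally a nonlinear function of signal. A common single-parameter power-law model is sketched below; the exponent of 2.2 is an assumed, typical CRT-like value and is not taken from this book.

```python
def display_brightness(signal, gamma=2.2):
    """Normalized brightness for a normalized input signal (0..1) under a simple
    power-law gamma model; gamma = 2.2 is an assumed, typical value."""
    return signal ** gamma

for s in (0.25, 0.5, 0.75, 1.0):
    print(s, round(display_brightness(s), 3))
# A mid-scale signal (0.5) produces only about 22 percent of full brightness for gamma = 2.2.
```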
interferometry IRAS Infrared Information Analysis Center IRST ISO
isoplanatic angle Jansky Johnson criteria
JT K Kapton® kCT noise Keck, William M. kelvin
kilopascals
Kirkham
Kolmogorov, Andrey Nikolayevich LADAR
The field of study for the development and application of systems that work by combining separate beams of light. The Infrared Astronomical Satellite. IRIA; a U.S. government-sponsored professional society that supports symposia and information on all subjects pertaining to this book. The address for their home page on the World Wide Web is http://www.iriacenter.org. Infrared search and track. (1) The Infrared Space Observatory, a European infrared astronomical satellite. (2) The International Organization for Standardization, the authors of the ISO 9001 and 9002 standards. The range of angles over which the wavefront of a system is essentially constant. A measure of spectral flux density widely used in radio astronomy; one jansky is 10⁻²⁶ W/m²/Hz. A series of rules for the ability of humans to detect, recognize, and identify objects as a function of resolution. Generally, these are ground-military targets, expressed in cycles (2 pixels) on a target for correct identification by 50 percent of humans viewing a display. The interested reader should see the rules in Chapter 1. Abbreviation for Joule–Thomson. The abbreviation for kelvins, the accepted unit for temperature, with the scale starting at absolute zero. A plastic material used in specialized tape, some circuit boards, and some multilayer insulation materials. Reset noise in detector readouts proportional to Boltzmann's constant times the capacitance and temperature. The philanthropist whose foundation funded the 10-m telescopes on Mauna Kea, Hawaii. The internationally accepted metric unit of temperature and the most common measure of temperature in the electro-optics industry. Zero kelvin is absolute zero, and each kelvin increment is the same as a degree centigrade. Note that the internationally accepted use of the term eliminates the word "degrees" and the degree symbol, such as, "The instrument operates at 10 kelvins (or 10 K)," but never "10 degrees kelvin" or "10°K." Measurement unit for pressure. In the most recent definition of pressure, a standard atmosphere is defined as 101,325 Pa or 101.325 kPa. Usually used with Dall, as in the Dall–Kirkham implementation of the Cassegrain two-mirror telescope. A prominent Soviet mathematician whose work on the statistical properties of the atmosphere forms the foundation of most modern theories of the impact of turbulence on light propagation. Laser radar. A system for determining the range and sometimes shape of distant targets by illuminating them with laser light and detecting the reflected light with a co-located detection system. Used interchangeably with LIDAR by most researchers, the latter meaning light detection and ranging.
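The LADAR definition above amounts to a time-of-flight measurement. A minimal sketch of the range calculation for a co-located transmitter and receiver (the numbers are illustrative, not from the original text):

```python
C = 2.998e8  # speed of light in vacuum, m/s

def range_from_time_of_flight(round_trip_s):
    """Target range for a co-located transmitter and receiver: half the round-trip path."""
    return C * round_trip_s / 2.0

# A pulse echo received 6.67 microseconds after transmission puts the target near 1 km.
print(range_from_time_of_flight(6.67e-6))   # ~1000 m
```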
Lambert laser
least significant bit
LED lenslet
lightweighting
line of sight LiTaO Littrow configuration LOS LOWTRAN
LSB lux LWIR
magnetostrictive
magnitude manufacturability
A unit of light fluence in the photopic/scotopic system of measurement. A lambert is defined as 3183.1 candelas/m2. Acronym for light amplification by stimulated emission radiation. Any man-made device that produces a coherent beam of light by taking advantage of stimulated emission of radiation. Amount an analog signal can vary without causing a change in the digitized representation. The smallest amount of information that is represented when an analog system is digitized. This can be the dominant noise source for quiet, large dynamic range sensors. Light-emitting diode, an important technology for producing the light used in communications and displays. A small lens, usually in an array near the FPA. Also used to define the properties of the lens systems employed in Shack–Hartmann wavefront sensors. The act of removing unnecessary mass from optical systems, usually with the intention of making the instrument compatible with an aircraft or spacecraft platform or to make the system more adaptable to a gimbal mount. The projection of the optic axis into object space. This represents the center of the field of view. A compound of lithium, tantalum, and oxygen, useful for detection of infrared radiation employing the pyroelectric effect. A special grating configuration in which the light is diffracted back toward the source. Line of sight, see above. Abbreviation for “low-resolution transmission.” It is an modeling tool used to determine the transmission and scattering properties of the atmosphere for a variety of conditions, including season, weather conditions, geolocation, and so forth. Generally limited to resolution of 20 cm–1, with extrapolation to 5 cm–1. MODTRAN and HITRAN are similar programs with improved resolution. PCTRAN is a version for the personal computer. Least significant bit, see above. A measure of radiation in the photopic/scotopic radiation measurement system (abbreviated lx). Long-wave infrared. The portion of the electromagnetic spectrum that is longer in wavelength than mid-infrared and shorter in wavelength than very long infrared. Typically defined by the available detector technology to be from ≈6 µm (where Pt:Si and InSb do not work) to ≈30 µm (where typically most doped silicon fails to work). Often, commercial and tactical military engineers will loosely use LWIR to mean the 8- to 14-µm atmospheric window. A phenomenon wherein a material changes dimensions as a function of exposure to magnetic fields (similar to electrostrictive and piezoelectric in application). A measurement of stellar brightness. Abbreviated mv for visual magnitude. The measure of the ease with which something can be manufactured. It is usually included in the description of the risk associ-
MCP meridional microlens micron microradian Mie, Gustav
millibars milliradian millisecond milliwatt MODTRAN
MOS MRC MRT
MSS MTF MTTF multimode
multiplexer
mux mW MWIR Mylar® NA nanoampere near-net-shape NEDT NEI
ated with a particular EO system or technology, because it implies a cost in either time or money. Micro-channel plate. A plane through an optical system that includes the optic axis. A tiny lens, sometimes mounted directly on a focal plane. A millionth of a meter; micrometer, abbreviated as µm, is the preferred term. A millionth of a radian; often also appears as µradian or µrad. A prominent scientist in the field of atmospheric optics; gives his name to the theory of light scattering from particles the same size or larger than the wavelength. A measure of pressure, now replaced by 100 pascals. A thousandth of a radian; often appears as mrad. A thousandth of a second, abbreviated as ms or msec. A thousandth of a watt; often appears as mwatt or mW. An optical modeling tool that is used to estimate the transmission and scattering properties of the atmosphere. Used in cases where the spectral resolution of LOWTRAN is insufficient. Generally limited to resolution of 1 cm–1. Metal oxide semiconductor. Minimum resolvable contrast. Minimum resolvable temperature. A common figure of merit describing the performance of a thermal imager. Usually, this implies that the effect of the display and human eye are considered. Military Sensing Symposia. See http:www.iriacenter.org for details. Modulation transfer function, a way to describe image quality. Mean time to failure. A statistical measure indicating the reliability of a system, based on when half of the units have failed. Generally, this refers to fiber optics or lasers and indicates that they are capable of transmitting (in the case of fibers) or generating (for lasers) radiation of more than one electromagnetic mode. An electronic system that is able to provide inputs from several sources to one operational module, in turn. Often used in describing the design of electronic focal plane readout systems. Short for multiplexer. Milliwatt. Mid-wave infrared. Usually includes radiation from about 3 to 6 µm and/or the atmospheric window from about 3.2 to 5.2 µm. A plastic film material often used in the fabrication of multilayer insulation materials for cryogenics and spacecraft. Abbreviation for numerical aperture. One billionth of an ampere. A formation process in mechanical systems intended to result in the minimum need for removal of material. Noise equivalent delta temperature, a figure of merit suitable for infrared thermal imagers. Also abbreviated NE∆T. Noise equivalent irradiance. A measure of the performance of a system in which all of the noise terms are aggregated into one
NEP
newtons NIIRS nm NOAA nonisotropic nonthrusting
ns NTSC NVESD NVL
NVTHERM
Nyquist frequency
obscuration off-axis on-axis OPD
opponent processes optical depth optical element Optical flow
measure that is expressed as the equivalent irradiance producing a signal-to-noise ratio of 1. Defined in units of watts per unit area. Noise equivalent power. A measure of the performance of a detector or system in which all of the noise terms are aggregated into one measure that results in a signal-to-noise ratio of 1. NEP is defined with units of watts. Units of force in the SI system. National Image Interpretability Rating Scale. Abbreviation for nanometer. A commonly used measure of wavelength, defined as one billionth of a meter. U.S. National Oceanic and Atmospheric Administration. Not having the properties of isotropy; that is, not being the same in all directions of propagation. Describing the properties of a target that is not equipped with jet or rocket engines. Often used in characterizing the signature properties of missiles whose engines have burned out or the properties of the product of a rocket, such as a re-entry vehicle, that is not equipped with engines. Abbreviation for nanosecond, one billionth of a second. National Television System Committee, a U.S. national committee responsible for old video standards. Night Vision and Electronic Sensors Directorate, part of the U.S. Army Communication and Electronics Command. The U.S. Army's Night Vision Laboratory at Ft. Belvoir, now called the Night Vision and Electronic Sensors Directorate, part of U.S. Army Communication and Electronics Command. Night Vision Thermal Model, a comprehensive IR sensor performance model developed by NVESD, Ft. Belvoir. Commercial copies are available from Ontar. In sampling theory, this is the highest frequency that can be faithfully reproduced by a sampled system; it is equal to one-half of the sampling frequency, so that two samples are taken per cycle at this frequency. Also called the Nyquist criterion. An opaque blockage to light in the optics of a system. In reference to optics, this can mean either a direction away from the optic axis or an optical system (or element) whose aperture or field is displaced from the axis of symmetry. In reference to optics, an optical system that directs the beam down the optical axis. Optical path difference. Used in describing the performance limitations of optical systems in which the phase front of the light propagating in the system is aberrated by the optics or the turbulent environment outside of the optics. It can also result from deliberate change in the optical path length of an arm of an interferometer. An approach to explaining the eye-brain functions that result in vision. The integration of the absorption coefficient along the path of light in an absorbing medium. A lens, mirror, flat, or piece of optics in an optical assembly. The differential change in the apparent scene caused by movement that is not along the viewing angle. For example, when
optical gain optomechanics outer scale overscan PAL PbS
PbSe
PC PDR
Peltier, Jean Charles Athanase Pfa phenomenology
photoconductive photodetectors
photoemission photolithograph ic photomultiplier
photon photonics
driving, the road just before your hood moves (in angle space) faster than the road at the distant horizon, although both are moving relative to the car at the same speed. The effective radiometric aperture divided by the blur spot size. The field of engineering that addresses the integration of optics with mechanical structures and mechanisms. A property of atmospheric conditions that is a factor in estimating the influence of turbulence. To scan more than is strictly needed, usually to provide a margin of error for pointing or to view a radiometric reference. Phase alternate line, a video standard consisting of 625 lines (only 576 are used for the image) and a 50-Hz update. Lead sulfide, an infrared detector material providing shortwave response. One of the first IR detector materials made, it provides good sensitivity with thermoelectric cooling. Lead selenide. An infrared detector material providing shortwave and mid-wave photoconductive sensing. Like PbS, it was one of the first IR detector materials made. It provides good sensitivity with thermoelectric cooling. Photoconductive. Preliminary design review. A meeting at which early plans and trade studies for system development are presented and reviewed. (1) The discoverer of the basic physics that has led to the development of the thermoelectric cooler. (2) A type of cryogenic cooler named for him. Probability of false alarm, see definition below. In the context of this book, the characteristics of chemical reactions that yield electro-optic signatures from plumes, targets, and background. A type of detector that changes conductance (resistance) on exposure to light. Contrasts with the photovoltaic detector types. A broad description of components that are able to sense the presence of light in a quantitative way. Generally, this category includes photomultipliers, photodiodes, CCDs, and so on. A process involving the emission of an electron from a surface when a photon arrives at the surface. This is the basis of operation of the photomultiplier tube. A process using masks and etching to create very minute (less than 1 µm is common) features. It is widely used by the electronics industry to create integrated circuits. A detection system making use of the photoelectric effect and the amplification of the resulting electron emission from the photocathode surface to provide extremely sensitive detection. An array of these can be used to make imaging systems. A massless bundle of energy traveling at the speed of light, associated with light and light phenomena. The general field of creation, detection, and manipulation of photons of light. Includes virtually all of the modern electro-optic sciences.
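The photon definition above becomes quantitative through Planck's relation E = hc/λ. The sketch below (example wavelength and power chosen arbitrarily, not from the original text) gives the energy per photon and the photon arrival rate for a given optical power.

```python
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_j(wavelength_m):
    """Energy of one photon at the given wavelength, E = h*c/lambda."""
    return H * C / wavelength_m

def photon_rate(power_w, wavelength_m):
    """Photons per second carried by an optical beam of the given power."""
    return power_w / photon_energy_j(wavelength_m)

# One nanowatt of 0.6328-um HeNe light is still roughly 3 x 10^9 photons per second.
print(photon_energy_j(0.6328e-6))        # ~3.1e-19 J
print(photon_rate(1e-9, 0.6328e-6))      # ~3.2e9 photons/s
```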
photopic photovoltaic phytoplankton picopixel
Planck function plume
PMT Poisson’s ratio
power spectral density probability of detection probability of false alarm
producibility projected area
PRR
PSD Pt:Si
pulse tube PV pyroelectric
Characterizing the response of the cones in the human retina. See also scotopic. A type of detector that produces an output voltage difference between two electrodes in response to impinging light. Small plants, usually unicellular, that participate in the absorption, scattering, and fluorescence properties of the ocean. Prefix indicating a fraction of 10–12, as in “a picowatt equals one millionth of one millionth of a watt.” Formally, a picture element, and the term should be strictly applied only to displays. However, it is commonly used to refer to an individual detector on a focal plane array. Refers to Max Planck’s blackbody radiation law. The exhaust emission from a chemical process. In this book, it refers to the exhaust cloud from a car’s tailpipe, the exhaust from a jet engine, and the flames and exhaust from a rocket engine. Photomultiplier tube. A mechanical property of solids that is significant in predicting their structural performance. The ratio of the absolute value of the rate of transverse strain to the corresponding axial strain resulting from a uniformly distributed axial stress. Poisson’s ratio is dimensionless. A mathematical representation of the variations in a measurement plotted in the frequency domain. In the context of this book, it is used to describe scene clutter or vibration. A statistical measurement of the likelihood that an object can be detected in the presence of noise. A statistical measurement of the likelihood that a noise (or nontarget) source will be identified as a target. A term often found in discussions of detection theory and the design and performance assessment of such systems. The measure of the ability and effort required to produce a particular item (system or component) at a given production rate. The effective two-dimensional silhouette of a three-dimensional object. A sphere has a projected area of a circle of the same radius. Production readiness review. A meeting to review and agree that all of the manufacturability, producibility, and testing aspects (including all tooling and capital equipment) are in place to start production (at some defined rate) of a system. Power spectral density. Platinum silicide. An archaic Schottky barrier device used as an infrared detection material. Typically characterized by low noise and relatively low quantum efficiency but high uniformity and ease of production. A variant of a Stirling cooler that replaces the piston in the expander with a “slug” of gas. (1) As used with detectors, photovoltaic. See definition above. (2) As used with optics roughness, peak to valley. A type of detection material that is slow to respond but has a very wide spectral sensitivity range.
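The probability-of-detection and probability-of-false-alarm definitions above are often evaluated, for the simple case of a constant signal in zero-mean Gaussian noise, with the error function defined earlier in this glossary. The sketch below is that common simplified model, included here as an assumed textbook form rather than a formula quoted from this book; the operating point is invented.

```python
import math

def p_false_alarm(threshold, noise_sigma):
    """Probability that zero-mean Gaussian noise alone exceeds the threshold."""
    return 0.5 * math.erfc(threshold / (noise_sigma * math.sqrt(2.0)))

def p_detection(signal, threshold, noise_sigma):
    """Probability that signal plus Gaussian noise exceeds the threshold."""
    return 0.5 * math.erfc((threshold - signal) / (noise_sigma * math.sqrt(2.0)))

# Example: signal-to-noise ratio of 6 with the threshold set at 3 sigma
print(p_false_alarm(3.0, 1.0))        # ~1.3e-3 per decision
print(p_detection(6.0, 3.0, 1.0))     # ~0.999
```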
quantum efficiency
QWIP radiometrics radiometry rangefinder
raster Rayleigh scattering
refractivity reimaging
retinal eccentricity
RF Ritchey
RMS ro RoA
RSS RTI Rytov, Sergei Mikhailovich
A figure of merit describing the conversion efficiency from a photon to a usable free carrier (electron or hole). Quantum efficiency of 100 percent implies that all the photons incident generate carriers that can be read out. Quantum well infrared photodetector (or photoconductor). The science of measuring the flow of electromagnetic energy in quantitative terms. The study of the emission, transmission, and absorptance of light energy. An active or passive optical system that is capable of measuring the distance between points. A laser radar (LADAR) performs this function by illuminating the target and measuring the time delay until the reflected light appears at a sensor co-located with the transmitter. Many other systems can be employed to make range-finding measurements. A pattern generated by scanning one line of a two-dimensional display followed by another. A scattering process named after the theory developed by Lord Rayleigh (John William Strutt) to describe the scattering properties of the atmosphere. Generally, Rayleigh scattering effect decreases as wavelength increases (which is one reason why the sky is blue and not red). The same as index of refraction. An optical system that has more than one optical focus region (a focal plane in the optical context, but not in the hardware context) in the optical path. The optics form images in more than one location. Reimaging systems are common in cryogenic systems and are used with cold shields to limit the impingement of stray thermal emission from the background as well as telescope itself. The term eccentricity used in the discussion of the optics of the eye does not refer to the more common usage in defining ellipses. Rather, it refers to the position of some feature of the eye, measured from its optic axis, in degrees. Radio frequency. Used generally with the name Chretien as in Ritchey–Chretien. This is a special implementation of a Cassegrain two-mirror telescope that has the property of possessing no spherical aberration or coma. Root mean squared. A common abbreviation for Fried’s parameter. The resistance-area product. The “o” refers to the measurement taken at zero bias. This is a handy semiconductor figure of merit for evaluating material quality. For detector applications, this allows calculation of the Johnson noise and relates to sensitivity (the higher the RoA, the better). It is very temperature dependent, (e.g., can change several orders of magnitude in InSb for a 10 or 20 K change). It is pronounced “are naught a.” Root summed squared. Research Triangle Institute, found at www. rti.org. A theoretical physicist prominent in the development of the theory of radiation propagation in turbulent media. His name is con-
saccades SAR Schmidt Schottky barrier
scotopic SCR SECAM seeing
Sellmeier formula SEM-E semiconductor
SFE Si
SiC
signal-to-clutter ratio slewed
SMPTE Snell, Willebrod
SNR
SOFIA SOI specific heat
nected with a limit to the amount of turbulence that can be present before the “weak turbulence” theory does not apply and more complex modeling methods must be employed. Small motions of the human eye. Search and rescue, or sample aspect ratio A telescope design that includes a correcting plate to provide high-quality imaging over a large field. A class of semiconductor technology and process. It refers to a class of detectors that work by internal semiconductor photoemission. Examples are Pd:Si, Pt:Si, and Ir:Si. Characterizing the properties of the response of the rods in the human retina. See also photopic. (1) Signal-to-clutter ratio, see below. (2) System concept review. Systeme Electronique Couleur Avec Memorie, a 1967 video standard calling for 625 lines and 25 frames per second. A measurement of the viewing “goodness” at a given place and time. This is frequently used by astronomers and relates to the turbulence effects on ground-based telescopes. A method for estimating the index of refraction of transparent materials. Standard electronics module. A standardized circuit board size and I/O architecture, popular for American military systems. A material with a concentration of free carriers that allows it to be a conductor or resistor and allows this property to be controlled by an applied voltage. Surface figure error. Silicon. The most commonly used material in the development of semiconductors. Also used as a material for visible and near infrared detectors and, when doped, short, mid. and long wavelength infrared focal planes. Truly nature’s gift to us all. Silicon carbide. A ceramic material of considerable interest in the development of modern optics, light emitters, and optical structures, particularly because of its important mechanical properties and light weight. The ratio of the target signal (e.g., in photo-generated electrons) to the signal (again, in electrons) caused by sensing the clutter. Pointed, as in the slewing of a sensor to increase its field of regard. Usually requires that the entire instrument be repointed, such as by using a gimbal or steering mirror. Society of Motion Picture and Television Engineers, found at www.smpte.org. A venerated optical scientist who lends his name to Snell’s law, which describes the refraction of light that occurs as it passes from a volume of one index of refraction to another. Signal-to-noise ratio. The Stratospheric Observatory for Infrared Astronomy, an airborne observatory. Silicon on insulator. Change in thermal energy with a change in temperature. Units are J/kg-K.
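Snell's law, cited in the entry above, is easy to exercise numerically. The sketch below (with an assumed glass index of 1.5, chosen only for illustration) refracts a ray across an air-to-glass interface.

```python
import math

def refracted_angle_deg(n1, n2, incident_deg):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2). Returns None past the
    critical angle (total internal reflection)."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# A ray striking an air-to-glass (n = 1.5) interface at 30 degrees bends toward the normal.
print(refracted_angle_deg(1.0, 1.5, 30.0))   # ~19.5 degrees
```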
specific stiffness
SPIE
SPIRITS split Stirling
SPOT SRR
steady-state thermal distortion steradians
stilb Stirling Strehl, Karl
subaperture
subpixel
subtense superesolution
SWIR
Specific stiffness calculates the inherent stiffness of a material. Rigid materials resist deformations caused by polishing, mounting, and gravity (tilt). The specific stiffness is the ratio of Young’s modulus and density and has units of newton-meters per gram. The International Society for Optical Engineering. An international professional society that deals of all subjects pertaining to this book, found at www.spie.org. Spectral Infrared Targets and Scenes. This is a software modeling tool to estimate spectral radiance from targets and backgrounds. A Stirling cooler of an architecture that has the expander separate from the compressor and powered by pressure differentials in a gas line. Satellite Pour l’Observation de la Terre (a French remote sensing system). System requirements review. An early meeting in the development of a system, during which the requirements for the system that have been derived from performance requirements can be reviewed and improved. Steady-state distortion per unit of input power and is defined as the coefficient of expansion divided by the thermal conductivity. Units are 10–6 m/W. The unit of measure of solid angle. Computed as the ratio of the area of the cap of a sphere to the square of the radius of the sphere. A measure of light in the photopic/scotopic system, equal to 10,000 candelas/m2. A type of thermodynamic cycle, named after the British scientist. It is commonly employed in mechanical cryocoolers. An optics researcher who lends his name to the oft-quoted measure of the performance of an optical system (Strehl ratio), in which the intensity that is projected or collected is compared with that of a perfect system measured on-axis. A portion of the total aperture, generally a circular portion. Often used in discussing the properties of the Shack–Hartmann wavefront sensor, which is made up of many independent and abutted optical systems, each equipped with its own lens and focal plane. Usually used in describing the performance of multi-detector systems in locating targets to an accuracy better than can be achieved by the field of view of a single detector pixel alone; usually by calculating the “centroid” of the light falling on a group of detectors or observing the rise time of the signal as the blur spot crosses pixels. The angular extent of a system, such as the field of view of a sensor. The process of providing, by analysis, information on the location or shape of a target exceeding that which can be obtained by the optical system alone. Usually implies a positional accuracy better than the diffraction limit. See also subpixel. Short-wave infrared. The part of the electromagnetic spectrum with wavelengths longer than the near infrared (≈1.1 µm) and shorter than mid-infrared (≈3 µm).
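The steradian definition above (spherical-cap area divided by radius squared) has a closed form for a right circular cone, which the short sketch below evaluates for a hemisphere and for a narrow cone (example angles only; this is an added illustration).

```python
import math

def cone_solid_angle_sr(half_angle_rad):
    """Solid angle of a right circular cone: the area of the spherical cap divided
    by the radius squared, which works out to 2*pi*(1 - cos(half angle))."""
    return 2.0 * math.pi * (1.0 - math.cos(half_angle_rad))

print(cone_solid_angle_sr(math.pi / 2))          # hemisphere: 2*pi ~ 6.283 sr
print(cone_solid_angle_sr(math.radians(1.0)))    # ~9.6e-4 sr for a 1-degree half angle
```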
TDI TE TEC temporal processing thermal conductivity thermal diffusivity
TIA time delay and integration
TIS torr tracker
transient thermal distortion transmission transmissive transmittance
T-rays
Trichromatic theory Troland TTL turbulence
uncooled
Time delay and integration, see below. Thermoelectric, as in a thermoelectric cooler. Thermoelectric cooler. Signal or image processing that operates on features in the time domain. Frame-to-frame differencing is an example. Rate of heat flow through a material with a given thermal gradient. Units are W/m-K. The greater the thermal conductivity, the faster a material will reach thermal equilibrium. A figure of merit that represents the ability of a material to dissipate thermal gradients. D is equal to the thermal conductivity divided by the density and specific heat. Units are 10⁻⁶ m²/s. Large values are preferred for optical elements. Total included angle, also transimpedance amplifier. A signal-enhancing process whereby a scan is synchronized to the integration of several detectors in the scan direction. As the blur spot moves from one detector (in the scan direction) to another, the signal is added (usually in the analog domain). The signal-to-noise ratio improves by the square root of the number of TDI detectors in the scan direction. Total integrated scatter. A measure of pressure equal to 133.32 pascals, the latter being a more common modern unit of pressure. An electro-optic system designed to locate, follow, and report on the position of a target. The information from the tracker is often used to control some other system such as an astronomical telescope, weapon system, or the like. Transient distortion per unit of input power, defined as the coefficient of expansion divided by the thermal diffusivity. Units are s/m²-K. The measure of the amount of radiation that passes through a substance. Having the property of being partially transparent. Must be specified as to wavelength. The ratio of the amount of radiation of a particular wavelength that passes through a path in a material to the amount incident on the material. Terahertz electromagnetic radiation, or radiation of approximately terahertz frequency. Generally, this is defined as wavelengths from about 80 to several hundred micrometers. An approach to explaining the eye-brain functions relating to vision. A measure of retinal illuminance equal to about 2 × 10⁻³ lux. Transistor-to-transistor logic (usually at 3 to 5 V). In atmospheric optics, the presence of cells of index of refraction that vary from place to place and time to time, causing refraction of light rays, ultimately limiting the ability of images to be formed. Systems or components in which operation is adequate at the ambient temperature of the surrounding system, without mechanical coolers. Also, systems or components in which no special effort
unitless unobscured
upwelling
UV
Verdet constant Veridian vidicon
visibility
visible spectrum
VME wafer wave wave number wavefront
wavelength WFE WFOV white noise Young’s modulus of elasticity ZnS
is made to include complicated cooling systems such as cryogenic fluids or gases, refrigerators, or other mechanisms. Often, uncooled systems employ thermoelectric coolers or phase-change materials. A dimensionless quantity; often a ratio of two quantities with the same units. Often used to describe optical system designs in which there are no structures or other nontransmissive parts in the optical path. Off-axis mirror telescopes and most refracting optical systems satisfy this definition, whereas the common on-axis two-mirror telescope does not. A directional definition commonly used in describing the flow of electromagnetic radiation in the ocean and atmosphere, with "up" referring to away from the center of the Earth. Ultraviolet. This is part of the electromagnetic spectrum with wavelengths longer than X-rays (≈0.1 µm) but shorter than visible light (≈0.4 µm). A measure of the rotation of the electric vector in materials that exhibit Faraday rotation. An organization (superseding ERIM) active in electro-optical endeavors, www.veridian.com. An image collection technology that uses a scanning electron beam working in conjunction with a photoconductive detector to form an image. The quantitative measure of the ability of human observers to see standard objects at a distance through atmospheres containing rain, snow, or other obscurants. The part of the electromagnetic spectrum that is approximately visible to the human eye. Generally, this is from about 0.35 to 0.76 µm. Versa Module Europa. A standardized circuit board size and I/O architecture, popular with commercial systems. A slice of a boule, usually used to delineate a material disk suitable for semiconductor processing. A periodic undulation in a field. A measure of the frequency of electromagnetic radiation. Computed by the formula 1/λ, or sometimes 2π/λ (the angular form). Characterization of a beam of light, usually for the purpose of characterizing the degradations induced by passage through a nonperfect optical system or transmission medium. The distance from one peak in a field of waves to the next. Wavefront error. Describes the degraded properties of light propagating through other-than-perfect optics. Wide field of view. Noise that is not frequency dependent (i.e., has a flat PSD). This is also sometimes simply called modulus of elasticity. Measure of the rigidity of a material. Units are 10⁹ N/m² (GPa). Large values are preferred. Zinc sulfide. A common IR refractive material.
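The wave number defined above is most often quoted in cm⁻¹ in this book (for example, the LOWTRAN and MODTRAN spectral resolutions). A small conversion sketch, added here with illustrative values:

```python
def wavenumber_per_cm(wavelength_um):
    """Spectral wave number in cm^-1 for a wavelength given in micrometers."""
    return 1e4 / wavelength_um

def wavelength_um(wavenumber_cm):
    """Inverse conversion: wavelength in micrometers from wave number in cm^-1."""
    return 1e4 / wavenumber_cm

print(wavenumber_per_cm(10.0))   # 10 um -> 1000 cm^-1
print(wavelength_um(2500.0))     # 2500 cm^-1 -> 4 um
```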
Index
A Abbe number, 260, 262, 310, 311 Absolute calibration, 277 Absolute magnitude, 43 Absorptance, 78, 251, 252, 334, 370, 389, 391, 404 Absorption coefficient, 117, 167, 211, 230–233, 235, 237, 334, 356, 358, 401 atmospheric gases, 34, 45–48, 50, 57, 73, 74, 79, 167, 172, 173, 188, 273, 274, 341, 356, 375, 401 ice, 233, 234 Acousto-optic tunable filter (AOTF), 244, 389 Acquisition, 1, 5, 11, 13, 390 Acquisition, tracking, and pointing, 1–29, 132, 390 Activation energy (AE), 183, 217, 218 Active-pixel sensor (APS), 109, 353, 354, 355, 356, 361, 365, 366, 367, 368, 370, 390 Adaptive optic, 31, 32, 35, 36, 41, 46, 51, 52, 53, 58, 63, 66, 200, 203, 206, 208, 263, 389 actuators, 35, 36, 59, 63, 200, 206, 208 wavefront sensor, 59, 64, 196, 406 Adverted vision, 156, 158 Aerodynamic heating, 347 Airmass, 41, 42, 58, 76, 374, 389 Airy disk, 20, 249, 250, 287, 265, 259, 170, 355, 390 Albedo, 190, 231, 235, 390 Algal types, 232, 390 Allard’s law, 48, 390 Alphanumeric, 4, 6, 22–24 Amdahl’s law, 216 American Precision Optics Manufacturers Association (APOMA), 299, 390 Analog-to-digital converter (ADC), 109, 389 Andrade’s beta law, 203 Anomalous trichromats (trichromacy), 144, 390 ANSI, 124, 125
Antarctic glacial ice, 233 Antiblooming, 370, 371 Antireflection coating, 117, 259, 390 Aperture shape factor, 186 Aperture size for laser beams, 167 Apodization, 390 Armored personnel carrier, 3 Arrhenius, 183, 217 Arrhenius equation, 217, 218 Aspect ratio, 125, 135, 199, 200, 315, 326, 371, 372, 394, 405 Astigmatism, 197, 243, 244, 245, 266, 301, 302, 304, 389 Astronomical optical bands, 374 Atmospheric pressure, 168, 340, 343, 375 see Index of refraction structure constant transmission, 50, 168, 279, 295, 325 turbulence, 35, 172, 389, 393, 36, 58, 178 Atmospheric structure coefficient or function, see Index of refraction structure constant Avalanche photodiode (APD), 11, 105–107, 365, 390, 391
B Background flux, 97, 100, 107, 293, 363 high, 70 low, 100, 363 subtraction, 71, 78 Background limited in performance (BLIP), 95, 107, 278, 316, 392 Backscatter, 235, 237, 391, 339 Baffle, 96, 315, 324, 331, 391, 393 Baffle attenuation, 315 Bake-out, 98 Balmer alpha line of hydrogen, 310 Balmer beta line of hydrogen, 310 Bandgap, 109, 110, 357, 368, 370, 391
Bandpass optimization, 277 Base fog, 360 Bathymetry, 230, 234, 235, 391 Beam quality, 174, 175, 181, 186, 188, 194, 300, 392 Beer’s law, 44, 47, 48, 51, 167, 168, 172, 194, 231, 233, 236 Benham’s disk, 139, 391 Bessel functions, 171, 255 Bidirectional reflectance distribution function (BRDF), 268, 315, 324, 331–333, 339–341, 392 Bioluminescence, 239, 240 Biota, 239 BK-7, 306, 310 Black coatings, 289, 324 Black paint, 333, 336, 341 Blackbody, 33, 34, 74, 76–77, 82, 100, 113, 114, 119, 120, 182, 225, 273–275, 280, 281, 283, 289–295, 340, 386, 391, 395, 397, 403 Blackbody accuracy, 289 temperature, 113, 114, 290 temperature, moon, 34 temperature, sun, 33, 34, 293 Blaze angle, 254 Blow-down expander, 85, 86 Blue sky, 76, 334 Blur spot, 21, 24, 171, 243, 257, 271, 287, 296, 402, 406, 407 Bolometer, 102, 103, 392, 273 Bolometric effect, 101 Boxcar amplifier, 112 Breakdown voltage, 105 Brightness, background, 74 laser, 165, 180, 181, 182, 187, 299 stellar, 40, 41, 399 target, 343, 354 visual and display, 7, 33, 123, 124, 127–129, 139, 147, 162, 272, 282, 397
C Calibration, 77, 277, 282, 283, 290 Camera obscura, 271 Candela, 141, 146, 149, 150 Cassegrain telescope, 41, 242 Cataract, 152 Cauchy equation, 196, 198, 212 Central obscuration, 250, 265 Channel stops, 359 Charge-coupled device (CCD), 13, 11, 353, 354, 109, 135, 136, 350, 351, 355, 356, 357, 365, 366, 368, 370, 371, 375, 392, 359–363 noise, 353, 356, 358–363, 365, 366 size, 355, 375 Charge skimming, 121, 392 Charge transfer, efficiency and inefficiency, 205, 206, 356, 359, 360, 361, 366, 368, 393 Charge-injection device, 351, 392 Chlorophyll, 232–235 Circle of least confusion, 243 Circular plate, 210, 269 Circular variable filter (CVF), 246 Classification, 6, 12, 13
Cleaning optics, 302 Clear lake waters, 229 Clogging, 93 Clutter, 4, 5, 7, 10, 11, 69–74, 78, 79, 83, 84, 111, 119, 135, 153, 225, 278, 279, 393, 403, 405, 395–396 infrared, 79 leakage, 71 low, 72 moderate, 72 signal to, 72, 73, 78, 83, 278, 405 variance, 72 CMOS APS, 353, 361, 365, 366, 368 2 , see Index of refraction structure constant Cn Coastal waters, 27, 229, 231 Coating, 28, 89, 98, 101, 102, 117, 201, 242, 254, 255, 259, 260, 267, 289, 302, 303, 307, 308, 315, 321, 324, 331, 332, 335, 337, 339, 350, 370, 390 Coblentz, William, 274 Coefficient of materials expansion, 205 Coefficient of performance (COP), 97, 393 Coefficient of thermal expansion (CTE), 197, 205, 393 Coherence scale, 178 Cold shield, 85, 89, 90, 95–97, 100, 107, 289, 324, 393, 394 Collapse, structural, 78, 88, 202 Collimator, 303 Color blindness, 138, 144, 390, 394, 395 Coma, 243, 244, 301, 389, 392, 404 Common module, 103, 393 Complementary error function, 172 Computer punch cards, 308, 309 Cone, in eye, 6, 141, 142, 144, 158, 260, 262 Conrady formula, 199 Containment bottle, 94 Contrast, 2, 5–7, 18–20, 22, 29, 48, 49, 67, 74, 79, 84, 113, 126–129, 133, 134, 142, 146, 147, 149, 152–154, 156, 157, 160–163, 236, 239, 278, 282, 293, 301, 323, 358, 400 apparent, 128 mero-range, 128 minimum resolvable, 323, 400 modulation, 128 perceptional, 128 physical, 128 transfer function, 7 Control loop bandwidth, 208 delay, 143 Convex optic, 305 Correlated double sampling (CDS), 358, 360, 361, 366, 392, 393 Cost optics, 262–264 photon, 218 Creep strain, 203 Crickets, 219 Critical dimension, 6, 9, 10, 12, 13, 323 Critical flicker frequency (CFF), 148 Cross section, 49, 168, 169, 191, 248, 330, 342, 343 Crossfeed speed, 304 Crown glasses, 310
CRT, 123, 128, 130, 148, 149, 162, 349, 379, 393 Cryocooler, 86, 90, 91, 99, 393 Cryogens, 376, 394 liquid, 91 solid, 91 Cryostat, 87, 93, 94, 393 Cube corner, 168, 169, 393 Cutoff wavelength, 102, 110, 115, 122, 288 Cycles of resolution, 5 Cycles per milliradian, 15, 16, 160, 259 Cylindrical cavity emissivity, 283, 284
D D*, 107, 108, 112, 115, 117, 288, 289, 317, 325, 394 Dall–Kirkham, 245, 394, 398 Dark current, 108, 111, 116, 122, 360, 361, 363, 365, 366, 368, 369 Dark level, 358 Data latency, human eye, 142–143 Dawes limit, 316 Deflection, 200, 206, 207, 209–211, 269, 382 Defocus, 243, 247, 257, 266, 267, 301, 322, 325 Density, optical, 384 Depletion, CCD, 351, 370 Depletion scaling, CMOS, 356, 357 Detection criterion, 6, 10, 13 Detector angular subtense (DAS), 26, 260, 394 Deterministic microgrinding, 298, 306 Deuteranomalia, 143 Deuteranopia, 143, 144, 394 Dewar, 85, 97, 98, 100, 103, 308, 394 radiant input, 100 cold shield, see Cold shield Diabetic retinopathy, 152 Diamond turning, 197, 297–299, 304, 308, 310 Diffuse, 81, 236, 290, 291, 341, 342, 346, 390 Diffuse attenuation coefficient, 229, 231, 233, 238, 239, 394 Diffuse backscatter coefficient, 237, 240 Digitization, 102, 108, 109, 224, 353, 358, 377, 389 Digitizer, 108 Dip coating, 201 Dispersion, optical, 199, 220, 310, 311, 394 Display gamma, see Gamma Distance to horizon, 220 Distortion, 29, 59, 83, 206, 207, 209, 244, 300, 383, 389, 406, 407 Distributed aperture, 394 Dither, 15 Dolbear, A. E., 219, 394 Dome, 58, 326, 327, 347 collapse, 201, 202 Doped Ge, 102 Doped silicon, 102, 359 Dose rate, 368 Downwelling, 47, 62, 82, 394 Drag coefficient, 326 DRI, 1, 2, 5, 9, 11, 15, 124, 127, 395 Dust, 67, 168, 302 Dynamic thermostability, 383 Dyschromatopsia, 143, 144
E Eccentricity, human eye, 141, 155, 156, 158, 404 Effective emissivity of a cylindrical cavity, 283 Effective focal length, 250, 251, 262, 265, 295, 322, 395 Elasto-optical coefficients, 191 Electromechanical design, 196 Electronic zoom, 15, 287 Electro-optics, 2, 31, 32, 85, 165, 225, 277, 297, 313, 389, 395, 398 Emissivities, 77, 78, 335, 337, 376, 395 Emissivity, 35, 69, 75–79, 81, 82, 89, 90, 98, 100, 219, 273, 277, 278, 280, 281, 283, 284, 289, 290, 295, 315, 324, 330, 331–332, 337–340, 335, 344, 346, 347, 391, 395 Encircled energy, 171, 196, 248 Energy bandgap, 109, 110, 357, 368, 370, 391 Energy flow into eye, 145 Equivalent background input (EBI), 362, 363, 369 Error function, 8, 9, 172, 395 Etalon, 252, 266, 267, 395 Etendue, 174, 179, 286–288, 395 Euphotic, 232, 395 European Infrared Space Observatory, 85 European Southern Observatory, 206, 263, 351
F f/#, 96, 171, 243, 244, 247, 250, 251, 257, 260–263, 270, 295, 296, 355, 395 Fabry–Perot, etalons, 251–252, 266, 395 False alarm, 4, 8, 72, 172, 384, 395, 402, 403 Fermat’s principle, 258 Field curvature, 243–245, 271, 389 Field of view (FOV), 2, 3, 10, 25–26, 47, 73, 76, 83, 84, 95, 97, 107, 135, 153–154, 163, 235, 236, 243, 244, 247, 253, 257, 258, 260, 286–288, 315, 322–324, 327, 395–397, 399, 406, 408 Figure change, 203 Fill factor, 17, 355, 364, 370 Film, 87, 102, 125, 252, 260, 271, 272, 282, 313, 314, 349–350, 360, 362, 400 Filtering, 70, 71, 107, 112, 266, 296 Finesse, 252, 266 Flatness, optical, 252, 303 Flint glasses, 310 FLIR, see Forward-looking infrared Fluorescence, 194, 229, 335, 403 Flux density, 100, 363, 395 Focal plane array (FPA), 25, 85, 101, 109, 113, 365, 396, 403 Fog, 47, 51, 57, 67, 168, 191, 360 Forward-looking infrared, 6, 10, 101, 302, 323, 396, 409, 410 Foucault knife-edge test, 241, 301, 302, 396 Fovea, 137, 141, 153, 396 Frame differencing, 78, 79, 396, 407 Framingham eye study, 152, 153 Fraunhofer line, 33 Free spectral range, 253, 267 Fried’s parameter, 33, 35, 36, 52, 53, 57–59, 63, 64, 178, 188, 396, 404 Fringe movement, 305 Fringe visibility, 318–319
Fringes, 167, 298, 305, 318, 319, 385 Full width, half maximum (FWHM), 174, 396 Fundamental or natural frequency, 197, 207, 208 Fused silica, 211, 212, 306, 381, 382, 383
G Galactic latitude, 37, 38 Galactic longitude, 37 Gallium aluminum arsenide (GaAlAs), 183, 184 Gamma, display, 128, 129, 397 Gas bottle, 85, 88, 93–95 Gate clocks, 130, 359 Gaussian beam, 167, 170, 171, 173, 174, 179, 190, 286 focus, 301 noise, 8 radius, 170 General image quality equation (GIQE), 317 Geometric mean ground sample distance, 317, 397 Gifford-McMahon refrigerator, 86, 87 Gimbal, 2, 14, 139, 320, 399, 405 Glaucoma, 152 Graphite-reinforced plastic, 205 Grating blockers, 253 efficiency, 254 Gray levels, 129 Graybody, 76, 280, 340, 397 Greek alphabet, 379 Greenwood frequency, 52, 208, 397 Ground sample distance, 28, 317, 397 Guide stars, 26, 31, 58 Gustafson’s law, 216
H Hagen–Rubens relationship, 337 HDTV, 124, 132, 349, 353, 359, 363, 366, 397 HeNe, 397 Heads-up display, 123 Heads-down display, 123 Heisenberg’s uncertainty principle, 248, 250 Helvetica font, 23, 24 Herzberger formula, 198, 397 HgCdTe, see Mercury cadmium telluride High-definition television, 124, 132, 349, 353, 359, 363, 366, 397 Highly polished aluminum, 78 HITRAN, 168, 399 Hollow waveguides, 254 Homo sapiens, 153 Horizon, 41, 220, 402 Horizontal sweep, 130 HTVL, 135, 136, 371 Hubble Space Telescope, 199 Human body, signature, 295, 338 eye cone density, 141, 158, 396 common diseases, 152–154 flicker frequency, 148 resolution, 139, 147, 149, 153 spectral response, 142, 158 Humidity, 54, 62, 66, 67, 76, 205, 227
Hyperfocal distance, 247, 256, 257 Hyperspectral imagery, 1, 237
I Ice, see Absorption, ice Ice cream, 227 Identification, 1–3, 5, 6, 9, 12–16, 70, 395, 398 Identification, chemical, 313 Image intensifiers, 6, 34, 354, 361, 362 Image processing, 5, 69, 72, 113, 127, 146, 163, 216, 320, 322, 353, 396, 407 Impact ionization, 359 Index of refraction, 52, 53, 58, 192, 193, 198, 199, 255, 259, 260, 271, 286, 301, 305, 310, 337, 389, 394, 397, 404, 405 air, 59, 60, 61, 176, 220, 393, 407 seawater, 81, 234, 236, 237, 238, 384 structure, or Cn², 35–36, 52–58, 60, 63, 67, 172–173, 176–178, 180, 393 Indium antimonide, 74, 102, 103, 116, 119, 397, 404 Indium gallium arsenide, 87, 102, 227, 397 Infrared Astronomical Satellite, 85, 398 InSb, see Indium antimonide Instantaneous field of view, 10, 247, 253, 260, 286, 315, 322, 323, 397 Interpolation, 14 IRAS, see Infrared Astronomical Satellite IRIA, 71, 104, 330, 398 ISO speed, 360
J James Webb Space Telescope, 32 Jansky, 37, 38, 374, 398 Jet aircraft engine, 340 Jet plume, 340, 341 Jitter, 15, 60, 78, 186, 187, 267, 362 Johnson criteria, 4–7, 9, 11, 13, 17, 29, 398 Johnson noise, 404 Joint Army, Navy, NASA, Air Force (JANNAF) plume radiance code (PLURAD), 330 Joule–Thomson, 85–87, 93, 94, 392, 398
K kTC noise, see Reset noise Keck, telescope, 31, 263, 264, 352, 398 Kell factor, 6, 22, 131, 135 Kelvin, Lord, 86 Kirchhoff, Gustav, 242, 273 Kirchhoff’s law, 331 Knoop scale, 307 Koschmieder’s rule, 48
L LADAR, 185, 398, 404 Lagrange theorem, see Etendue Lambert, Johann, 273, 385, 399 Lambertian, 81, 90, 100, 290, 291, 315, 324, 331, 332, 333, 338–343, 346, 390 Lambert’s cosine law, see Lambertian Landau–Levich equation, 201 LANDSAT, 237 Lapidary grinding process, 298, 308
Laser, 165–194, 53, 56, 163, 229, 230, 234, 266, 282, 319, 342, 380, 389, 391, 392, 396, 397–399, 404 cross section, 168, 330, 342, 343 diodes, 124, 182, 183, 399 invention of, 165–166 lines, 168, 380 pulse duration, 184 reliability, 182, 183, 184 rods, 191 Laser beam aperture size, 167 divergence, 173, 184, 185 intensity, 186–189 pointing, 189 spread, 48, 174, 175, 177–179, 186, 187 wander, 172, 180 Law of maximum pain, 224 Law of reflectance, 257 Lead selenide, 102, 103, 119, 402 Lead sulfide, 102, 119, 121, 402 Learning curve, 221, 264 LEDs, see Light-emitting diode LIDAR, 185, 213, 398 Light speed, 228, 235, 386, 397 Light-emitting diode, 183, 399 Line pairs, 4, 6, 132, 259 Line-of-sight noise, 24 Lines of horizontal resolution, 371, 372 Lines of resolution, 133, 135, 354, 371, 372 Liquid-fueled rocket, 330 Liquid mirrors, see Mirrors, liquid Low-light cameras, 316, 362 LOWTRAN, 168, 399 Lumen, 145, 146, 364, 385, 390 Luminosity function, 142 Lunar radiance, 34 Lux, 150, 157, 275, 282, 360, 379, 385, 399, 407
M Mach number, 347 Macular degeneration, 152 Magnitude, 374 apparent, 43 Material removal rate, 304, 306 Mean time between failure (MTBF), 182 Mean time to failure, 86, 217, 400 Mechanical bending resistance, 382 Mechanical stability, 205 Melloni, Macedonio, 277 MEMS, 226, 227 Mercury cadmium telluride, 74, 87, 98, 102, 103, 109, 110, 116, 119, 121, 122, 227, 397 Metal layer insulation, 98 Metal mirrors, 196, 203 Metal reflectivity, 337 Microbolometer, 101, 103 Microchannel plate (MCP), 362, 363, 364, 369 Mie scattering, 49, 66, 67 Military Sensor Symposiums, 104 Minimum f/#, 261, 262 Mirror, 196, 197, 199, 200, 204–208, 212, 213, 252, 269, 298, 331, 392, 401 deflection, 269
deformable, 63, 207, 208 figure, 298, 300, 301 liquid, 212 near net shape, 196, 213, 400 spin cast, 196, 212 support criteria, 206 Missile plume, 280, 329, 330, 343–345, 403 Modal representation, 64 MODTRAN, 66, 76, 168, 399, 400 Modulation transfer function, 7, 12, 15, 16, 258, 259, 285, 315, 400 Mohs scale, 307 Monochromatic vision, 144–146 Moon, see Lunar Moore, Gordon, 222 Moore’s law, 222 Motion picture effect, 147 MPEG2, 124, 350 MTF squeeze, 16 Multilayer insulation, 98, 398, 400 Murphy, Edward A., 223 Murphy’s law, 223 MWIR extinction, 50
N Narcissus effects, 321, 322 National Image Interpretability Rating Scale, 27–29, 317, 318 National Institute of Standards and Technology (NIST), 275, 283 National Television System Committee, 123–126, 129, 130, 132, 133, 135, 350, 354, 371, 387, 388, 401 Nd:YAG, 172, 191, 192, 380 Near net shape, 196, 213, 400, 401 Night Vision and Electronic Sensors Directorate, 2, 6, 12, 401 Noise 1/f, 73, 111, 113, 118, 119, 120, 122, 289, 358, 394 as function of temperature, 109, 110, 113–116, 118, 121, 122, 356, 357, 358, 362–363, 365, 369 bandwidth, 112, 118, 120, 366, 394 CCD, 360 factor, 363, 364 figure, 363, 364 fixed pattern, 113, 114, 119, 353, 358, 360, 366, 396 Johnson, 111, 118, 404 quantization, 224 readout, 109, 122, 289, 360, 361, 363 shot, 111, 118, 122, 358 spurious, 359, 365 thermal, 97, 117 white, 4, 8, 84, 118–120, 358, 384, 408 video, 133, 136, 354, 365, 366 Noise equivalent angle, 24, 25, 64 Noise equivalent delta temperature, 114, 285, 288, 289, 400 Noise equivalent photon flux density (NEDQ), 363 Noise equivalent power, 112, 394, 401
Noise equivalent temperature difference (NETD or NE∆T), see Noise equivalent delta temperature Noise ratio, signal-to-, 2, 4, 8, 10, 384, 405 Nonuniformity, 25, 110, 113, 114, 122, 358, 364 Normalized difference vegetation index (NDVI), 77–78 North Atlantic Ocean, 232 NTSC, see National Television System Committee Number of stars, 38–44 Numerical aperture, 22, 170, 171, 260, 400 NVTHERM, 6, 17, 323, 401 Nyquist criterion, 131, 320, 401 Nyquist frequency, 401
O Obscured aperture, 250 Observation tasks, 1, 2, 5, 9, 11, 15, 124, 127, 395 Off-axis rejection, 324 Opponent process theory of vision, 162, 401 Optical alignment, 308 augmentation, 154, 169 figure, see Mirror figure invariant, see Etendue material properties, 381–383 path length, 193, 401 properties of water, 229–240 signature code (OSC), 330 Optics manufacturing tolerances, 385 Orientation, see also DRI, 6 Overlap, 150, 253, 319, 320 Oversizing an optical element, 307 Overwhelmingly large telescope (OWL), 263 Oxford cryocooler, 86, 87
P PbS, see Lead sulfide PD, see Probability of detection Peak-to-valley (PV), 206, 265, 300, 403 Perimeter, used in diffraction calculations, 248 Phase alternating line (PAL), 124 Phase conjugation, 64 Phosphorus, 102, 183, 368 Photoconductive, 98, 101, 102, 115, 391, 397, 402, 408 Photoemission, 391, 402, 405 Photointerpreters, 28, 317 Photolithography yield, 226, 227 Photometry, 273 Photomultiplier tube, 367, 369, 402, 403 Photopic, 141, 142, 145, 146, 150, 155, 157, 158, 399, 403, 405, 406 Photosynthesis, 334 Pigment, 144, 162, 232, 233, 334 Pitch hardness, 308 Pixel aspect ratio, 125 Planck function, 39, 40, 100, 273, 274, 278, 280, 291, 292, 293, 403 Planck, Max Karl Ernst Ludwig, 273 Plano-convex lens, 192, 270, 271 Plume, 280, 329, 330, 340, 341, 343, 344, 345, 403
Plume, rocket, 329, 330, 344 thrust scaling, 343 Pointing, 1, 2, 60, 169, 180, 189, 190, 197, 303, 320, 326, 342, 390, 391, 402 Pointing of a beam of light, 189 Poisson statistics and noise, 11, 225, 316 Poisson’s ratio, 202, 381, 403 Polarization, 184, 198, 245, 254, 395 Power spectral density, 73, 83, 84, 300, 403 Preattentive, 147 Presample blur, 17 Pressure, 82, 91, 94, 95, 97, 99, 200, 208, 309, 347, 398, 400, 407 air, 41, 42, 59, 60, 61, 62, 66, 76, 168, 340, 343, 375 axial, 209, 269 buckling, 88, 89, 202 collapse, 88, 201, 202 Preston’s law, 309 Probability criteria, 11 Probability of chance, 9, 11, 19 detection, 2, 4, 8, 11, 13, 19, 20, 134, 172, 173, 322, 323, 395, 403 false alarm, 4, 8, 72, 172, 384, 395, 402, 403 pointing, 190 Protanomalia, 143 Protanopia, 143 Psychophysics, 162 Psychometric function, 18, 19 Pt:Si, 74, 102, 113, 399, 403, 405 Pulse stretching, 190, 191 Pupil size, eye, 150, 154–157 P-wells, 368 Pyrex®, 197, 200, 202, 207, 269, 270, 381–383 Pyroelectric, 102, 115, 399, 403
Q Quad cells, 25 Quantization error, 109, 224, 225 Quantum efficiency, 11, 102, 105, 107, 108, 116, 117, 138, 155, 156, 357, 363, 364, 369, 370, 404 Quantum efficiency cones, 155 silicon, 369 Quantum well infrared photodetector, 102, 404 Quarter wave, 300
R R384 database, 49, 50, 53, 55 Radiant exitance, 273, 275, 278, 280, 281, 289 Radiometrics, 404 Radiometry, 273–275, 277, 286, 287, 296, 404 Rain, 48, 49, 51, 67, 168 Range equation, 322, 323 Rayleigh 1/4-wave criterion, see Quarter wave criterion, 20, 21, 316 distance or range, 174 Lord, 20, 404 scattering, 66, 404 Rayleigh–Jeans approximation, 273, 280
Reach-through structure (RTS) silicon avalanche photodiodes, 106, 107 Read an English alphanumeric character, 4, 22, 23 Readability, 6, 23–25 Recognition, see DRI Rectangular plate, 210 Reflectivity, 81, 100, 252, 325, 331, 332, 337, 341, 342 of wet surfaces, 81 Relative edge response (RER), 317 Reset noise, 358, 359, 366, 398 Resistance-area product, 108, 115, 116, 404 Resolution display, 15, 131, 135 human eye, 147, 149, 153 required to read an alphanumeric, 23–25 spatial, 6, 79, 135, 136, 139, 259 temporal, 124, 150 Resolution Assessments and Reporting Standards (IRARS) committee, 29 Resonant frequency, 382 Responsivity, 105, 106, 110, 116, 117 Retina, 137–139, 141, 143, 144, 150, 152, 154, 155–158, 160, 396, 403, 405 Retinal eccentricity, see Eccentricity Retinal illumination, 150, 156, 157 Retro-reflector, see Cube corner Richardson’s equation, 369 Risk, 314, 399 Ritchey–Chretien, 245, 392, 404 RoA, see Resistance-area product Rod density, 158 Rods, in the human eye, 137–163 Root mean squared, 72, 108, 265, 266, 300, 404 Root summed squared, 69, 366, 404 Rose threshold, 133–135 Roughness, 263, 267, 268, 291, 337, 403
S Saccades, 138, 146, 147, 405 Salinity, 236, 237 Sampled imagers, 6, 17 Sampling frequency, 125, 354 Scanning electron microscope (SEM), 335, 336 Scanning sensors, 15 Scatter, 45–50, 66, 67, 190–191, 229–231, 233, 235–238, 240, 315, 334, 394, 404 Scatter angle, 191 Schade, Otto, 6, 131 Schedule, 303, 314, 317, 320, 321 Schmidt telescope, 258, 327, 405 Schottky barriers, 115, 278, 403, 405 Scintillation, 33, 57, 60, 172, 175, 188 Scotopic, 150, 155, 157–159, 399, 403, 405, 406 Scratch and dig, 302, 304, 307, 311, 312, 315, 385 SCUD missiles, 313 Seawater, 233–236 SECAM, 129, 133, 354, 371, 388, 405 Seeing, 33, 58, 316, 405 Self-deflection, 200, 209, 269, 382 Sellmeier formula, 198, 405 Shack–Hartmann sensor, 59, 63, 64, 194, 397, 399, 406
Silica, 211, 212, 306, 381–383 Sine rule, 95 Skin, human, 295, 338–340, 378 Sky irradiance, 82 Sky temperature, 75, 76 Slope of waves, 240 Slurry grinding, 298, 306 SMPTE292, 132, 292, 350 Snow, 57, 67, 168, 333, 378, 408 Society of Motion Picture and Television Engineers (SMPTE), 124, 125, 405 Solar, 33, 34, 74, 79, 324, 345, 346, 374, 379 background, 74, 75 constant, 74, 374 reflection, 345, 346 Solid angle, 169, 181, 182, 227, 228, 275, 286–289, 331, 373, 406 Spatial frequency, 15, 17, 70, 73, 83, 84, 258, 259, 265, 300 Spatial resolution, 6, 79, 135, 259 Specific heat, 194, 381, 405, 407 Specific stiffness, 197, 382, 406 Spectacles, 241, 297 Spectral Infrared Thermal Signatures (SPIRITS), 330, 406 Specular, 81, 185, 238, 240, 257, 258, 291, 315, 324, 331–333, 339, 341, 342 Speed of light, see Light speed Speedup, 216 Spherical aberration, 243, 270, 271, 301, 392, 404 Spherical polishing, 297 Spherical waves, 35–36, 45, 176, 178, 271 Spinning molten glass, 196, 213 Spin cast mirrors, see Mirrors, spin cast Split Stirling, 406 SPOT (Satellite Pour l’Observation de la Terre), 237, 406 Spurious charge, 359 Spurious response, 15, 17, 18 Square aperture, 171 Squeeze factor, 16, 17 Stagnation temperature, 347 Standard Advanced Dewar Assemblies, 103 Standardized IR Radiation Model (SIRRIM), 330 Star trackers, 6, 25, 287 Starlight, 149, 178, 379 Steady-state thermal distortion, 383, 406 Stefan–Boltzmann constant, 273, 386 Stellar populations, 42 Stellar sequence, 34 Stereo distance, 161 Stereographs, 161 Sticky notes, 308, 309 Stiles–Crawford effect, 157 Stimulus strength, 18, 19 Stirling, Robert, 86 Stirling, split, 406 Strategic Defense Initiative (SDI), 31, 329 Stratosphere, 46, 375 Strehl ratio, 36, 52, 174, 175, 251, 266, 406 Stroke of actuators, 63 Struts, 40, 248, 251 Subaperture, 59, 64, 65, 297, 406 Submarines, 230
Submerged objects, 229, 238, 239 Subpixel resolution, 25 Sun, see Solar Sunlight, 34, 46, 62, 74, 80, 123, 149, 239, 278, 324, 334, 337, 379, 395 Super low ionization ratio, k (SLIK), 105–107 Superposition of colors, 161 Superresolution, 15, 25 Surface energy, 361 figure error (SFE), 301, 405 state, 361 tilt, 312
T T62 Russian tanks, 3 Target detection, 1, 2, 4–6, 11, 127, 139 Target task performance metric (TTP), 7 Telescope wind loading, 326, 327 reflective, 2, 5, 31–36, 40, 41, 46, 47, 51, 53, 58, 59, 176, 178, 186, 196, 197, 199, 200, 203, 204, 213, 241–247, 251, 258, 261, 263, 264, 316, 326 liquid, 212, 213 mass driver, 204, 205, 327 largest element, 327 off-axis rejection, 324 Dawes limit, 316, 317 Television, 123, 124, 126, 131, 135, 230, 313, 314, 349–351, 353–355, 357, 371, 393, 394, 397, 405 Temperature, minimum resolvable, 285, 323 Temporal processing, 78, 407 Temporal-hour, 54–56 Thermal conductivity, 79, 98, 99, 192, 194, 197, 206, 381, 406, 407 diffusivity, 194, 383, 407 expansion, 192, 197, 205, 381, 393 focusing, 191 insensitivity coefficient, 383 stress, 383 Thermoelectric cooler, 97, 363, 407 Thrust scaling, 343 Ti:Sapphire, 191 Time-delay and integration (TDI), 103, 407 Tissuglas®, 98 Total cross-sectional area (TCSA), 54, 55 Total dose, 356, 368 Total integrated scatter (TIS), 268, 407 Total scattering coefficient, 231 Tracking, see DRI Transparency, 229, 230, 257, 330 Trichromatic theory of vision, 161, 162, 407 Tritanopia, 143, 144 Troland, 145, 146, 150, 157, 407 Troposphere, 375 Turbofan, 341 Turbojet, 340, 341 Twilight, 80, 379
U U.S. Army Communications–Electronics Command, 2, 6, 12, 401 Uncooled, 87, 97, 103, 277, 360, 407, 408 Underwater glow, 239, 240 Underwater photographic, 236 Unicellular algae, 232, 390, 403 Uniformity, 110, 113, 114, 122, 365, 403 Unmanned combat aerial vehicle (UCAV), 313 Upwelling, 62, 82, 237, 238, 408
V Van Cittert–Zernike theorem, 319 Vanadium oxide, 101–103 Vegetation, 27, 77, 78 Verdet, 408 Very large telescope, 200, 264 Video, 11, 15, 79, 120, 124, 125, 128, 130–133, 136, 142, 143, 216, 350, 354, 355, 371, 387, 388, 392, 401, 402, 405 delay, 143 digital, 29, 109, 124, 125, 132, 350, 353 formats, 132 NTSC, 126, 132, 287 PAL, 287 Visibility, 48, 50, 51, 53, 66, 67, 128, 238, 318, 319, 390, 408 Vision, 137–163 Visits, 317 Visual magnitude, 39, 40, 42, 44, 374, 399
W Wald and Ricco’s law, 134, 135 Water, 229–240 absorption coefficient, 230–233, 235, 237, 238 index of refraction, 236–238 reflectance, 237, 238, 240 Wave slope, 240 Wavefront, 33, 36, 38, 58, 59, 63, 64, 65, 138, 174, 192, 195, 196, 203, 206, 265, 266, 300, 301, 302, 305, 398, 399, 406, 408 Wavefront error, 36, 46, 64, 65, 174, 175, 187, 252, 265, 266, 298, 300, 389, 397, 408 Well capacity, 108, 109, 111, 121 Wet surface, 81 White noise, 4, 8, 84, 118–120, 258, 384, 408 Wien displacement law, 294, 295 Wien’s approximation, 273 Wiener spectrum, 73 Wind loading, 326 Window, 202, 208–211, 227, 241, 302, 347, 399
Y YAG laser, 172 Young’s modulus of elasticity, 197, 202, 270, 406, 408
Z Zernike polynomial, 64 Zonal representation, 64
About the Authors
Ed Friedman earned a B.S. in physics at the University of Maryland in 1966 and a Ph.D. in cryogenic physics from Wayne State University in 1972. He started his career in the field of ocean optics and subsequently developed system concepts for remote sensing of the atmosphere and oceans. After completing studies related to the design of spacecraft and instruments for the measurement of the radiation balance of the Earth, he was appointed a visiting scientist in the climate program at the National Center for Atmospheric Research (NCAR). Subsequent employers included The Mitre Corporation, Martin Marietta (where he met the co-author), Ball Aerospace and Technologies Corporation, and the Boeing Company, where he currently serves as a Technical Fellow in the Lasers and Electro-Optics Division. In the last ten years, he has concentrated on the development of mission concepts and technologies for astrophysics and space science. While at Ball, he was Chief Technologist of the Civil Space business unit. Recent areas of interest include the use of space-based interferometers to create detailed maps of stellar positions and the use of coronagraphic methods for detection of planets in distant solar systems. In 2001, he was awarded a patent for a novel method of alignment and phasing of large, deployed Earth-viewing optics. He has been a patent reviewer for the journal Applied Optics and an editor for the journal Optical Engineering.
Dr. Friedman has published more than 10 peer-reviewed papers on remote sensing, diffractive beam propagation, and ocean optics. Early in his career, he published a book and approximately ten articles on electronics. While a visiting scientist at NCAR, he published five articles on the role of remote sensing in detecting human influences on climate. He is the coauthor of the two previous editions of this book.
Ed recently retired after ten seasons as a member of the National Ski Patrol. He and his wife Judith Friedman live in the mountains west of Boulder, Colorado.
John Lester Miller earned a B.S. in Physics at the University of Southern California in 1981, participated in physics, math, and engineering graduate studies at Cal State Long Beach and the University of Hawaii, then earned an M.B.A. from Regis University in 1989. He chairs the SPIE session on advanced infrared technology, co-chairs a session on homeland security, and referees papers for several electro-optical journals. He has held positions as Chief Scientist, Director of Advanced Technologies, Program Director, Functional Manager, Lead Engineer, and Electro-Optical Engineer with FLIR Systems (Portland, Oregon), the Research Triangle Institute (Lake Oswego, OR), Martin Marietta/
Lockheed Martin (Denver, Colorado; Utica, New York; and Orlando, Florida), the University of Hawaii’s NASA IRTF (Hilo, Hawaii), Rockwell International (Seal Beach, California), Mt. Wilson and Palomar Observatories (Pasadena, California), and Griffith Observatory (Los Angeles, California). While at Martin Marietta in Denver, he met Ed Friedman. He has published more than 40 papers on optical sciences and is author of Principles of Infrared Technology and the co-author of the two previous editions of this book. John has several patents pending in electro-optical technologies. His experience includes leading integrated research, design, and marketing efforts on advanced security systems, active imagers, infrared sensors, space sensors, helmet-mounted systems, scientific instrumentation, homeland security surveillance systems, radiometric test facilities, aviation enhanced vision systems, and environmental and weather monitoring sensors. John is the Vice President of Advanced Technology for FLIR Systems Inc., in Portland, Oregon. He and his wife, Corinne Foster, split their time between Lake Oswego and Bend, Oregon.