Direct-Detection
LADAR Systems
Tutorial Texts Series

Optical Design: Applying the Fundamentals, Max J. Riedl, Vol. TT84
Infrared Optics and Zoom Lenses, Second Edition, Allen Mann, Vol. TT83
Optical Engineering Fundamentals, Second Edition, Bruce H. Walker, Vol. TT82
Fundamentals of Polarimetric Remote Sensing, John Schott, Vol. TT81
The Design of Plastic Optical Systems, Michael P. Schaub, Vol. TT80
Radiation Thermometry: Fundamentals and Applications in the Petrochemical Industry, Peter Saunders, Vol. TT78
Matrix Methods for Optical Layout, Gerhard Kloos, Vol. TT77
Fundamentals of Infrared Detector Materials, Michael A. Kinch, Vol. TT76
Practical Applications of Infrared Thermal Sensing and Imaging Equipment, Third Edition, Herbert Kaplan, Vol. TT75
Bioluminescence for Food and Environmental Microbiological Safety, Lubov Y. Brovko, Vol. TT74
Introduction to Image Stabilization, Scott W. Teare, Sergio R. Restaino, Vol. TT73
Logic-based Nonlinear Image Processing, Stephen Marshall, Vol. TT72
The Physics and Engineering of Solid State Lasers, Yehoshua Kalisky, Vol. TT71
Thermal Infrared Characterization of Ground Targets and Backgrounds, Second Edition, Pieter A. Jacobs, Vol. TT70
Introduction to Confocal Fluorescence Microscopy, Michiel Müller, Vol. TT69
Artificial Neural Networks: An Introduction, Kevin L. Priddy and Paul E. Keller, Vol. TT68
Basics of Code Division Multiple Access (CDMA), Raghuveer Rao and Sohail Dianat, Vol. TT67
Optical Imaging in Projection Microlithography, Alfred Kwok-Kit Wong, Vol. TT66
Metrics for High-Quality Specular Surfaces, Lionel R. Baker, Vol. TT65
Field Mathematics for Electromagnetics, Photonics, and Materials Science, Bernard Maxum, Vol. TT64
High-Fidelity Medical Imaging Displays, Aldo Badano, Michael J. Flynn, and Jerzy Kanicki, Vol. TT63
Diffractive Optics: Design, Fabrication, and Test, Donald C. O’Shea, Thomas J. Suleski, Alan D. Kathman, and Dennis W. Prather, Vol. TT62
Fourier-Transform Spectroscopy Instrumentation Engineering, Vidi Saptari, Vol. TT61
The Power- and Energy-Handling Capability of Optical Materials, Components, and Systems, Roger M. Wood, Vol. TT60
Hands-on Morphological Image Processing, Edward R. Dougherty, Roberto A. Lotufo, Vol. TT59
Integrated Optomechanical Analysis, Keith B. Doyle, Victor L. Genberg, Gregory J. Michels, Vol. TT58
Thin-Film Design: Modulated Thickness and Other Stopband Design Methods, Bruce Perilloux, Vol. TT57
Optische Grundlagen für Infrarotsysteme, Max J. Riedl, Vol. TT56
An Engineering Introduction to Biotechnology, J. Patrick Fitch, Vol. TT55
Image Performance in CRT Displays, Kenneth Compton, Vol. TT54
Introduction to Laser Diode-Pumped Solid State Lasers, Richard Scheps, Vol. TT53
Modulation Transfer Function in Optical and Electro-Optical Systems, Glenn D. Boreman, Vol. TT52
Uncooled Thermal Imaging: Arrays, Systems, and Applications, Paul W. Kruse, Vol. TT51
Fundamentals of Antennas, Christos G. Christodoulou and Parveen Wahid, Vol. TT50
Basics of Spectroscopy, David W. Ball, Vol. TT49
Optical Design Fundamentals for Infrared Systems, Second Edition, Max J. Riedl, Vol. TT48
Resolution Enhancement Techniques in Optical Lithography, Alfred Kwok-Kit Wong, Vol. TT47
Copper Interconnect Technology, Christoph Steinbrüchel and Barry L. Chin, Vol. TT46
Optical Design for Visual Systems, Bruce H. Walker, Vol. TT45
Fundamentals of Contamination Control, Alan C. Tribble, Vol. TT44
Evolutionary Computation: Principles and Practice for Signal Processing, David Fogel, Vol. TT43
Infrared Optics and Zoom Lenses, Allen Mann, Vol. TT42
Introduction to Adaptive Optics, Robert K. Tyson, Vol. TT41
Direct-Detection
LADAR Systems
Tutorial Texts in Optical Engineering Volume TT85
Bellingham, Washington USA
Richmond, Richard D. Direct-detection LADAR systems / Richard D. Richmond & Stephen C. Cain. p. cm. -- (Tutorial texts in optical engineering ; v. TT85) Includes bibliographical references and index. ISBN 978-0-8194-8072-9 (alk. paper) 1. Optical radar. I. Cain, Stephen C., 1969- II. Title. TK6592.O6R53 2009 621.3848--dc22 2009051442
Published by SPIE P.O. Box 10 Bellingham, Washington 98227-0010 USA Phone: +1 360 676 3290 Fax: +1 360 647 1445 Email:
[email protected] Web: http://spie.org Copyright © 2010 Society of Photo-Optical Instrumentation Engineers All rights reserved. No part of this publication may be reproduced or distributed in any form or by any means without written permission of the publisher. The content of this book reflects the work and thought of the author(s). Every effort has been made to publish reliable and accurate information herein, but the publisher is not responsible for the validity of the information or for any outcomes resulting from reliance thereon. Printed in the United States of America.
Introduction to the Series Since its inception in 1989, the Tutorial Texts (TT) series has grown to more than 80 titles covering many diverse fields of science and engineering. The initial idea for the series was to make material presented in SPIE short courses available to those who could not attend and to provide a reference text for those who could. Thus, many of the texts in this series are generated by augmenting course notes with descriptive text that further illuminates the subject. In this way, the TT becomes an excellent stand-alone reference that finds a much wider audience than only short course attendees. Tutorial Texts have grown in popularity and in the scope of material covered since 1989. They no longer necessarily stem from short courses; rather, they are often generated by experts in the field. They are popular because they provide a ready reference to those wishing to learn about emerging technologies or the latest information within their field. The topics within the series have grown from the initial areas of geometrical optics, optical detectors, and image processing to include the emerging fields of nanotechnology, biomedical optics, fiber optics, and laser technologies. Authors contributing to the TT series are instructed to provide introductory material so that those new to the field may use the book as a starting point to get a basic grasp of the material. It is hoped that some readers may develop sufficient interest to take a short course by the author or pursue further research in more advanced books to delve deeper into the subject. The books in this series are distinguished from other technical monographs and textbooks in the way in which the material is presented. In keeping with the tutorial nature of the series, there is an emphasis on the use of graphical and illustrative material to better elucidate basic and advanced concepts. 
There is also heavy use of tabular reference data and numerous examples to further explain the concepts presented. The publishing time for the books is kept to a minimum so that the books will be as timely and up-to-date as possible. Furthermore, these introductory books are competitively priced compared to more traditional books on the same subject. Each proposal for a text is evaluated to determine the relevance of the proposed topic. This initial reviewing process has been very helpful to authors in identifying, early in the writing process, the need for additional material or other changes in approach that would serve to strengthen the text. Once a manuscript is completed, it is peer reviewed to ensure that chapters communicate accurately the essential ingredients of the science and technologies under discussion. It is my goal to maintain the style and quality of books in the series and to further expand the topic areas to include new emerging fields as they become of interest to our reading audience.

James A. Harrington
Rutgers University
To my wife, Linda, who has always been my strongest supporter. R.R. To my wife, Karen, and kids, Asher, Josiah, and Tobias, who make all my days worth living. S.C.
Contents

Preface  xi
Mathematical Notation  xiii

Chapter 1  Introduction to LADAR Systems  1
1.1  Background  1
1.2  LADAR and RADAR Fundamentals  1
     1.2.1  Heterodyne versus direct detection  7
1.3  LADAR Range Equation  8
     1.3.1  Laser transmitter models  8
     1.3.2  Atmospheric transmission  10
     1.3.3  Target reflectivity and angular dispersion  11
     1.3.4  Dispersion upon reflection  12
     1.3.5  LADAR receiver throughput and efficiency  14
1.4  Types of LADAR Systems and Applications  14
     1.4.1  Three-dimensional-imaging LADAR systems  15
1.5  Sources of Noise in LADAR Systems  15
     1.5.1  Photon counting noise  16
     1.5.2  Laser speckle noise  16
     1.5.3  Thermal noise  18
     1.5.4  Background noise  18
1.6  LADAR Systems and Models  19
     1.6.1  Computational model for the range equation and signal-to-noise ratio (SNR)  19
     1.6.2  Avalanche photodiode  24
1.7  Problems  25

Chapter 2  LADAR Waveform Models  27
2.1  Fourier Transform  27
     2.1.1  Properties of the DFT  28
          2.1.1.1  Periodicity of the DFT  29
          2.1.1.2  Time-shift property of the DFT  29
          2.1.1.3  Convolution property of the DFT  29
     2.1.2  Transforms of some useful functions  30
          2.1.2.1  Transform of a Gaussian function  30
          2.1.2.2  DFT of a rectangular shape  30
2.2  Laser Pulse Waveform Models  31
     2.2.1  Gaussian pulse model  31
     2.2.2  Negative parabolic pulse model  32
     2.2.3  Hybrid pulse models  33
     2.2.4  Digital waveform models  34
2.3  Pulse/Target Surface Interaction Models  36
2.4  LADAR System Clock Frequency and Ranging Error  45
2.5  Waveform Noise Models  45
     2.5.1  Waveform noise sources introduced at the single-sample level  46
     2.5.2  Sampling criteria and the effect of aliasing on waveforms  48
2.6  Problems  52

Chapter 3  Wave Propagation Models  55
3.1  Rayleigh-Sommerfeld Propagation  57
3.2  Free-Space Propagation  58
3.3  Atmospheric Turbulence Phase Screen Simulation  69
3.4  LADAR System Point Spread Function  75
3.5  Problems  83

Chapter 4  Detection and Estimation Theory Applied to LADAR Signal Detection  85
4.1  Simple Binary Hypothesis Testing  85
4.2  Decision Criteria  92
4.3  Detection Methods Using Waveform Data  96
4.4  Receiver Operating Characteristics  101
4.5  Range Estimation  103
     4.5.1  Peak estimator  104
     4.5.2  Cross-correlation range estimator  107
     4.5.3  Leading-edge detectors  112
4.6  Range Resolution and Range Accuracy  114
4.7  Problems  115

Chapter 5  LADAR Imaging Systems  117
5.1  Single-Pixel Scanning Imagers  117
5.2  Gated Viewing Imagers  118
     5.2.1  Design and modeling considerations  122
5.3  Staring or FLASH Imagers  123
5.4  Modeling 2D and 3D FLASH LADAR Systems  126
5.5  Speckle Mitigation for Imaging LADAR Systems  128

References  135
Index  137
Preface

The field of 3D LADAR (LAser Detection And Ranging) is growing steadily, with new advances in focal plane readout technology driving ever-faster image capture and readout capabilities. This text is designed to introduce engineers to the basic concepts and operation of 3D imaging LADAR systems. The book supports a single-term course in LADAR systems for junior- and senior-year engineering students, as well as graduate students with a background in statistics and linear systems. The book begins with the laser range equation and follows with discussions of sources of noise in LADAR signals, LADAR waveforms, the effects of wavefront propagation on LADAR beams through optical systems and atmospheric turbulence, algorithms for detecting, ranging, and tracking targets, and finally, comprehensive system simulation. This book also provides computer code for accomplishing the many examples appearing throughout the text. Exercises at the end of each chapter allow students to apply concepts studied throughout the text to fundamental problems encountered by LADAR engineers. The exercises closely follow the examples so that guidance is available for successfully solving these problems. Students in both academia and industry can use the book as part of a formal course or as a self-study guide to acquire a basic understanding of LADAR systems. The book relates how to simulate realistic LADAR data as well as how to process it to extract target-related information.

Many thanks are due to Karen Cain, who provided many of the illustrations found in Chapter 1. Thanks are also due to both Karen and Asher Cain, who aided in the initial editing of the text.
Mathematical Notation

αx  tilt in the horizontal direction caused by atmospheric turbulence
A  aperture transmittance function
Ag  amplitude of a Gaussian beam
αy  tilt in the vertical direction caused by atmospheric turbulence
B  average number of photoelectrons contributed by the background
c  speed of light in a vacuum
C  capacitance of the detector circuit in Farads (F)
d  size of the LADAR receiver detector in meters (m)
Δx  sample size in the source plane of an optical field in meters (m)
Δy  sample size in the distant plane of an optical field in meters (m)
Δdet  sample size in the detector plane dictated by propagation rules
Δλ  passband width of the background rejection filter in microns (μm)
Δt  detector integration time
dA  effective target surface area
D  total number of photoelectrons collected by a LADAR system
Dαx  horizontal tilt structure function
DB  number of photoelectrons contributed by the background
Dφ  phase structure function
Ds  number of photoelectrons contributed by the laser pulse
Dt  diameter of the aperture of the LADAR transmitter optics
DR  diameter of the aperture of the LADAR receiver optics
f  frequency in Hertz (Hz)
fc  maximum spatial frequency of an optical field in inverse meters
fl  focal length of the LADAR receiver optics
fo  fundamental frequency of the discrete Fourier transform in Hertz (Hz)
FOV  angular field of view of the LADAR receiver
Gapd  avalanche photodiode gain
η  quantum efficiency of the LADAR detector
h  Planck’s constant
htot  point spread function of the LADAR system
Hatm  atmospheric transfer function
Hdet  detector transfer function
Hopt  optical transfer function
Htot  total system transfer function, the Fourier transform of htot
ν  frequency of the laser light in Hertz (Hz)
vx  horizontal wind speed across the LADAR receiver aperture
vy  vertical wind speed across the LADAR receiver aperture
Itarget  intensity of the LADAR beam at the target in watts per square meter (W/m2)
Ireceiver  intensity of the returned pulse at the receiver aperture in watts per square meter (W/m2)
kb  Boltzmann’s constant
K  number of photons incident on the LADAR detector
λ  wavelength of the laser light processed by the LADAR system
Λ  likelihood ratio test
Lr  size of an optical field in the distant plane in meters (m)
Ls  size of an optical field in the source plane in meters (m)
M  coherence parameter; large for incoherent light, 1 for fully coherent
n  index of refraction of the atmosphere
n̄  average index of refraction of the atmosphere along a path
nsr  perturbation in the index of refraction about the average value
Nsignal  number of electrons measured by the LADAR system
Nb  number of electrons measured by the LADAR system due to background
Ndark  number of electrons contributed by the detector dark current
Nspeckle  number of electrons measured by the LADAR system with speckle noise
Nthermal  number of noise electrons contributed by thermal noise effects
Pd  probability of detection
Pfa  probability of false alarm
Pdet  total laser power incident on the LADAR detector in watts (W)
Pdet_diff  laser power incident on the LADAR detector with diffraction effects
Pdet_tot  geometrically predicted laser power incident on the LADAR detector
Pref  total reflected laser power from the target in watts (W)
Pt  transmitted laser power in watts (W)
pw  pulse width parameter for the negative parabola model in seconds (s)
qe  elementary charge in coulombs (C)
Qn  readout noise standard deviation in electrons (e.u.)
ρt  target surface reflectivity
ro  Fried’s seeing parameter in centimeters (cm)
ΔR  distance between two surfaces viewed by a LADAR system
R  range between the laser RADAR system and the target
R1  range between the LADAR system and the first target in a two-target case
R2  range between the LADAR system and a second target in a two-target case
Rff  range from the LADAR system at which the far-field approximation is valid
Rφ  phase correlation function
Rαx  horizontal tilt correlation function
Rαy  vertical tilt correlation function
σαx  standard deviation of the tilt in the horizontal direction
σαy  standard deviation of the tilt in the vertical direction
σpc  photo-current standard deviation in amperes (A)
σspeckle  photon standard deviation due to photon counting noise and laser speckle
σw  pulse width parameter for the Gaussian pulse model in seconds (s)
S  average number of photoelectrons contributed by the laser pulse
SIR  background intensity in watts per square meter (W/m2)
τ  difference in the time of flight between two surfaces at ranges R1 and R2
τa  atmospheric transmission
τo  LADAR receiver optics transmission
t  time of flight in seconds (s) through a vacuum
δt  change in the time of flight due to changes in index of refraction
tlens  lens transmittance function including optical delays
to  time of flight in seconds (s) through an atmospheric path with no turbulence
tsr  time of flight in seconds (s) through the atmosphere
T  circuit temperature in Kelvin (K)
Tp  target profile: the surface area of the target as a function of range
tns  time of flight in nanoseconds (ns)
θatm  phase error introduced by atmospheric turbulence
ΩR  target surface angular dispersion in steradians (sr)
θt  beamwidth of the LADAR transmitter in radians (rad)
ω  beam waist parameter for the Gaussian beam
Z  distance between the target plane and the plane of the LADAR aperture
Chapter 1
Introduction to LADAR Systems

1.1 Background

RADAR (RAdio Detection And Ranging) is the process of transmitting, receiving, detecting, and processing an electromagnetic wave that reflects from a target. RADAR was first developed by the German Army in 1935.1 As theoretical and technical developments continued, RADAR techniques and applications expanded into almost every aspect of the modern world. One area of that technical development was in the wavelength of the transmitted signal, first in the 50-cm range and later down into the millimeter and microwave regions. Pulsed light sources and optical detectors were first used in 1938 to measure the base heights of clouds.2 The acronym LiDAR (LIght Detection And Ranging) was first used in 1953,3 and the 1962 development of high-energy or Q-switched pulsed lasers made such sources available for LiDAR applications. In 1963, Fiocco and Sullivan published work on atmospheric observations using a ruby laser.4 Since that time, laser-based sensors have demonstrated most, if not all, of the same functions as radio frequency (RF) or microwave RADAR. The National Institute of Standards and Technology (NIST) adopted the term LADAR (LAser Detection And Ranging) for these laser-based RADAR-type systems.5 That term will be used in this text.
1.2 LADAR and RADAR Fundamentals

All ranging systems, whether RADAR, LiDAR, or LADAR, function by transmitting and receiving electromagnetic energy. The only difference among them is that they work in different frequency bands.6 Therefore, many of the same considerations, such as antenna theory and propagation time, apply to all of these systems. This section will define some terms that are common to both LADAR and RADAR systems and also contrast the differences between them. Both LADAR and RADAR systems have similar configurations and subsystems, as shown in Fig. 1.1.
Figure 1.1 Typical LADAR/RADAR systems.
The signal generator/transmitter subsystem, which is an RF/microwave oscillator/amplifier in RADAR and a laser (also possibly an oscillator/amplifier combination) in LADAR, determines the wavelength and waveform as well as the power of the transmitted signal. RADAR wavelengths are normally grouped in bands, and RADAR developers usually choose them in consideration of other factors such as atmospheric propagation windows. However, in principle the wavelength can be anywhere in the general band with a suitably designed oscillator and amplifier. LADAR developers do not have that same flexibility, because they are limited to wavelengths where both suitable lasers and detectors are available. Gas lasers, such as CO2 lasers, operate on discrete, narrow lines. And while the operating range of such a laser can be adjusted across a range of wavelengths (9 to 11 μm for CO2) by varying the isotopes used in the gas mixture and by adjusting feedback into the cavity, the laser still operates or lases at one of the discrete laser lines. Solid state laser materials may only operate at a few (sometimes only 1) discrete lines. For example, a Nd:YAG laser will normally lase at 1.06 μm, but it can be forced, with a loss of power, to operate at 1.4 μm. While various laser materials can operate over a range of deep ultraviolet (DUV, < 250 nm) to long-wave IR (11 μm), material properties, atmospheric propagation windows and other factors limit the wavelengths of practical
LADAR transmitters to a few commonly used choices (e.g., 1.06 μm with Nd:YAG, 1.5 μm for erbium-doped material, and the 9–11 μm band of CO2). While the various waveforms used in RADAR—continuous wave (CW), amplitude or frequency modulated (AM, FM), and pulsed—are also used in LADAR, the mechanism for producing these waveforms is significantly different in RADAR and LADAR. Producing the desired waveform in a RADAR transmitter could be as simple as turning the oscillator (or amplifier) on and off, or using a variable source such as a voltage-controlled oscillator into an amplifier. By comparison, the various LADAR waveforms are usually created by operating on the optical path of the laser. Q-switches that can rapidly change the output coupling of the cavity are used to dump the built-up energy stored in the cavity, producing a sharp, short pulse. Components like acousto-optical modulators are used to impress modulation on the laser output. Because the optical alignment of these components is critical, care must be taken to provide very stable bases and mounts for the optical elements. The recent development of fiber-optic-based components has made LADAR elements directly analogous to their counterparts in RADAR systems. Once a signal is generated, it must be launched toward the target. In RADAR, this is done through an antenna. While RADAR could operate with a simple dipole-type antenna, the resulting omnidirectional beam pattern would be of minimal use, so some type of directivity is needed, provided by antennas such as a multielement Yagi or a dish. The optical equivalent of the antenna in a LADAR system is the telescope (or an arrangement of optical lenses). The simplest system to implement is the bi-static configuration shown in Fig. 1.1(a). Here, separate paths and antennas are used for the transmission and receiving functions.
Although this configuration is mechanically simple, it does result in a larger system package, especially for some of the longer-wavelength RADAR systems. The main advantage of this configuration is that noise produced by the antenna’s backscattering of the transmitted beam is not coupled into the receiver channel. This configuration is rarely used in modern RADAR systems, but it is commonly used in LADAR systems. For most current RADAR applications, the same antenna is used for both the transmission and receiving functions (mono-static configuration, Fig. 1.1(b)). While using this configuration can reduce the size and mechanical complexity of the system, it does increase the internal circuitry and number of subsystems needed. The use of only one antenna requires the incorporation of a transmit/receive (T/R) switch. In simplest terms, this T/R switch is a three-port device with one port connected to the transmitter, one to the receiver, and one to the antenna. The switch is a directional device that routes the energy coming from one port to the next port in the rotation direction (e.g., energy coming from the transmitter is routed to the antenna, and from the antenna to the receiver) with minimal energy (noise) routed in the reverse direction. In microwave systems, this switch is called a circulator, and a magnetic field in the waveguide effects the signal rotation around the signal path. In a LADAR system, a common T/R switch uses waveplates to rotate the polarization of the laser beam and a polarization-
sensitive beamsplitter to route the energy into the proper channel. Again, the fiber optics industry recently developed fiber-coupled circulators that are directly analogous to the microwave waveguide and circulator. The final subsystem shown in Fig. 1.1 is the receiver. For both RADAR and LADAR systems, the receiver function transforms the propagating energy captured by the antenna into an electrical signal that can be processed to extract the desired information. In a RADAR system, the fluctuating electromagnetic fields of the returning signal induce currents in the receiver that can be picked up by the detector and amplified, thus creating the signal processed by subsequent subsystems. In a LADAR system, the returning photons cannot directly induce this type of current. Instead, a photodiode is used to convert the photons to current. Charge carriers are generated in response to light incident upon the photodiode, and the photon energy of received light is converted into an electrical signal by releasing and accelerating current-conducting carriers within the semiconductor. This light-induced current of the photodiode is proportional to the intensity of the incident radiation and is the signal that is transferred to other subsystems within the receiver. Since all electromagnetic energy travels at the speed of light c, in free space, the relationship between the range R and the round-trip travel time t is given by
t = 2R/c.    (1.1)
Since few terrestrial LADAR applications have round-trip times that even approach seconds, time is usually accounted for in units of nanoseconds (tns). Solving Eq. (1.1) for the range yields
R = ct/2 = tns × 0.150 m.    (1.2)
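The two conversions above are easy to check numerically. The following is a minimal sketch (the helper functions are hypothetical, not code from the text):

```python
C = 299792458.0  # speed of light in a vacuum, m/s

def time_of_flight(range_m):
    """Round-trip travel time t = 2R/c from Eq. (1.1), in seconds."""
    return 2.0 * range_m / C

def range_from_time(t_ns):
    """Range R = c*t/2 from Eq. (1.2): about 0.150 m per nanosecond."""
    return C * (t_ns * 1e-9) / 2.0

# A target at 1.5 km gives a round-trip time of about 10 microseconds,
# and each nanosecond of delay corresponds to about 0.15 m of range.
print(time_of_flight(1500.0))  # ~1.0e-5 s
print(range_from_time(1.0))    # ~0.15 m
```

This is why LADAR range is often quoted simply as 0.150 m per nanosecond of round-trip delay.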
A term that is useful for characterizing LADAR systems but is often used incorrectly is resolution. The NIST report defines resolution as “the smallest distance separation between two distinct objects illuminated by a LADAR source that can be detected in the signal return.”5 The most common misuse of the term is in reference to imaging systems, where it is used to describe the size, in range, that such a system can properly image. In Fig. 1.2, the target is a step-like structure where the range difference between the two surfaces (a, b) is ΔR. In the top figure, the beamwidth θt is small enough that a single surface is illuminated for each pulse (P1, P2). Then ΔR becomes
ΔR = (R1 − R2).    (1.3)
In Eq. (1.3), R1 is the distance to surface b, and R2 is the distance to surface a. In the bottom portion of Fig. 1.2, the beamwidth is large enough that both surfaces
Figure 1.2 Range accuracy versus resolution.
of the target are simultaneously illuminated by a single pulse. This is the situation to which the NIST definition refers. In the former situation, we can write
t + τ = 2(R + ΔR)/c,    (1.4)
where τ is the time of flight for distance ΔR. If we subtract Eq. (1.1), we get
τ = 2ΔR/c.    (1.5)
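Equation (1.5) ties timing resolution directly to range resolution. As a short illustration (the 500-MHz receiver sampling clock is an assumed value, not one from the text):

```python
C = 299792458.0  # speed of light in a vacuum, m/s

def separation_from_delay(tau_s):
    """Surface separation dR = c*tau/2, inverting Eq. (1.5)."""
    return C * tau_s / 2.0

def delay_from_separation(dr_m):
    """Time-of-flight difference tau = 2*dR/c from Eq. (1.5)."""
    return 2.0 * dr_m / C

# With an assumed 500-MHz receiver sampling clock, one clock period
# (2 ns) corresponds to roughly 0.3 m of separation between surfaces.
clock_hz = 500e6
print(separation_from_delay(1.0 / clock_hz))  # ~0.3 m
```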
A simple system such as a range finder usually makes only a single range measurement per transmitted pulse and does not consider range resolution. However, some systems have processors that can determine multiple ranges from a single return (i.e., “first-pulse, second-pulse” or “first-pulse, last-pulse” logic) that can be useful for applications such as removing foliage from an image to produce “bare-earth” terrain maps. Another common aspect of LADAR and RADAR systems is antenna beamwidth, which is usually referenced as the half-power points (t). In general, the beamwidth is given by
θt = 1.22 λ/Dt (in radians),    (1.6)
where λ is the wavelength of the light being transmitted, and Dt is the antenna diameter.7 This assumes a uniformly illuminated aperture. Unlike range accuracy and resolution, the beamwidth, and ultimately the spatial resolution, of a system is a direct function of the signal wavelength. For example, for a 3-cm RADAR system with a 1-m antenna, θt = 0.037 rad (2.1 deg); and for a 1.06-μm LADAR with a 50-mm (2 in) aperture, θt = 0.000026 rad (only 0.0015 deg).

For most modern RADAR systems, the targets of interest are usually smaller than the transmitted beamwidth, and the targets act as isotropic scatterers. On the other hand, LADAR systems often have beamwidths smaller than the targets, and the targets can resemble anything from a Lambertian to a specular reflector and often are combinations of both. Another target characteristic to which RADAR and LADAR respond differently is surface roughness (more correctly, the scale of the surface roughness). Measuring this roughness can be of particular interest for applications such as monitoring sea states. Rough targets scatter the incident electromagnetic energy diffusely; smooth targets scatter specularly. Surface roughness is a relative measure and depends on the wavelength of the illuminating signal. A surface that appears rough at one wavelength might appear smooth when illuminated with longer-wavelength radiation. With RADAR, as the wavelength decreases, the size of features on the target that can be observed also decreases. At the longest wavelengths, the largest features, such as the size and location of aircraft wings, are not observable but can vary the intensity of the reflected signal as the target viewing angle changes. As the RADAR wavelength gets shorter, features such as the joints between wings and fuselage can start acting like corner cube (commonly known as retro) reflectors and can significantly contribute to the reflected signal. At the shortest wavelengths, seams between body panels—even the heads of rivets and screws—can act as retroreflectors that enhance the reflected signal. The unique shapes and construction techniques used on so-called “stealth” aircraft are attempts to counter these surface and retro effects and reduce the apparent target size, or RADAR cross section (RCS). As the wavelength decreases farther into the LADAR bands, the target surfaces themselves can produce specular and retroreflections.
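The beamwidth contrast computed from Eq. (1.6) for the two example systems above can be sketched as follows (a hypothetical calculator, not code from the text):

```python
import math

def beamwidth_rad(wavelength_m, aperture_m):
    """Half-power beamwidth theta_t = 1.22*lambda/Dt for a uniformly
    illuminated aperture, Eq. (1.6)."""
    return 1.22 * wavelength_m / aperture_m

radar = beamwidth_rad(0.03, 1.0)      # 3-cm RADAR with a 1-m antenna
ladar = beamwidth_rad(1.06e-6, 0.05)  # 1.06-um LADAR with a 50-mm aperture

print(radar, math.degrees(radar))  # ~0.037 rad, ~2.1 deg
print(ladar, math.degrees(ladar))  # ~2.6e-5 rad, ~0.0015 deg
```

The roughly three-orders-of-magnitude difference in divergence is what gives LADAR its superior spatial resolution relative to RADAR.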
Except for a few bands around 22.2, 60, and 94 GHz, most RADAR systems are not affected by the same atmospheric absorption attenuation that affects LADAR. Atmospheric attenuation is explored in more detail in Sec. 1.3.2. Since RADAR wavelengths are much longer than the diameters of most atmospheric aerosols, scattering is not an issue. One exception is at the shorter wavelengths (Ku, K bands) where weather RADAR takes advantage of scattering by rain droplets and snowflakes. In general, the following comparisons can be stated for LADAR and RADAR systems:
Optically thick clouds and precipitation can attenuate a LADAR beam, while RADAR scatterers may consist of clouds and hydrometeors (e.g., rain or frozen precipitation). Thus, RADAR systems are generally less susceptible to atmospheric absorption effects than LADAR systems. LADAR beam divergence can be two to three orders of magnitude smaller than that of conventional 5- and 10-cm-wavelength RADAR. This gives LADAR systems superior spatial resolution but a less efficient wide-area search capability than a RADAR system. The combination of the short pulse (of the order of 10^-8 s) and the small beam divergence (about 10^-3 to 10^-4 rad) creates small illuminated
volumes for LADAR (about a few m³ at ranges of tens of km). This makes LADAR better at conducting measurements in confined places such as urban areas.

The choice of whether to use a RADAR or LADAR system for a given application is ultimately driven by mission requirements. Neither LADAR nor RADAR is necessarily a better technology, but for a specific application one may be more appropriate than the other.

1.2.1 Heterodyne versus direct detection

The differences between heterodyne (or coherent) receivers and direct-detection systems, which are incoherent in nature, are illustrated in Fig. 1.3. The primary difference between the two types of systems is that in the heterodyne receiver a portion of the outgoing laser energy is split off and redirected to the receiver detector. This energy is then aligned with that from the receiver aperture, and the detector then operates as a classical mixer. As with conventional RF receivers, the output current i(t) of the detector/mixer can be written as8
i(t) = (ηq/hν){P_LO + P_s(t) + 2[P_LO P_s(t)]^(1/2) cos[2π(ν_LO − ν_S)t + φ_S(t)]},    (1.7)
where ν_LO and ν_S are the optical frequencies of the local oscillator (LO) and signal, respectively, and φ_S is the relative phase between the LO and signal. Due to the Doppler effect, the frequency of the laser energy reflected by a target moving at a velocity V relative to the LADAR is shifted by

Δν = (2V/c)ν_LO,    (1.8)

and

ν_S = ν_LO + Δν.    (1.9)
Figure 1.3 Incoherent (direct-detection) receiver, in which the signal Psig and background Pbk pass through an optical filter to the optical detector, receiver amplifier, and signal processor; and coherent detection receiver, in which a local-oscillator portion P_LO of the transmitter laser is mixed with the received signal at the optical detector before amplification and signal processing.
The last term in Eq. (1.7) is the signal of interest in a heterodyne system, with the envelope of the signal following the shape of Ps(t) and the signal within the envelope varying at ν_LO − ν_S (now referred to as the intermediate frequency (IF)). While the frequency of the optical fields is too high for any electronic circuits to respond to, modern electronics can easily accommodate the IF in a heterodyne system (for example, a 1.06-μm system would have an IF of approximately 1 MHz). While a heterodyne system has a theoretical advantage over direct detection of about 30 dB,9 this is rarely achieved in a practical system. Effects such as phase front distortion due to propagation to and from the target and depolarization of the signal can significantly reduce the heterodyne efficiency of the receiver.10 A coherent detection system is often more complex than a direct-detection LADAR and requires the use of diffraction-limited optics to achieve maximum efficiency.
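The Doppler relationships in Eqs. (1.8) and (1.9) are straightforward to sandbox. This Python sketch (function names are illustrative, not from the text) computes the frequency shift for a target closing at radial velocity V:

```python
C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(v_mps: float, wavelength_m: float) -> float:
    # Eq. (1.8): delta_nu = (2V/c) * nu_LO, which reduces to 2V / lambda.
    nu_lo = C / wavelength_m
    return (2.0 * v_mps / C) * nu_lo

def signal_frequency_hz(v_mps: float, wavelength_m: float) -> float:
    # Eq. (1.9): nu_S = nu_LO + delta_nu (target closing on the sensor).
    return C / wavelength_m + doppler_shift_hz(v_mps, wavelength_m)
```

For a 1.06-μm system, even modest target velocities of a few m/s produce shifts of several MHz, which is why the IF falls comfortably within the reach of conventional electronics.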
1.3 LADAR Range Equation

Various types of LADAR systems take advantage of different portions of the signal propagation process. The description of this process, the LADAR equation, is directly analogous to the original RADAR equation and can be broken down into several terms that quantify the contribution or effects of various elements of the process illustrated in Fig. 1.1. The range equation is widely used as an analytical tool for computing the power received (Pr) from a target illuminated by a laser pulse containing a given power (Pt). The range equation has many forms that depend on the physical process being interrogated by the LADAR. In general, the LADAR equation covers the following aspects of laser propagation, reflection, and reception:
Laser transmitted power Pt
Laser transmitter beam diameter and angular divergence θt
Atmospheric transmission τa
Target surface reflectivity ρt
Target surface angular dispersion ΩR
LADAR receiver quantum efficiency η
1.3.1 Laser transmitter models

For purposes of this chapter, we will assume the laser transmitter used in a typical direct-detection LADAR system fires a pulse of laser energy that exists for a short period of time. In later chapters, the shape of the pulse in time or the waveform will be considered, but for this elementary analysis, we shall consider only a pulse that is rectangular in shape and exists for a period of time equal to the pulse width (typically in nanoseconds, ns). The instantaneous power in watts (W) transmitted by the system Pt is then the energy in the pulse divided by the pulse width in time. This describes the temporal shape of the power output of the transmitter.
This simple model of beam propagation yields a beam with a diameter of Db meters, where Db for small angles is computed by multiplying the angular divergence θt in radians (rad) by the propagation distance R. This simple model is consistent with the case where the transmitting beam is diffused so that it will propagate through a divergence angle determined by the beam diffuser. Another practical case of beam propagation is where the beam is focused. In this case, focusing optics are employed to produce the smallest beam diameter possible at the target. For a focused beam (or an unfocused beam propagating to the far field) passing through an aperture with a diameter of Dt meters, the beamwidth is proportional to the angular limit of resolution for any optical system. Figure 1.4 demonstrates the two scenarios that lead to the diffraction-limited beam size. In a situation where a lens is used to focus the beam, the diffraction-limited spot size is achieved when the lensmaker’s equation is satisfied. If the beam entering the lens is collimated, the diffraction-limited spot will appear at a distance equal to the focal length of the lens. A diffraction-limited spot can also be achieved by propagating a collimated field through a distance that is great enough to meet the far-field condition. In this case, the beam must propagate a distance greater than Rff to achieve the diffraction limit, where Rff is given by Eq. (1.10):
R_ff = 2Dt²/λ.    (1.10)
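Equation (1.10) is easy to evaluate numerically. The following Python sketch (the helper name is illustrative, not from the text) computes the far-field distance for the 50-mm, 1.06-μm LADAR aperture used earlier:

```python
def far_field_range_m(aperture_m: float, wavelength_m: float) -> float:
    # Eq. (1.10): R_ff = 2 * D_t^2 / lambda.
    return 2.0 * aperture_m**2 / wavelength_m

r_ff = far_field_range_m(0.05, 1.06e-6)  # 50-mm aperture, 1.06-um light
```

The result is roughly 4.7 km, showing that a tactical LADAR may need focusing optics to reach the diffraction limit at shorter ranges.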
Once the angular size of the propagating beam is known, the intensity reaching the target area through a vacuum can be computed via Eq. (1.11):
I_target ≤ 4Pt/[π(θt R)²].    (1.11)
Figure 1.4 Two scenarios that produce a beam at the target that is diffraction-limited in size.
In Eq. (1.11), the power of the transmitted beam is divided by the area of the beam at the target, yielding a quantity in units of watts per square meter (W/m²). The area of the beam is computed by using the small-angle approximation, in which the diameter of the circular beam is the range R to the target multiplied by the divergence angle of the beam. The inequality used in Eq. (1.11) denotes the fact that propagation losses have not been factored into this calculation.

1.3.2 Atmospheric transmission

As the transmitted laser beam propagates through the atmosphere, some of the energy is absorbed and scattered by atmospheric molecules and suspended dust and aerosols. This attenuation is wavelength dependent, as shown in Fig. 1.5. The sharp peaks in the absorption coefficient a(R) are due to molecular resonances from the gases that make up the normal atmosphere (O2, N2, and CO2), where the R in parentheses indicates that the absorption can vary along the propagation path due to variations in the atmosphere. While the scattering coefficient b(R) also has some wavelength dependence, it is much smoother and is due to Rayleigh scattering across the particle size distribution of the dust and aerosols. Practical methods have been put forward for measuring effects due to varying atmospheric conditions.11 Software programs are available that can be used to model conditions and scenarios (path length, look angle, etc.) and to estimate these effects.12 Once the values for these coefficients have been obtained, the transmission loss τa can be determined using Beer’s Law, as shown in Eq. (1.12):
τa = exp{−∫0^R [a(R) + b(R)] dR}.    (1.12)
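When a(R) and b(R) are available as functions of range, the integral in Eq. (1.12) can be evaluated numerically. This Python sketch (names are illustrative, not from the text) uses a simple midpoint rule:

```python
import math

def beer_transmission(a_of_r, b_of_r, range_m: float, steps: int = 1000) -> float:
    # Eq. (1.12): tau_a = exp(-integral_0^R [a(R) + b(R)] dR),
    # approximated here with a midpoint-rule sum.
    dr = range_m / steps
    total = sum((a_of_r((i + 0.5) * dr) + b_of_r((i + 0.5) * dr)) * dr
                for i in range(steps))
    return math.exp(-total)
```

For constant coefficients the result collapses to the closed form of Eq. (1.13); for example, a = b = 10⁻⁴ m⁻¹ over 1000 m gives exp(−0.2) ≈ 0.82.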
Figure 1.5 Atmospheric transmission in the 1- to 11-μm region.
In general, the actual range distribution of these coefficients is not known, and a nominal “average” value is assigned. In those cases, Eq. (1.12) simplifies to
τa = exp[−(a + b)R].    (1.13)
When this effect is added to the propagation of the transmitted laser beam, it causes the beam intensity at the target to be modified, as shown in Eq. (1.14):
I_target ≤ 4τa Pt/[π(θt R)²].    (1.14)
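Equations (1.13) and (1.14) can be combined in a few lines of Python (the helper name is illustrative, not from the text):

```python
import math

def target_intensity_wm2(P_t: float, theta_t: float, range_m: float,
                         tau_a: float = 1.0) -> float:
    # Eq. (1.14): I_target = 4 * tau_a * P_t / (pi * (theta_t * R)^2),
    # i.e., transmitted power spread over the small-angle beam footprint.
    return 4.0 * tau_a * P_t / (math.pi * (theta_t * range_m)**2)
```

With the values used later in Example 1.1 (1 MW instantaneous power, 10-mrad divergence, 1000-m range, unit transmission), the beam footprint is about 78.5 m² and the intensity at the target is roughly 1.3 × 10⁴ W/m².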
The application of the atmospheric transmission loss does not change the units of the beam intensity at the target, Itarget.

1.3.3 Target reflectivity and angular dispersion

The reflective nature of the target is quantified by the target parameter ρt. The target reflectance is a unitless quantity that captures the ability of the material to reflect laser radiation. The typical values for this parameter range from as little as 2% to as high as 25%, although an anomalous material such as a polished surface may have a much higher reflectance. The reflectivity quantifies the percentage of laser radiation that is reflected from the target surface. The surface area of the target is also an important factor in determining the amount of radiation that returns to the receiver. In order to compute the surface area parameter dA, it is important to understand the relationship between the laser RADAR receiver’s field of view (FOV) and the angular size of the target. The smaller of the two quantities should be used as the target surface area in the range equation as long as the illuminating beam area in the target plane is not smaller than both quantities. If the illuminating beam is the limiting factor, then the area of the target is equal to the projected area of the beam at the target, as shown in Eq. (1.15):
dA = πθt²R²/4.    (1.15)
In the case where the beam is not the limiting factor and the detector FOV is the limiting factor for determining the apparent target area, then the FOV can be modeled as a simple square target. The angular size of the square is approximately equal to the size of any side of the square divided by the range to the target. To compute the instantaneous field of view (IFOV) of the receiver, the size of the square detector used to collect the laser radiation is divided by the focal length of the optical system used to focus the radiation onto the detector, fl:
12
Chapter 1
IFOV = Δ/fl.    (1.16)
Comparing these quantities will determine whether the laser RADAR receiver FOV or the target dimension itself is used to determine the effective target area dA. If the angular size of the target is smaller than the FOV of the laser RADAR receiver, then dA is simply the area of the target. If the FOV is smaller than the angular size of the target, then Eq. (1.17) is used to compute dA:
dA = (IFOV · R)².    (1.17)
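The selection logic described above — use the smallest of the physical target area, the receiver-IFOV footprint of Eq. (1.17), and the beam footprint of Eq. (1.15) — can be sketched in Python (names are illustrative, not from the text):

```python
import math

def effective_target_area_m2(range_m: float, theta_t: float,
                             ifov_rad: float, target_area_m2: float) -> float:
    # dA is the smallest of: the physical target area, the receiver-IFOV
    # footprint (Eq. 1.17), and the illuminating-beam footprint (Eq. 1.15).
    beam_area = math.pi * (theta_t * range_m)**2 / 4.0
    fov_area = (ifov_rad * range_m)**2
    return min(target_area_m2, fov_area, beam_area)
```

For a 1-mrad IFOV at 1000 m against a large extended target, the 1-m² IFOV footprint governs dA even though the 10-mrad beam illuminates about 78.5 m².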
With the area and the reflectivity of the target, it becomes possible to compute the amount of laser power that is reflected from the target surface, Pref:
Pref = Itarget ρt dA.    (1.18)
1.3.4 Dispersion upon reflection

The laser radiation that is successfully reflected from the target can be reflected back in a variety of ways, depending on the characteristics of the reflecting surface. For mirrors and other so-called specular surfaces, the reflected angle (measured with respect to the surface normal) is equal to the incident angle. In this case the angular divergence of the reflected radiation matches the angular divergence of the beam before it intercepts the target, as shown in Fig. 1.6. For optically rough targets, the reflected radiation is spread out over an angle larger than the transmitted beamwidth, as shown in Fig. 1.7. The solid angle over which radiation is dispersed is denoted as Ω_R and generally takes on either the value of π steradians for Lambertian targets or a value equal to θt² steradians for mirrored surfaces. Values between these limits are possible for surfaces that are not entirely rough or polished, but these cases represent the extremes. In this model it is assumed that the beam is striking the target surface at an angle that is normal to the surface. Angular dependence will change the amount of reflected radiation as well. The bidirectional reflectance distribution function (BRDF) is a 4D function that determines the amount of light reflected from an object. The BRDF relates the reflectance of the object as a function of the incidence and observation angles as well as the local azimuth and zenith angles, but an adequate description of the BRDF is well beyond the scope of this text. With the angular dispersion of the reflected beam, the intensity of the reflected beam in the plane of the LADAR receiver aperture can be computed. This simplified treatment assumes the returning beam power is spread out over the surface area of a spherical section mapped out by the solid angle of the reflected beam. The intensity at the receiver aperture is computed using Eq. (1.19):
Figure 1.6 Angular divergence of reflected radiation when striking a smooth target. The angular divergence of the reflected beam matches that of the beam before it intercepts the target.
Figure 1.7 Rough targets disperse laser energy, scattering it over an angle greater than the incident beam divergence angle.
I_receiver = τa Pref/(Ω_R R²).    (1.19)
The power captured by the LADAR receiver becomes the product of Ireceiver and the area of the circular receiver aperture with diameter DR in Eq. (1.20):
P_receiver = τa πD_R² Pref/(4Ω_R R²).    (1.20)
Since the divergence or beamwidth of the pulse remains unchanged for specular reflections, the reflected energy will decrease rapidly as the angle of incidence deviates away from the normal. The equations presented here assume the beam is
striking the target at an angle normal to the target surface. Angles of incidence that are not normal to the target surface will produce less power at the receiver.

1.3.5 LADAR receiver throughput and efficiency

The efficiency of the laser RADAR receiver affects the amount of signal power measured by the system. The efficiency is driven by both the optics transmission and the quantum efficiency of the detector. The first factor, the transmission of the optics τo, indicates the fraction of energy that arrives at the detector from the total energy captured by the receiver aperture. Generally this efficiency factor is fairly high unless the LADAR receiver shares light from the aperture with other sensors. The end-to-end range equation to compute signal power at the detector Pdet that incorporates these efficiency terms is expressed by Eq. (1.21):
Pdet = τo τa² D_R² ρt (dA) Pt/[Ω_R R²(θt R)²].    (1.21)
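The end-to-end chain of Eqs. (1.14) and (1.18)–(1.21) collapses to a single expression that is convenient to code up. This Python sketch (names and default arguments are illustrative, not from the text) evaluates Eq. (1.21) directly:

```python
import math

def detected_power_w(P_t: float, theta_t: float, range_m: float,
                     rho_t: float, dA: float, D_R: float,
                     omega_R: float = math.pi,
                     tau_a: float = 1.0, tau_o: float = 1.0) -> float:
    # Eq. (1.21): P_det = tau_o * tau_a^2 * D_R^2 * rho_t * dA * P_t
    #                     / (omega_R * R^2 * (theta_t * R)^2).
    return (tau_o * tau_a**2 * D_R**2 * rho_t * dA * P_t
            / (omega_R * range_m**2 * (theta_t * range_m)**2))

p_det = detected_power_w(P_t=1e6, theta_t=0.01, range_m=1000.0,
                         rho_t=0.1, dA=1.0, D_R=0.01)
```

With these illustrative values (which match Example 1.1 later in the chapter), the detected power is about 3.2 × 10⁻⁸ W.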
The number of photons K produced by the power falling on the detector is a random variable whose mean is equal to the power falling on the detector × the integration time of the detector circuit divided by the energy per photon. The mean of this variable is computed by Eq. (1.22):
E[K] = Pdet t/(hν),    (1.22)
where E denotes the expectation operation, t is the integration time of the detector circuit, h is Planck’s constant, and ν is the frequency of the light striking the detector. The average number of photoelectrons Nsignal produced by the detector is equal to the number of photons multiplied by the quantum efficiency η of the detector. Equation (1.23) computes the fraction of the signal that is converted into photoelectrons as determined by the quantum efficiency:
E[Nsignal] = ηE[K].    (1.23)
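As a numeric check of Eqs. (1.22) and (1.23), this Python sketch (helper names are illustrative, not from the text) converts detected power into a mean photoelectron count:

```python
H_PLANCK = 6.626e-34  # Planck's constant, J*s
C_LIGHT = 3.0e8       # speed of light, m/s

def mean_photoelectrons(P_det_w: float, t_int_s: float,
                        wavelength_m: float, quantum_eff: float) -> float:
    nu = C_LIGHT / wavelength_m
    mean_photons = P_det_w * t_int_s / (H_PLANCK * nu)  # Eq. (1.22)
    return quantum_eff * mean_photons                   # Eq. (1.23)

n_sig = mean_photoelectrons(3.1831e-8, 1e-8, 1.55e-6, 0.3)
```

With a detected power of about 3.18 × 10⁻⁸ W, a 10-ns gate, 1.55-μm light, and η = 0.3, this lands near the ≈744 photoelectrons quoted later in Example 1.1.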
The electrons produced by the optical signal generate the electrical signal that is eventually digitized. The optical and electrical signals also contain a noise component that must be accounted for; noise effects are discussed in detail in Sec. 1.5.
1.4 Types of LADAR Systems and Applications

LADAR systems can be catalogued into many categories. They may be grouped by transmitted waveform (i.e., CW, modulated, or pulsed); by receiver concept (heterodyne or direct detection); or by the intended measurement (range, velocity, backscatter, or spectral absorption). Systems within the realm of direct-
detection LADAR vary depending on whether they illuminate portions of the target by scanning the target area with the illumination beam, or illuminate the entire target at once. The technical considerations associated with some of these categories and their applications will be discussed in this section.

1.4.1 Three-dimensional-imaging LADAR systems

Any sensor that could produce a full 3D image of a target or scene would make tasks such as target acquisition and recognition or mapping considerably easier. Several programs have been undertaken to develop just such a sensor. The most common concept uses a direct time-of-flight (TOF) incoherent design, as illustrated in Fig. 1.8. In the simplest implementation, an illumination pulse is generated by means of a short-pulse laser, and the signal generated by the reflected pulses is made available to a timing circuit to determine the range. To develop a useful image, the range measurements must be made at many individual picture elements (pixels). Three specific classes or approaches to 3D imaging will be discussed in more detail in Chapter 5.
1.5 Sources of Noise in LADAR Systems

Several phenomena contribute noise to direct-detection LADAR measurements. The noise sources include statistical fluctuations in the light arriving at the LADAR detector, noise introduced by the system, and unwanted photons. The types of noise discussed in this section are listed below:
Photon counting noise
Laser speckle
Thermal noise
Background noise
Figure 1.8 Three-dimensional imaging concept.
1.5.1 Photon counting noise

The number of photoelectrons counted during time t is a random variable whose mean is proportional to the expected number of photons. The random element in the measurements reflects the fact that the photons arrive at random times, thus introducing uncertainty in the number of photons measured during a finite time interval. The number of photons measured during the detector integration time has been proven to be a Poisson random variable. Figure 1.9 demonstrates the effect of random photon arrival times on the number of photons counted by a detector with a finite integration time. The photocurrent variance σ²_pc, which is expressed in units of C²/s², can be computed from the photocount variance by multiplying the expected photocount from Eqs. (1.22) and (1.23) by q_e² (the square of the elementary charge) and then dividing by t², as shown in Eq. (1.24):
σ²_pc = (q_e²/t²)E[Nsignal] = 2q_e²ηB Pdet/(hν).    (1.24)
In this equation, twice the positive bandwidth B of the detector circuit is equal to the reciprocal of the detector integration time t.

1.5.2 Laser speckle noise

Laser speckle noise effects are caused by interference occurring at the detector from a large collection of independent coherent radiators. The interference phenomenon occurs as the electromagnetic field from the illuminating laser
Figure 1.9 Effect of random arrival times on the number of photons counted during a fixed interval. If the photons arrive at predictable intervals, six photons are counted. Random arrival times here show eight photons being counted.
reflects off of an extended target surface. Goodman gives a full treatment of the mathematics that gives rise to the speckle. Speckle noise can be simulated in a laser RADAR measurement by modeling the number of photons detected, Nspeckle, as a negative binomial random variable with a mean equal to N and a variance given in Eq. (1.25):13
σ²_speckle = E[Nsignal](1 + E[Nsignal]/M).    (1.25)
In this equation, σ²_speckle is the variance of the measured photocounts, and M is the number of degrees of freedom of the light. For fully coherent light, M = 1, and for fully incoherent light, M approaches infinity; thus, M is a measure of the coherence of the light, both spatially and temporally, during the measurement time of the LADAR system’s detector. The negative binomial random variable captures the statistical fluctuations in the measured signal due to both the interference phenomena and the photon noise discussed in Sec. 1.5.1. Figure 1.10 shows how the variance of the photocount measurement changes as a function of M for the case where E[K] is equal to 100 photons. If we compare the noise variance in the case where speckle is present to the noise variance of the light when speckle is absent, it is clear that speckle simply adds excess noise that is proportional to the square of the power incident on the detector, which quickly becomes dominant when the coherence of the measurement is high (M = 1).
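Equation (1.25) can be checked numerically with a short sketch (the helper name is illustrative, not from the text):

```python
def speckle_variance(mean_signal: float, M: float) -> float:
    # Eq. (1.25): var = E[N_signal] * (1 + E[N_signal] / M).
    return mean_signal * (1.0 + mean_signal / M)

var_coherent = speckle_variance(100.0, 1.0)    # fully developed speckle
var_incoherent = speckle_variance(100.0, 1e9)  # M large: photon noise only
```

For a mean of 100 photons, fully coherent light (M = 1) gives a variance of 10,100, two orders of magnitude above the photon-noise-limited variance of 100 that is recovered as M grows large, which is the behavior plotted in Fig. 1.10.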
Figure 1.10 Speckle noise variance and photon noise variance for the case where the average number of photons is equal to 100. The speckle noise variance is plotted as a function of the number of degrees of freedom M in the light. As M moves toward infinity, the speckle noise variance approaches the photon noise variance.
1.5.3 Thermal noise

Anything that is not at a temperature of 0 K radiates photons. Since a detector is not perfectly cold, it will generate noise. If the detector is coupled to an analog-to-digital converter via a capacitor, then the electrical charge variance is determined by Eq. (1.26):
Q_n² = k_b TC/q_e².    (1.26)
In this equation, Q_n is the standard deviation of the number of thermal noise electrons, Nthermal, generated in the detector circuit; k_b is Boltzmann’s constant; T is the temperature of the circuit; and C is the capacitance of the circuit. Typically, the exact capacitance of the detector circuit is unknown. More practically, the thermal noise present in the LADAR system is assessed via a dark current measurement. Dark current is defined as the current traveling through the detector circuit when no light is present at the detector surface. The noise associated with the dark current itself is Poisson in nature. If the mean of the dark current is measured and subtracted from the variance of the dark current measurement, the resulting number is the variance of the thermal noise.

1.5.4 Background noise

Background in the context of LADAR system measurements constitutes any other light or signal that is collected by the detector and does not originate from the laser transmitter. For most practical scenarios, the background radiation is sunlight that falls on the area within the receiver’s instantaneous FOV. The background photons collected by the sensor bear no information concerning the range to the target, but the random arrival times of the photons from the background contribute noise to the LADAR system measurement. The variance of the noise generated by the background is equal to the number of photoelectrons produced by the background due to the Poisson nature of the noise. The calculation for the mean number of photoelectrons, E[Nb], received from the background during a LADAR system measurement is given in Eq. (1.27):
E[Nb] = S_IB Δλ ρt A_B τa τo D_R² η t/(4R²hν) + E[Ndark],    (1.27)
where Nb is the number of photoelectrons contributed by the background, including the Poisson noise;9 S_IB is the intensity of the background light at the target in units of W/m² per μm of electromagnetic bandwidth; and Δλ is the electromagnetic bandwidth in μm of an optical bandpass filter present in the LADAR system. Such a filter is typically designed to reduce the amount of background radiation and has bandwidths in the nm to sub-nm range. A_B is most often equal to dA unless this parameter is limited by the size of the transmit
beam, in which case A_B is determined by the smallest angular size of the target or the IFOV of the detector. In either case, A_B is an area computed in the same way as dA. E[Ndark] is the expected number of electrons contributed by dark current.

1.6 LADAR Systems and Models

This text is primarily focused on direct-detection LADAR systems that perform either scanning or flash modes of operation to form 3D images. Both types of systems possess a laser transmitter that illuminates the target area with a short laser pulse. The receiver gathers the light returning from the target and focuses it onto a detector that measures the return pulse as a function of time and determines the time of arrival using some sort of decision logic. The first basic LADAR model that will be developed in this text involves the computation of the signal and noise associated with the measurement of a rectangular pulse returning from a distant target that has been illuminated by a direct-detection LADAR system.

1.6.1 Computational model for the range equation and signal-to-noise ratio (SNR)

The method for computing the average number of photoelectrons collected by a hypothetical receiver for a given LADAR scenario involves a simple translation of the equations in this section into a MATLAB script. The LADAR scenario definition is the first part of the program. This section of the code defines the laser power, the range to the target, and the beam transmitter divergence angle. The laser power parameter is not the average power of the laser, but the instantaneous power. The instantaneous power Pt is computed by dividing the average power, P_avg, by the pulse repetition frequency fp; this intermediate quantity is the energy per pulse, E_t, which is then divided by the pulse width.
Example 1.1

In the following example, the LADAR system has an average power of 100 mW and a pulse repetition frequency of 10 pulses/s; the pulses themselves, assumed to be rectangular with a width in time of 10 ns, are being used to illuminate a target 1000 m away with a transmit beam divergence of 10 mrad:

True_range = 1000; % True range to target in units of meters
PRF = 10; % Pulses per second
P_avg = 0.1; % Average laser power in units of watts
E_t = P_avg/PRF; % Energy per pulse in units of joules
Pulse_width = 10e-9; % Pulse width in units of seconds
P_t = E_t/Pulse_width; % Instantaneous power in units of watts
theta_t = .01; % Transmit beam divergence in radians
tau_atm = 1; % Atmospheric transmission
tau_opt = 1; % Receiver optics transmission

The intensity of light at the target plane can be computed using Eq. (1.11). The following MATLAB code can be used to compute the target plane intensity, Itarget:

I_target = 4*tau_atm*P_t/(pi*(True_range^2)*(theta_t^2)); % Target intensity in units of watts per square meter
The intensity of the laser radiation in the target plane is used to compute the power of the light reflected back toward the receiver optics, Pref. To compute the reflected power, the target area and reflectivity must be specified. In most cases the target area is not limited by the physical dimensions of the target, but by the area of the target viewed by the detector within the LADAR receiver. The following MATLAB code can be used to compute the reflected power Pref:

rho_t = .1; % Target reflectivity
receiver_focal = .1; % Focal length of the LADAR receiver in meters
delta = 1e-4; % Physical size of the detector in the LADAR receiver in meters
dA = (True_range*delta/receiver_focal)^2; % Area of the target limited by the IFOV of the receiver in square meters
P_ref = I_target*dA*rho_t; % Reflected power from the target in units of watts

The reflected power returns to the LADAR receiver aperture after it has been dispersed by its interaction with the target. This interaction can produce a wide range of reflection characteristics, as described in Sec. 1.3.4. In the example used in this section, it is assumed that the target produces a Lambertian reflection and the light is scattered into π sr of solid angle. The intensity at the LADAR receiver aperture, Ireceiver, is computed via Eq. (1.19) using the following MATLAB code:

theta_r = pi; % Reflection solid angle for Lambertian targets
I_receiver = tau_atm*P_ref/(theta_r*(True_range)^2); % Intensity of the signal at the receiver aperture

The number of photoelectrons measured by the system from the pulse returned from the target is computed by multiplying Ireceiver by the area of the receiver aperture, the optical efficiency of the system, the efficiency of the detector, and the inverse of the photon energy:
quantum_eff = .3; % Quantum efficiency of the detector
ap_diameter = .01; % Aperture diameter in units of meters
h = 6.626e-34; % Planck's constant
v = 3e8/1.55e-6; % Frequency of light that has a 1.55 micrometer wavelength
N = tau_opt*(ap_diameter^2)*pi*I_receiver*Pulse_width*quantum_eff/(4*h*v);

The noise is assumed to have a component with a negative binomial distribution that arises from a combination of laser speckle and photon counting noise. It is also assumed to contain a component caused by background noise and a third component caused by thermal noise. In this text the strategy for simulating noise involves generating the components of the noise independently and adding them together to realize the total system noise. Traditionally, different types of noise have been combined by assuming independence and adding the variances of the noise. Although this strategy is valid for computing the combined variance of the noise, it does not allow for a model that produces realizations of noise with the proper probability mass function. In the example in this section, the negative binomial noise is simulated in MATLAB as described in Sec. 1.5.2. The simulation procedure uses the inverse cumulative distribution function (ICDF); the inverse of the cumulative distribution function (CDF) is the value of the random variable that produces the cumulative probability represented by the CDF.
The first step involved in simulating noise with a negative binomial distribution is to simulate a uniformly distributed random variable:

x = rand; % x is uniformly distributed between 0 and 1

The coherence parameter for this example is equal to 1 and represents a case where the speckle is fully developed.17 The following MATLAB command generates a negative binomial random variable with full coherence and a mean equal to the number of photoelectrons N predicted by the model:

M = 1; % Coherence parameter
N_speckle = icdf('nbin',x,M,M/(M+N)); % Noisy signal due to speckle

The noise due to the background is computed via Eq. (1.27). The result is the average number of photoelectrons received that are not due to the laser pulse. This example uses typical values for solar radiance that are dependent on the solar angle and cloud cover. The parameter AB, which represents the area of the scene that reflects sunlight into the receiver, is equal to the area on the target
subtended by the pixel. This is not always the case, but in this example the LADAR system is assumed to be illuminating a target much larger than the laser spot or the IFOV of the receiver. The dark current electrons in this example are computed by multiplying the dark current by the pulse width of the laser, as this is the time over which the LADAR measurement is being made. The following MATLAB code can be used to compute the noise due to the background:

S_irr = 1000; % Watts per square meter per micrometer
delta_lam = .001; % Bandwidth of receiver in units of micrometers
Pbk = S_irr*delta_lam*dA*rho_t*ap_diameter^2/(4*True_range^2); % Background power collected by the receiver
i_dark = 0.75e-9; % Dark current in amps
electron = 1.602e-19; % Elementary charge of the electron in coulombs
N_dark = i_dark*Pulse_width/electron;
N_b = Pbk*Pulse_width*quantum_eff*tau_atm*tau_opt/(h*v) + N_dark; % Mean number of photoelectrons from the background
N_back = poissrnd(N_b); % Background number of photons with random arrival times included

Finally, the thermal noise needs to be computed. If the temperature of the circuit is known, as well as the noise figure of the circuit, the noise electron standard deviation in the circuit can be computed via Eq. (1.26). Assuming a capacitance of 1 picofarad (pF) and a temperature of 300 K for this LADAR example, the following lines of code compute the number of noise electrons:

kb = 1.3806504e-23; % Boltzmann's constant
T = 300; % Circuit temperature in kelvin
C = 1e-12; % Capacitance of detector circuit
Q_n_sq = kb*T*C/electron^2; % Variance of electron thermal noise
N_thermal = sqrt(Q_n_sq)*randn; % Gaussian thermal noise random generation

In this particular example, the number of photoelectrons produced by the returning LADAR pulse is approximately equal to 744.46. The mean number of background photoelectrons is approximately equal to 46, while the thermal noise standard deviation is approximately 400 electrons.
The SNR can be computed by forming the ratio of the signal to the standard deviation of the noise. As long as the individual noise components are
Introduction to LADAR Systems
statistically independent of one another, the standard deviation of the total noise can be approximated as the square root of the sum of the variances of the individual noise components:
SNR = N_signal / √(σ²_speckle + Q²_n + N_b). (1.28)
The SNR for this example using Eq. (1.28) yields a result of 0.88. As an exercise, a Monte Carlo analysis can be used to verify the validity of Eq. (1.28) using MATLAB code. The Monte Carlo analysis uses random number generators to generate the various noise components. The random variable d represents the measured data, obtained by summing the noisy signal components as in Eq. (1.29):
d = N_speckle + N_b + N_thermal. (1.29)
The SNR is then estimated as the mean number of signal photoelectrons N divided by the standard deviation of d. The following MATLAB code computes the SNR via the Monte Carlo approach:

for trials=1:200
x=rand; % x is uniformly distributed between 0 and 1
M=1; % Coherence parameter
N_speckle=icdf('nbin',x,M,M/(M+N)); % Noisy signal due to speckle
S_irr=1000; % Watts per square meter per micrometer
delta_lam=.001; % Bandwidth of receiver in units of micrometers
Pbk=S_irr*delta_lam*dA*rho_t*ap_diameter^2/(4*True_range^2); % Background power collected by the receiver
i_dark=0.75e-9; % 0.75 nA of dark current
electron=1.602*10^(-19); % Elementary charge of an electron in coulombs
N_dark=i_dark*Pulse_width/electron;
N_b=Pbk*quantum_eff*tau_atm*tau_opt*Pulse_width/(h*v)+N_dark; % Number of photoelectrons from the background
N_back=poissrnd(N_b); % Background photoelectrons with Poisson arrival statistics (N_b already includes the dark-current term)
kb=1.3806504*10^(-23); % Boltzmann's constant
T=300; % Circuit temperature in Kelvin
C=1e-12; % Capacitance of detector circuit
Q_n_sq=kb*T*C/electron^2; % Variance of electron thermal noise
N_thermal=sqrt(Q_n_sq)*randn; % Gaussian thermal noise random generation
data(trials)=N_speckle+N_back+N_thermal;
end
SNR=N/std(data)

This code yields a result of 0.889 when run over 200 trials. The nature of the Monte Carlo analysis is such that the result varies from run to run, but it is in close agreement with the theoretically predicted value.

1.6.2 Avalanche photodiode

An avalanche photodiode (APD) is a device widely used in LADAR systems. It is essentially a detector that possesses gain. Within a conventional photodetector, a photon striking the detector surface has some probability of producing a single photoelectron that in turn produces a current within the detector circuit that can be converted to a voltage. An APD instead produces a flood, or avalanche, of photoelectrons from a single incoming photon. The gain of the APD dictates how many electrons are produced by each photon that is successfully converted into a useful signal, while the quantum efficiency of the detector determines the probability of causing the avalanche. A similar effect can be produced using a photomultiplier tube (PMT), which also amplifies light. The difference between these devices is simply the physical method by which the amplification takes place: the APD is a solid-state device while the PMT is not. The effect of the APD or PMT on the numerical model introduced in Sec. 1.6.1 is to add a gain factor to both the signal and certain noise terms. The effect of the APD can be captured by changing the expression for the SNR in the following way:
SNR_APD = G_apd N_signal / √(G²_apd σ²_speckle + Q²_n + G²_apd N_b), (1.30)
where Gapd is the gain of the APD or the number of photoelectrons that are created with the successful detection of each photon. The addition of the gain results in the new SNR, SNRAPD, which is higher for any gain over 1 because the APD does not amplify the electrons generated in the detector circuit that follows the APD. If the gain of the APD is made to be very large (on the order of 1000 or more), the detector is said to be operating in Geiger mode. Geiger-mode operation allows the thermal noise to be ignored in Eq. (1.30). Geiger-mode
operation does produce an effect known as saturation where the APD cannot respond to another photon after one photon has caused the avalanche. This results in a dead time during which the APD must recover and will not be able to respond to incoming photons. For this reason, Geiger-mode APDs generally perform ranging on the first surface encountered in the detector’s FOV and will not be capable of discerning the presence of another surface at a different range unless multiple pulses are fired. Figure 1.11 shows the SNR of the signal as a function of the APD gain computed via Eq. (1.30), thus demonstrating how the SNR increases as the APD moves from linear-mode operation to Geiger-mode operation.
Figure 1.11 SNR as a function of APD gain.
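The behavior of Eq. (1.30) can be explored numerically. The following Python/NumPy sketch (an illustrative translation; the book's examples use MATLAB) uses this section's example values — roughly 744 signal photoelectrons, 46 background photoelectrons, a thermal-noise standard deviation of 397 electrons, and a fully developed speckle coherence parameter M = 1, for which the negative binomial variance is N + N²/M:

```python
import numpy as np

def snr_apd(gain, n_signal=744.0, n_b=46.0, q_n=397.0, m=1.0):
    """Eq. (1.30): the APD gain multiplies the signal, the speckle noise,
    and the background noise, but not the thermal (circuit) noise q_n."""
    var_speckle = n_signal + n_signal**2 / m    # negative binomial variance
    noise_var = gain**2 * var_speckle + q_n**2 + gain**2 * n_b
    return gain * n_signal / np.sqrt(noise_var)

gains = np.logspace(0, 3, 50)     # from linear mode (G = 1) toward Geiger mode (G ~ 1000)
snrs = snr_apd(gains)
limit = snr_apd(1.0, q_n=0.0)     # thermal-noise-free (Geiger-mode) ceiling
```

At G = 1 this reproduces the SNR of about 0.88 computed earlier, and as the gain grows the SNR rises monotonically toward the value obtained by ignoring the thermal noise, which is the Geiger-mode approximation described above.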
1.7 Problems

1-1
A laser transmitter produces an average of 1 W of laser power and 10 pulses/s. If the pulses are rectangular with a width of 5 ns, what is the energy per pulse of the laser light?
1-2
A laser transmitter produces an average of 5 W of laser power and 100 pulses/s. If the pulses are rectangular with a width of 10 ns, what is the peak instantaneous power of the laser light in the pulse?
1-3
A laser transmitter with a beam divergence of 0.01 rad illuminates a target that is square with sides 1 m in length. The range to the target is 1 km. The receiver has a focal length of 1 m and a detector size of 100 μm. What is the correct value of dA for use in the laser range equation for this scenario?
1-4
A laser transmitter with a beam divergence of 0.001 rad illuminates a target that is square with sides 5 m in length. The range to the target is 3 km. The
receiver has a focal length of 2 m and a detector size of 1 mm. What is the correct value of dA for use in the laser range equation for this scenario?

1-5
A bistatic LADAR system illuminates a target that produces a Lambertian reflection. Using the simple assumptions found in the development of the range equation, what is the maximum angular separation between the transmitter and receiver that would allow a range measurement to be taken?
1-6
If a LADAR system receives 100 photons from the returned pulse and 10 photons of background radiation, what is the SNR of the system? Assume the coherence parameter M = 10 and that there is no thermal noise.
1-7
If a LADAR system receives 400 photons from the returned pulse and 50 photons of background radiation, what is the SNR of the system? Assume thermal noise is also present in the measurement with a standard deviation of 100 electrons, the dark current is equal to 10 nanoamps (nA), and the pulse width is 10 ns.
1-8
If the sun produces 1000 W/m2/μm of radiation on a target and the LADAR system has a square 100-μm detector pixel and a 1-m focal length, how many photons of background noise are collected during a measurement time of 10 ns? The LADAR system has a 10-cm aperture diameter, the target is Lambertian with a reflectance of 10%, and the range from the LADAR system to the target is 1 km. Also assume no transmission losses, a wavelength of 1 μm for the laser radiation, and a bandwidth of 1 nanometer for the optical rejection filter.
1-9
Using the computer code provided in this chapter and the LADAR system parameters contained in it, vary the detector gain by running the code for gain values of between 1 and 1000. Then plot the SNR as a function of the gain and comment on how the SNR changes as a function of the gain.
1-10 Using the computer code provided in this chapter and the LADAR system parameters contained in it, vary the range of the target and plot the SNR versus the range. Determine the LADAR system’s effective range distance by finding the ranges for which the SNR is greater or equal to 1. Assume the gain of the APD = 50. Repeat these exercises for the case where the APD gain = 1000. This problem is an example of linear-mode operation (low gain) versus Geiger-mode operation (high gain).
Chapter 2
LADAR Waveform Models

Chapter 1 featured the computation of the signal power measured from a laser pulse reflected from a target. The assumed waveform of the pulse was a rectangular function in time. This chapter describes more complicated waveform models that allow for a better temporal understanding of a LADAR system's performance. It explains the tools necessary to compute the shape of the returned pulse that has been reflected from a laser-illuminated target under a variety of conditions. Target interaction models are introduced that predict how the shape of the laser pulse in time is modified by the target surface geometry. Both waveform signal and noise models are used to simulate realistic LADAR returns and facilitate the derivation of algorithms that are capable of extracting range information from the LADAR signal. Chapter 4 will show that the shape of the waveform has an effect on the accuracy of range measurements extracted from the LADAR return signal. This chapter also features the use of the discrete Fourier transform (DFT) for processing laser RADAR data. The DFT is an important tool for both simulating LADAR waveforms and estimating range from measured LADAR data. To utilize the DFT, it is necessary to gain some familiarity with its form and properties. This chapter features some examples of the DFT of typical waveform shapes and signals.

2.1 Fourier Transform

The Fourier transform maps a signal in the time domain (like a laser RADAR waveform) to a function that describes the frequency content of the signal. For example, if the signal were a complex sinusoidal wave as a function of time of the form e^{j2πft}, with a frequency f in units of Hz, its transform would be zero for every frequency index except one, whose magnitude is the amplitude of the complex sinusoidal wave and whose phase is equal to the phase of the wave.
Thus, the Fourier transform is a complex quantity G whose values for different frequencies are computed via Eq. (2.1):
G(f) = ∫_{−∞}^{∞} g(t) e^{−j2πft} dt. (2.1)
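The single-frequency example described above can be checked numerically. This Python/NumPy sketch (illustrative; the book's code is MATLAB) builds a complex sinusoid whose frequency falls exactly on a DFT bin and confirms that all of its energy appears in that one bin; note that the DFT magnitude at the bin is the amplitude scaled by the number of samples N:

```python
import numpy as np

N, dt = 128, 1e-3
fo = 1.0 / (N * dt)                    # fundamental frequency of the transform
m = 5                                  # the sinusoid sits exactly on bin m
g = np.exp(1j * 2 * np.pi * (m * fo) * np.arange(N) * dt)

mag = np.abs(np.fft.fft(g))
assert np.argmax(mag) == m             # all energy concentrates in bin m
assert abs(mag[m] - N) < 1e-9          # bin magnitude = amplitude x N
assert np.all(np.delete(mag, m) < 1e-9)
```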
A discrete version of the Fourier transform samples the signal and its transform in time and frequency. The DFT shown in Eq. (2.2) uses a summation as opposed to an integral due to the discrete version of the signal used in the operation. The summation does not go from negative to positive infinity, because it is assumed that the signal is periodic with a period equal to NΔt:

G(nf_o) = Σ_{k=0}^{N−1} g(kΔt) e^{−j2πnf_o kΔt}. (2.2)
In this equation, g is a set of digital samples that are indexed by the integer k, which ranges between 0 and N−1. Each sample in the signal corresponds to a time that is computed by multiplying the index by the time between samples, Δt, in units of seconds. The DFT of the signal g is the function G, which is also a discrete set of samples indexed by the integer n, which ranges between 0 and N−1. The discrete frequencies corresponding to the frequencies present in the signal are equal to the index n multiplied by the fundamental frequency of the transform, f_o. The entire transform is defined by the time between samples and the number of samples in the signal. The relationship between these parameters and the fundamental frequency is given by Eq. (2.3):
f_o = 1/(NΔt). (2.3)
This choice of fundamental frequency, when substituted back into Eq. (2.2), produces the following simplification of the DFT:

G(nf_o) = Σ_{k=0}^{N−1} g(kΔt) e^{−j2πnk/N}. (2.4)
The DFT possesses an inverse operation that maps the function G in the frequency domain back to the time domain signal g:

g(kΔt) = (1/N) Σ_{n=0}^{N−1} G(nf_o) e^{j2πnk/N}. (2.5)
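Equations (2.4) and (2.5) can be verified directly. The Python/NumPy sketch below (an illustrative translation of the book's MATLAB workflow; the sample count and pulse parameters are arbitrary choices) evaluates the DFT summation explicitly, checks it against a built-in FFT, and confirms that the inverse DFT with the conventional 1/N normalization recovers the original samples:

```python
import numpy as np

N, dt = 64, 1e-9                      # 64 samples spaced 1 ns apart
fo = 1.0 / (N * dt)                   # fundamental frequency, Eq. (2.3)
k = np.arange(N)
g = np.exp(-((k * dt - 32e-9) ** 2) / (2 * (4e-9) ** 2))  # a sampled Gaussian pulse

# Direct evaluation of the summation in Eq. (2.4): E[n, k] = e^{-j 2 pi n k / N}
E = np.exp(-1j * 2 * np.pi * np.outer(np.arange(N), k) / N)
G = E @ g
assert np.allclose(G, np.fft.fft(g))  # agrees with the built-in FFT

# The inverse DFT of Eq. (2.5), with its 1/N factor, recovers g
g_back = (E.conj().T @ G) / N
assert np.allclose(g_back, g)
```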
2.1.1 Properties of the DFT

This section introduces a number of important properties of the DFT. In most cases these properties are presented without proof; however, a few sample proofs
are offered to demonstrate the process by which DFT properties can be derived from the basic DFT and inverse DFT operations shown in Eqs. (2.4) and (2.5). Properties of the continuous-time Fourier transform are similar to those of the DFT in many cases. However, those properties are not discussed here because the continuous Fourier transform is not used to process LADAR data gathered by sensors or produced by computer simulation.

2.1.1.1 Periodicity of the DFT

The DFT is a periodic function of frequency.15 Since the maximum frequency present in the transform is equal to Nf_o, the values of the transform for frequencies between 0 and Nf_o are equal to the values from −Nf_o to 0. If N is even, the first N/2 samples in the DFT correspond to the positive frequency components of the signal from 0 to Nf_o/2 − f_o, while the next N/2 samples are the negative frequency components from −Nf_o/2 to −f_o.

2.1.1.2 Time-shift property of the DFT

The time-shift property of the DFT states that if a signal g(kΔt) has a DFT G(nf_o), then the DFT of the delayed signal g((k − s)Δt), where s is a delay in the signal in units of samples, is given by15
G(nf_o) e^{−j2πns/N} = Σ_{k=0}^{N−1} g((k − s)Δt) e^{−j2πnk/N}. (2.6)
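A quick numerical check of the time-shift property, Eq. (2.6), in Python/NumPy (illustrative values; the shift is applied circularly, which matches the periodicity the DFT assumes):

```python
import numpy as np

N = 128
g = np.zeros(N)
g[10:20] = 1.0                        # a 10-sample rectangular pulse
s = 7                                 # delay in units of samples
g_shifted = np.roll(g, s)             # g((k - s) dt), with circular wraparound

n = np.arange(N)
lhs = np.fft.fft(g) * np.exp(-1j * 2 * np.pi * n * s / N)  # G(n fo) e^{-j2 pi n s/N}
rhs = np.fft.fft(g_shifted)
assert np.allclose(lhs, rhs)
```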
2.1.1.3 Convolution property of the DFT

The convolution of two functions can be computed with the use of Fourier transforms via the convolution property of the Fourier transform. Computing the DFT of the convolution summation is the first step in proving this property:

G(nf_o) = Σ_{k=1}^{N_s} Σ_{m=1}^{N_s} h((k − m)Δt) s(mΔt) e^{−j2πkn/N}. (2.7)
In this equation, G is the result of the Fourier transform of the discrete convolution of the functions h and s. Rearranging Eq. (2.7) yields an alternative expression for G:

G(nf_o) = Σ_{m=1}^{N_s} s(mΔt) Σ_{k=1}^{N_s} h((k − m)Δt) e^{−j2πkn/N}.

Substituting k′ = k − m allows the rightmost summation to be expressed as the DFT of the function h. This substitution also implies that k = k′ + m. Thus, the following expression is attained:

G(nf_o) = Σ_{m=1}^{N_s} s(mΔt) e^{−j2πmn/N} Σ_{k′} h(k′Δt) e^{−j2πk′n/N}. (2.8)
The DFT of both functions can be recognized after performing the substitution operation. Therefore, the DFT of the convolution of two functions is equal to the product of their DFTs. As a consequence of this property of the DFT, the convolution of two functions can be computed in MATLAB using the following line of code:

g=real(ifft(fft(s).*fft(h)));

2.1.2 Transforms of some useful functions

2.1.2.1 Transform of a Gaussian function
The Gaussian function occurs in many applications and disciplines. In LADAR the Gaussian shape can be used to describe pulse shapes produced by laser illuminators. The continuous Gaussian function is given by13

g(t) = (1/(σ√(2π))) e^{−t²/(2σ²)}. (2.9)
This signal possesses the traditional Gaussian shape as a function of t, with the standard deviation equal to σ in units of seconds. The Fourier transform of g is equal to13
G(f) = e^{−2π²σ²f²}. (2.10)
This result also has a Gaussian shape; however, it has been scaled in both amplitude and width. The DFT possesses exactly the same relationship between the discrete Gaussian and its transform.

2.1.2.2 DFT of a rectangular shape

The rectangle function is very useful in LADAR systems because it describes both the pulse shapes of ideal laser pulses and the range profiles of simple sloped targets. The rectangle function, or rect function, is parameterized by its width W:15

rect(t/W) = 1 if −W/2 < t < W/2, and rect(t/W) = 0 otherwise.
The Fourier transform of the rect function scaled by the width parameter W in units of seconds is a scaled sinc function:
G(f) = W sinc(Wf). (2.11)
The DFT has the same relationship between the discrete rectangle function and its transform. The difference between the discrete and continuous cases is that the parameter t is replaced by an integer k times the sample time Δt.
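Both transform pairs can be confirmed numerically by treating the DFT, multiplied by the sample spacing Δt, as a Riemann-sum approximation of the continuous transform of Eq. (2.1). A Python/NumPy sketch (illustrative parameter values; magnitudes are compared so that the position of the function on the time axis does not matter):

```python
import numpy as np

N, dt = 4096, 0.01
t = (np.arange(N) - N // 2) * dt              # time axis centered near zero
f = np.fft.fftfreq(N, dt)

# Gaussian pair, Eqs. (2.9) and (2.10): |DFT| x dt approximates |G(f)|
sigma = 0.5
g = np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
G = np.abs(np.fft.fft(g)) * dt
assert np.max(np.abs(G - np.exp(-2 * np.pi**2 * sigma**2 * f**2))) < 1e-6

# Rect/sinc pair, Eq. (2.11), checked well below the Nyquist frequency
W = 1.0
r = (np.abs(t + dt / 2) < W / 2).astype(float)  # grid offset so no sample sits on an edge
R = np.abs(np.fft.fft(r)) * dt
low = np.abs(f) < 5.0
assert np.max(np.abs(R[low] - np.abs(W * np.sinc(W * f[low])))) < 0.02 * W
```

Note that NumPy's `sinc(x)` is the normalized sin(πx)/(πx), matching the convention used in Eq. (2.11).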
2.2 Laser Pulse Waveform Models

The target illuminated by a LADAR system is assumed to contain at least one reflective surface in the receiver's IFOV. This implies that the return observed by each pixel as a function of time will be equal to an attenuated version of the outgoing pulse plus a bias term associated with background light and the bias current in the detectors. The amount of power produced by the laser and transmitted toward the target area as a function of time, P_t, dictates the shape of the pulse in time. As stated previously, the simplest waveform model is the rectangular pulse shape. In this model, the amount of energy per unit time is given by
P_t(t) = (E_t/pw) rect(t/pw), (2.12)
where P_t is the power in the laser pulse as a function of time, E_t is the energy of the pulse in units of joules, t is the time in seconds, and pw is the width of the pulse from edge to edge in units of seconds.

2.2.1 Gaussian pulse model

The rectangular pulse model is an approximation to the actual pulse shapes produced by laser illuminators. This chapter introduces some common shapes used to describe symmetric pulses. The first nonrectangular pulse shape we will introduce is the Gaussian, whose functional form is identical to the probability density function of a normally distributed random variable:
P_t(t) = (E_t/(σ_w√(2π))) e^{−t²/(2σ_w²)}. (2.13)
In this equation, σ_w is the width parameter of the Gaussian pulse shape in units of seconds. This shape provides a more realistic description of the power produced by a laser transmitter than the rectangular pulse, because the laser power is a continuous function of time. One drawback of this model is the leading tail of the pulse, which implies that the output of the laser illuminator is nonzero for all time before the pulse is fired. This response, which gets ever smaller as time moves toward negative infinity, results in a violation of causality since the
system output occurs long before the operator fires the laser. The trailing tail that continues to infinity is less of a problem, because it does not violate causality.

2.2.2 Negative parabolic pulse model

Another model used to describe the output power of laser transmitters is the negative parabolic model. In this model the laser power as a function of time is modeled as an inverted parabola. The negative parabolic model, with a pulse width of pw seconds, is shown in Eq. (2.14):14

P_t(t) = (3E_t/(2pw)) [1 − 4t²/pw²] rect(t/pw). (2.14)
The scaling of this function guarantees that the energy in the pulse is equal to the parameter E_t. The rectangular function serves to limit the pulse to the interval over which the parabola is non-negative, which is what is expected for the power produced by a laser transmitter. The benefit of the negative parabola model is that it is a causal function: its output is identically zero before the pulse turns on. The drawback of this model is that it has no tail to account for the exponentially decreasing power output found in many laser transmitter pulse profiles. Figure 2.1 shows examples of the three pulse models plotted on the same time axis.
Figure 2.1 Pulse shapes for the rectangular, Gaussian, and negative parabolic shapes. The pulses each have a total of 1 J of energy, and they all have a total width of 6 ns.
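The three pulse models of Eqs. (2.12)–(2.14) can be generated and checked numerically. In this Python/NumPy sketch (illustrative; the Gaussian width σ_w = 1 ns is an assumption chosen so that ±3σ_w spans the 6-ns width used in Fig. 2.1), each pulse integrates to approximately 1 J:

```python
import numpy as np

Et, pw = 1.0, 6e-9                  # 1 J of pulse energy; 6-ns total width
sigma_w = 1e-9                      # assumed Gaussian width: +/-3 sigma spans 6 ns
t = np.linspace(-10e-9, 10e-9, 20001)
dt = t[1] - t[0]

P_rect = (Et / pw) * (np.abs(t) < pw / 2)                              # Eq. (2.12)
P_gauss = Et / (sigma_w * np.sqrt(2 * np.pi)) * np.exp(-t**2 / (2 * sigma_w**2))  # Eq. (2.13)
P_parab = (3 * Et / (2 * pw)) * (1 - 4 * t**2 / pw**2) * (np.abs(t) < pw / 2)     # Eq. (2.14)

for P in (P_rect, P_gauss, P_parab):
    assert abs(np.sum(P) * dt - Et) < 1e-3   # each pulse carries ~1 J of energy
```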
2.2.3 Hybrid pulse models

Another type of model for describing the power produced by a laser transmitter as a function of time is the hybrid pulse model. This model is a combination of the other models introduced in this chapter: one model is used for the front half of the pulse and a different model for the second half. A hybrid pulse can thus avoid the causality problem by using a causal shape such as the negative parabola for the front half of the pulse, while the back half can use the Gaussian model, which describes the exponential decay of power sometimes observed in laser pulses. An equation for a hybrid model with a negative parabolic pulse on the front end combined with a Gaussian pulse on the back end is shown in Eq. (2.15):
P_t(t) = (E_t/norm) {[1 − 4t²/pw²] rect((2t + pw/2)/pw) + e^{−t²/(2σ_w²)} rect((t − sσ_w/2)/(sσ_w))}. (2.15)
In this equation, norm is a normalizing factor in units of seconds that accounts for the area under the curve of the hybrid waveform. This normalizing factor is computed for different hybrid waveform parameters by integrating P_t(t) from negative to positive infinity with E_t and norm both set equal to 1 in Eq. (2.15). The parameter s in the hybrid waveform model allows the user to control the length of the tail of the Gaussian part of the waveform in terms of the Gaussian width parameter. Figure 2.2 provides an example of a pulse shape that can be achieved using the hybrid waveform model.
Figure 2.2 Hybrid waveform created with a negative parabolic pulse width of 6 ns on the front half of the waveform and a Gaussian standard deviation of 2 ns on the back side of the waveform. The normalizing factor is 4.5 ns.
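The normalizing factor can be computed exactly as the text describes: integrate the unit-amplitude hybrid pulse numerically. The Python/NumPy sketch below (the tail-length parameter s = 6 is an assumption; any value long enough to capture essentially the whole Gaussian tail gives nearly the same result) reproduces approximately the 4.5-ns factor quoted in Fig. 2.2:

```python
import numpy as np

pw, sigma_w, s = 6e-9, 2e-9, 6        # 6-ns parabolic front; 2-ns Gaussian back
t = np.linspace(-pw / 2, s * sigma_w, 200001)
dt = t[1] - t[0]

front = (1 - 4 * t**2 / pw**2) * (t < 0)            # unit-amplitude parabola, -pw/2 <= t < 0
back = np.exp(-t**2 / (2 * sigma_w**2)) * (t >= 0)  # unit-amplitude Gaussian tail, t >= 0
norm = np.sum(front + back) * dt                    # area under the hybrid pulse
assert abs(norm - 4.5e-9) < 0.05e-9                 # close to the 4.5 ns quoted in Fig. 2.2
```

The front half contributes 2 ns (half the area of a 6-ns negative parabola) and the Gaussian half contributes σ_w√(π/2) ≈ 2.51 ns, consistent with the quoted factor.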
2.2.4 Digital waveform models
The waveform models introduced in previous sections of this chapter are analog waveforms. One major application of waveform models is LADAR system simulation. Simulation models require a discrete form of the signals processed by the LADAR system. This section introduces discrete pulse models to facilitate this type of computer modeling. The adaptation of continuous models to discrete models is accomplished by sampling the pulse waveforms in time. The sampled version of the rectangular waveform, P_t(k, Δt), is shown in Eq. (2.16):
P_t(k, Δt) = (E_t/pw) rect(t_k/pw) rect((t − t_k)/Δt). (2.16)
In this equation, k is the sample number, which is an integer, Δt is the time between samples, and t_k = kΔt. In most circumstances the sample time should be chosen small enough to meet the Nyquist criterion for the waveform shape.15 The integer k tracks the number of samples that occur between when the pulse is fired and when it is received. The rectangular pulse shape measured at the LADAR receiver as a function of time, which includes this time delay, is shown in Eq. (2.17):
P_det(k, Δt) = (E_det/pw) rect((t_k − 2R/c)/pw) rect((t − t_k)/Δt). (2.17)
In this equation, Edet is the total energy detected from the target reflection of a rectangular pulse with energy Et as predicted by the laser range equation. The same process of sampling the waveform and delaying it to account for the range to the target and back to the receiver can be performed for both the Gaussian and negative parabolic waveforms. This process yields the pulse models shown in Eqs. (2.18) and (2.19):
P_t(k, Δt) = (E_t/(σ_w√(2π))) e^{−(t_k − 2R/c)²/(2σ_w²)} rect((t − t_k)/Δt) (2.18)

and
P_t(k, Δt) = (3E_t/(2pw)) [1 − 4(t_k − 2R/c)²/pw²] rect((t_k − 2R/c)/pw) rect((t − t_k)/Δt). (2.19)

These discrete models produce waveforms that are piecewise rectangular. The return power computed for a rectangular pulse from the range equation in Chapter 1 can be used to compute the return power at discrete points in time
provided each sample is treated as a narrow rectangular segment; this extends the rectangular-pulse result to a much wider range of waveforms. The detected power from a rectangular pulse was given by Eq. (1.18). When the transmitted power becomes a waveform of piecewise rectangular segments, the power in the waveform detected by the LADAR receiver is computed via
P_det(k, Δt) = [τ_opt τ_atm² D_R² ρ_t (dA) / (θ_r R² (θ_t R)²)] P_t(k, Δt). (2.20)
Operationally, the modifications to the computer code introduced in Chapter 1 to calculate the received waveform power as a function of time involve the construction of a loop over time indices. This structure allows different discrete times to be visited in the simulation between the minimum and maximum times for which the receiver is programmed to measure the signal returning from the target. This set of times corresponds to the range gate of the LADAR system, Rgate:
R_gate = (T_max − T_min) c/2, (2.21)
where Tmin corresponds to the first time the LADAR system begins to measure the return signal from the target, and Tmax corresponds to the last time the return signal is measured. Example 2.1
A LADAR system with the parameters listed below is used to illuminate a target 1000 m away. This exercise will predict the number of photons received from the target as a function of time if the receiver is set up with a range gate between 990 and 1010 m. The laser transmitter is assumed to be transmitting a Gaussian-shaped pulse. The MATLAB code for this exercise is as follows:

True_range=1000; % True range to target in meters
PRF=10; % Pulses per second
P_avg=0.4; % Average laser power in units of watts
E_t=P_avg/PRF; % Energy per pulse in units of joules
Sigma_w=2e-9; % Pulse standard deviation in units of seconds
theta_t=.01; % Transmit beam divergence in radians
tau_atm=1; % Atmospheric transmission
tau_opt=1; % Receiver optics transmission
rho_t=.1; % Target reflectivity
reciever_focal=.1; % Focal length of the LADAR receiver in meters
delta=1e-4; % Physical size of the detector in the LADAR receiver in meters
dA=(True_range*delta/reciever_focal)^2; % Target area in square meters
theta_r=pi; % Reflection angle for Lambertian targets
ap_diameter=.01; % Aperture diameter in units of meters
Rmin=990; % Minimum range in the range gate
minT=Rmin*2/3e8; % First time that the receiver will measure the return
Rmax=1010; % Maximum range in the range gate
maxT=Rmax*2/3e8; % Last time that the receiver will measure the return
deltat=Sigma_w/10; % Sample time to ensure good pulse shape sampling
t=minT:deltat:maxT; % Range of times over which the return signal is measured
P_t=(E_t/(sqrt(2*pi)*Sigma_w))*exp(-((t-True_range*2/3e8).^2)/(2*Sigma_w^2)); % Transmitted pulse shape
I_target=4*tau_atm*P_t/(pi*(True_range^2)*(theta_t^2)); % Intensity at the target
P_ref=I_target*dA*rho_t; % Reflected power from the target in watts
I_receiver=tau_atm*P_ref/(theta_r*True_range^2); % Intensity at the aperture
P_rec=tau_opt*(ap_diameter^2)*pi*I_receiver/4; % Received signal power
plot(t,P_rec)
The code in this numerical example generates the plot of time versus received power shown in Fig. 2.3. This plot does not include noise sources or background radiation.

2.3 Pulse/Target Surface Interaction Models
In Sec. 2.2, the waveform measured at the receiver was assumed to have been the result of a reflection of the transmitted pulse off of a surface that was normal to
Figure 2.3 Received signal power as a function of time calculated for Example 2.1.
the direction of propagation. This section explores the effects of reflection off non-ideal surfaces. In the first case, the FOV contains two surfaces of equal area but different ranges, as shown in Fig. 2.4. The response from the first surface can be computed from the range equation using half the area of the FOV, and the received waveform contains a response from both the first surface and the second surface. The presence of the first surface does not attenuate the amount of light reaching the second surface because they are adjacent to one another. The waveform seen by the receiver is the sum of the waveform returned from the first surface and the waveform returned from the second surface. The code in Example 2.2 was used to generate two separate waveforms corresponding to the scenario shown in Fig. 2.4. The surfaces were separated by 5 m so they would be easily resolved.
Figure 2.4 Diagram to demonstrate the case where two objects are within the FOV of the LADAR receiver. S1 is the first surface with an area equal to half of the area subtended by the IFOV at a range of R1 from the LADAR receiver. S2 is the second surface with an area equal to the first at a range of R2 from the receiver.
Example 2.2
This example is identical to the scenario in Example 2.1 with the exception that the ranges to the targets are 1000 and 1005 m, and the area of each target is reduced by half. The previous code can be used to obtain the received power from the first surface with the following changes:

True_range=1000; % True range to first target in meters
dA=((True_range*delta/reciever_focal)^2)/2; % Target 1 area in square meters
P_rec1=tau_opt*(ap_diameter^2)*pi*I_receiver/4; % Signal power from S1

The same code can be changed again to obtain the power from the second surface:

True_range=1005; % True range to second target in meters
P_rec2=tau_opt*(ap_diameter^2)*pi*I_receiver/4; % Signal power from S2

The same code also can be used to generate plots of the received power from the two targets. The total received power is computed by summing the two waveforms shown in Fig. 2.5:

P_rec=P_rec1+P_rec2; % Received signal power from both waveforms
Figure 2.5 Plot of the received power from two surfaces separated by 5 m.
The result obtained in Example 2.2 can be generated in a different way by adding the concept of the target range profile to the model. The target range profile takes the place of the target area variable times the surface reflectivity. In situations where the area of the target is determined by the IFOV of the sensor, the areas of the different surfaces in the IFOV sum to the total area dA. If the dA parameter times the reflectivity ρ_t is removed from the range equation, the signal power returned from a target with an area of 1 m² and unity reflectance can be computed:
P_det(k, Δt) = [τ_opt τ_atm² D_R² / (θ_r R² (θ_t R)²)] P_t(k, Δt). (2.22)
The next step to modify the model is to introduce the target profile Tp(tkk). The index kk in the target profile vector is an integer corresponding to the time sample tkk, which is related to the TOF of the LADAR pulse from the system to the target. The target profile is used to model the range-dependent surface area of the target times the range-dependent reflectivity, as long as the range variation within the target is an order of magnitude less than the overall range from the LADAR to the target area. Example 2.3
In this example, T_p(t_kk) = 0 except when t_1 = 6.67 μs and t_2 = 6.7 μs. At these times, T_p(t_1) = T_p(t_167) = 0.5 m², which is half the value of dA computed in Example 2.1. In this case, the code in Example 2.1 is modified in the following way:

P_ref=I_target; % Reflected power from a 1-square-meter target
With this modification, the received power is computed as it was in Example 2.1 with the exception that the signal is consistent with the power that would be produced from a target with an area of 1 m² and a reflectance of 1. The target profile must be created for this model. The first target surface occurs at a range of 1000 m, and the second occurs at 1005 m. These ranges must be converted to times, and then the appropriate values for the areas of those targets must be recorded in the profile. This is accomplished by creating a vector with the same number of samples as the time vector associated with the range gate. The following lines of code can be used:

T_p=zeros(size(t)); % Create the target profile with the same number of time samples as the range gate
T_p(1)=dA*rho_t; % The first surface is at the range of 1000 meters
T_p(round((2*5/3e8)/deltat))=dA*rho_t; % The second surface is 5 meters beyond the target range
P_rec_tot=real(ifft(fft(P_rec).*fft(T_p))); % Convolution of P_rec and T_p
The final step in Example 2.3 is to use the target range profile in conjunction with the pulse shape of the laser to produce the returned pulse from the target. Because the target area has been removed from the received power calculation, the calculated returned pulse needs to be multiplied by the area of each surface to produce the correct power at the receiver and shifted by the range from the location of the first surface to the location of the surface in question:
P_det_tot(t_k) = P_det(t_k − t_1) T_p(t_1) + P_det(t_k − t_2) T_p(t_2). (2.22)
If this procedure is repeated for any number of surfaces in the range gate, then this equation becomes

P_det_tot(t_k) = Σ_{kk=1}^{N_s} P_det(t_k − t_kk) T_p(t_kk). (2.23)
The result in Eq. (2.23) is the convolution of the pulse received from a target that has an area of 1 m2 and the target range profile. In this equation, Ns is the number of samples in the time vector. The convolution model is an approximation of the original range equation, as shown in Fig. 2.6. In this figure, the reflection from the second surface is plotted as a function of time for the original model and the new model, which uses convolution. The differences in the waveforms are due to the fact that the convolution model computes one waveform corresponding to the power returned from a surface at one range in the target area rather than targets at multiple ranges. In this case, the waveform for the target at 1000 m would be the same whether or not the convolution model was used, because a range of 1000 m was used to compute the waveform used in the convolution model. The target at the range of 1005 m does not factor in the attenuation for the additional 5 m of range that the laser radiation must propagate over to return to the receiver. The difference between the waveforms is not great since the extra 5 m of range associated with the second surface is small compared to the 1000-m overall range to the target region. Now that the effect of two surfaces in the range gate has been demonstrated, we can look at the effect of adding more surfaces within the range gate. Figure 2.7 shows the function Tp(tkk) as a function of time from the first surface in the range gate.
LADAR Waveform Models
41
Figure 2.6 Example of the convolution model. The solid line is the pulse returned from the second surface computed from the convolution model in Example 2.3. The dashed line is the waveform computed from the original model in Example 2.2, which includes the range effects of the propagation to the second surface.
Figure 2.7 Surfaces S1 and S2 are located at positions within the range gate that correspond to the times shown on the time axis of this figure.
42
Chapter 2
The methodology for adding more surfaces is relatively simple. If the range gate has four distinct surfaces, the function Tp(tkk) needs to be modified to contain four distinct impulses. The locations of those impulses are measured in time from the first surface, which appears at the range corresponding to the value of the true range in the MATLAB model. Figure 2.8 shows both the function Tp(tkk) and a diagram of the surface in the sensor’s IFOV for a range gate with four surfaces. If the surface in the sensor’s IFOV is continuous and tilted so that it changes in range by 5 m, the target profile takes on a rectangular shape, as shown in Fig. 2.9. Figure 2.9 also shows the corresponding waveform for this sloped surface. The total width of the pulse returned from an extended target can be computed from the convolution width property. In general, the width of a discrete signal that is the result of a convolution is the sum of the widths of the functions involved in the convolution minus one sample.15 This property can be demonstrated graphically by convolving two rectangle functions of widths W1
Figure 2.8 Target profile for a target that has four distinct surfaces within a 5-m range difference. In this case the IFOV contains four surfaces that are in a stair-step pattern.
Figure 2.9 (a) The waveform produced by a tilted target with a range depth of 5 m for the LADAR parameters found in Example 2.1, with the corresponding target profile shown in (b).
and W2. In Fig. 2.10(a) and (b), the two rectangle functions are shown having widths of 9 samples and 5 samples, respectively. Figure 2.10(c) shows these waveforms convolved to produce a waveform with a width of 13 samples.
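The convolution width property can be checked in a single line (Python here, as a stand-in for the MATLAB used elsewhere in this chapter):

```python
import numpy as np

r1 = np.ones(9)                # rectangle of width 9 samples
r2 = np.ones(5)                # rectangle of width 5 samples
result = np.convolve(r1, r2)   # trapezoid of width 9 + 5 - 1 = 13 samples

print(len(result))             # -> 13
```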
Figure 2.10 Rectangle functions with widths (a) 9 samples and (b) 5 samples. (c) Result of the convolution with 13 samples.
2.4 LADAR System Clock Frequency and Ranging Error

The concept of the digital waveform was introduced in Sec. 2.3 to facilitate digital simulation of LADAR waveforms. Although the waveforms are continuous as they are transmitted to the target and reflected back, in most practical systems the waveforms are sampled digitally when measured by the detection circuit. The time between samples in the LADAR receiver is determined by the system clock frequency fclock. This component greatly influences the accuracy of range measurements, so the stability of the master clock is one of the most important attributes of a direct-detection LADAR system. The clock frequency is inversely proportional to the time between samples in the waveform, Δt; thus, if the clock frequency drifts, the range estimates will be adversely affected. The connection between clock frequency error and range error is demonstrated in Eqs. (2.24) and (2.25):
Δt + terror = 1/(fclock + ferror)    (2.24)

and

R + Rerror = n(Δt + terror)c/2.    (2.25)
In these equations, the target is assumed to return the laser pulse a total of n clock cycles, each of length Δt + terror, after the laser is fired. The estimated range R is then incorrect by the error Rerror due to the frequency error ferror.
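A short numerical exercise of Eqs. (2.24) and (2.25) (a Python sketch; the 10-km range, 500-MHz clock, and 50-kHz drift mirror the values used later in Problem 2-6) shows the scale of the resulting range error:

```python
c = 3e8                # speed of light (m/s)
R = 10e3               # true range (m)
f_clock = 500e6        # nominal clock frequency (Hz)
f_error = 50e3         # clock frequency drift (Hz)

n = round((2 * R / c) * f_clock)      # clock cycles counted for the round trip
dt_drift = 1.0 / (f_clock + f_error)  # actual sample period under drift, Eq. (2.24)

R_est = n * dt_drift * c / 2          # range inferred with the drifting clock, Eq. (2.25)
R_error = R_est - R
```

Here the drift shortens the apparent sample period, so the 33,333 counted cycles convert to a range roughly 1.1 m short of the true 10 km.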
2.5 Waveform Noise Models

Many noise sources serve to corrupt the waveforms measured by LADAR systems. In effect, anything that changes the shape of the waveform as it is detected is a source of noise. Our concern with preserving the shape of the waveform will become more evident in the next chapter when we attempt to extract the range from the return signal. At this point it is sufficient to say that any changes to the shape of the waveform will have an impact on the range accuracy of algorithms designed to estimate the range to the target. The types of noise discussed in Chapter 1 are present in the waveform on a sample-by-sample basis, so the noise can be introduced to the waveform by inserting it in each sample. Some types of noise are signal-dependent and cannot be easily modeled at the sample level, so these are best introduced to the waveform as a whole. Waveform noise that can be introduced at the single-sample level is discussed in Sec. 2.5.1. Another effect that influences the recorded shape of the waveform is improper sampling of the signal, otherwise known as aliasing. Section 2.5.2 discusses the sampling theorem as a criterion to avoid aliasing and demonstrates the effects of aliasing on the shape of the waveform.
2.5.1 Waveform noise sources introduced at the single-sample level
This section applies the noise formulas described in Sec. 1.5 to waveforms. The following sources of noise are included:
1. Photon counting noise
2. Laser speckle
3. Thermal noise
4. Background noise
As discussed in Chapter 1, photon counting noise and laser speckle are introduced into the waveform by observing the number of photoelectrons in each sample. The power in the return waveform is converted to units of photoelectrons via the following equation:
E[N(k)] = η λ Prec(kΔt) Δt / (hc).    (2.26)
Equation (2.26) is used to generate the expected number of photo-counts for waveform sample number k. Noise is added to the waveform by generating a negative binomial distributed random number with a mean equal to this number and a coherence parameter M dictated by the spatial and temporal coherence properties of the measurement. This random number replaces the value of the waveform, which is generated without noise.

Example 2.4
The following code was used to generate the waveform in Example 2.1 with units of watts, and the signal was converted into units of photons per waveform sample:

P_rec = tau_opt*(ap_diameter^2)*pi*I_receiver/4; % Received signal power
quantum_eff=.075; % Quantum efficiency of the detector
h = 6.626e-34; % Planck's constant
v = 3e8/1.55e-6; % Frequency of light with a 1.55-micrometer wavelength
N = P_rec*deltat*quantum_eff/(h*v);
By drawing on the code used in Chapter 1 to generate a single random number with a negative binomial distribution and assuming that the measurement is fully coherent (M = 1), the following code can be used to generate a waveform that possesses negative binomial noise:
M=ones(size(N)); % Coherence parameter
x=rand(size(N));
N_speckle=icdf('nbin',x,M,M./(M+N)); % Noisy waveform due to speckle
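For reference, the same negative binomial draw can be sketched with NumPy in Python (an illustrative translation, not part of the text's MATLAB model). NumPy's parameterization negative_binomial(n, p) with n = M and p = M/(M + N) yields a mean of N and variance N + N²/M, matching the speckle statistics; the constant waveform N below is a placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)

N = np.full(100000, 50.0)   # placeholder mean photoelectron waveform
M = 1.0                     # coherence parameter (fully coherent speckle)

p = M / (M + N)                          # NumPy's success probability
N_speckle = rng.negative_binomial(M, p)  # noisy waveform: mean N, variance N + N**2/M
```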
Figure 2.11 shows the waveform from Example 2.1 in units of photons without noise. Figure 2.12 shows the same waveform after the addition of photon and laser speckle noise. The appropriate amount of thermal noise is generated by adding a normally distributed random variable with the appropriate standard deviation to every point in the waveform. The method for adding thermal noise to a waveform is given below:

kb=1.3806504*10^(-23); % Boltzmann's constant
T=300; % Circuit temperature in kelvin
C=1e-12; % Capacitance of detector circuit
Q_n_sq=kb*T*C/electron^2; % Variance of electron thermal noise
N_thermal=sqrt(Q_n_sq)*randn(size(N)); % Gaussian thermal noise generation
Figure 2.11 Waveform in Example 2.1 in units of photons as a function of time in seconds.
Figure 2.12 Result of adding laser speckle noise and photon noise with the waveform in Fig. 2.11 as the mean.
The background noise is added to waveforms by computing the background level using methods discussed in Secs. 1.5.4 and 1.6.1. The code introduced in Chapter 1 to simulate background noise can be used with the following modification:

N_back=poissrnd(N_b*ones(size(N))); % Background photons with noise added
The total waveform is then computed by adding the different sources of signal and noise to produce the data. In this case the code to produce the final data in units of photons would be

data=N_speckle+N_back+N_thermal;
The data variable is a waveform that has the same number of samples as the time vector t. It can be displayed by plotting it versus time using the plot command.

2.5.2 Sampling criteria and the effect of aliasing on waveforms

The Nyquist sampling theorem states that a signal can be reconstructed from its samples if it is sampled at a rate greater than twice the maximum frequency present in its Fourier transform. The maximum frequency present in the transform is the highest frequency for which the amplitude of the transform is nonzero. This implies that if one wishes to properly sample a pulse shape, the
Fourier transform of the shape should be computed so that the maximum frequency can be identified. As an example, the frequency content of a Gaussian pulse is computed to determine the Nyquist frequency. The frequency content of the Gaussian pulse is computed by taking the Fourier transform of the Gaussian pulse shape. In this case, the Fourier transform of a Gaussian is a Gaussian and was presented in Eq. (2.10), so the Fourier transform of the transmitted pulse in Eq. (2.13) is given by
∫−∞^∞ [Et/(w√(2π))] e^(−t²/2w²) e^(−j2πft) dt = Et e^(−2π²w²f²) = Et e^(−f²/[2(1/(4π²w²))]).    (2.27)
The resulting Gaussian has a standard deviation of 1/(2πw). Although there is no exact cutoff frequency, the Gaussian function can be considered to be nearly zero outside of three standard deviations from its center. No choice of cutoff frequency can completely eliminate aliasing; however, at three standard deviations, the Gaussian function nearly vanishes. Choosing this point as the effective cutoff frequency means fc = 3/(2πw) ≈ 1/(2w), so twice the cutoff frequency is approximately 1/w. This result implies the sampling period is w, or one standard deviation of the Gaussian pulse shape. We can examine the signal generated in Example 2.1 to demonstrate the effect of sampling on a LADAR waveform. In that example, the sample time was chosen to be one-tenth of the standard deviation of the pulse, which produced a waveform that was adequately sampled and unmistakably Gaussian in shape. Downsampling that signal by a factor of 10 results in a sample period equal to one standard deviation of the Gaussian. Figure 2.13 shows the waveforms produced by sampling the Gaussian shape with a sampling period of one standard deviation, as well as the oversampled waveform converted into units of photoelectrons from Example 2.1. Figure 2.13 shows that sampling the Gaussian at this rate produces a decidedly non-Gaussian shape if the samples are linearly interpolated for graphing purposes. However, if we perform an ideal interpolation on the sampled waveform, it becomes possible to recover the Gaussian shape. Ideal interpolation is a Fourier-domain operation on the data designed to increase the sample rate of the data without changing the content of its Fourier transform. In this case the DFT of the data is taken and zero-padded to produce a transform with a large number of added samples. The zero-padded signal is then inverse transformed to produce the interpolated signal. This can be observed mathematically by computing the frequency resolution of the DFT via Eq. (2.3) as
Figure 2.13 Plots of the aliased waveform sampled with a period equal to the standard deviation of the waveform, plus a waveform oversampled by a factor of 10. The Nyquist-sampled waveform seems to show a deviation from the Gaussian shape.
fo = 1/(N1Δt1),
where N1 and Δt1 are the number of samples and the sample period of the downsampled signal. When the transform-domain signal is zero-padded, enough zeros are added to change the number of samples to N2. The new sample period Δt2 obtained upon performing the inverse transform is given by
Δt2 = 1/(N2 fo).    (2.28)
In this case the zero-padding operation adds samples in the frequency domain but does not change the frequency resolution fo. Equation (2.28) is obtained by solving Eq. (2.3) for the sample period; increasing the number of samples decreases the sample period, which achieves the effect of interpolation. The effects of downsampling the waveform and then interpolating it can be accomplished in MATLAB via these commands:

Udata=N(1:10:max(size(N)));
Idata=interpft(Udata,max(size(N)));
The MATLAB syntax grabs every tenth sample from the waveform stored in the variable N to produce the variable Udata. The interpft function accomplishes the task of executing the DFT, zero-padding, and inverse transforming to produce the interpolated vector. Figure 2.14 plots the effect of interpolating the aliased waveform shown in Fig. 2.13. As shown in Fig. 2.14, the interpolated signal is clearly Gaussian in shape, even though it is reconstructed from a waveform that appears to be non-Gaussian. This method of interpolating LADAR data will be used in Chapter 3 to aid in the task of extracting range information from the data. Too much aliasing, however, is detrimental to the interpolation algorithm. This effect can be demonstrated by repeating the process of undersampling the signal generated in Example 2.1 and then interpolating, except that this time the data are sampled with a period equal to two standard deviations of the Gaussian pulse instead of one, which doubles the effect of aliasing. Figure 2.15 shows the undersampled data and the reconstructed waveforms plotted versus time. Although the interpolation algorithm can overcome some amount of aliasing, too much aliasing will produce errors in interpolation, as shown in the figure.
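The zero-pad-and-inverse-transform interpolation that interpft performs can be sketched in Python (an illustrative reimplementation, not MATLAB's own routine). Note that the interpolated signal passes exactly through the original samples:

```python
import numpy as np

def fourier_interp(x, n_out):
    # Ideal (band-limited) interpolation: zero-pad the DFT, then inverse transform.
    # Assumes len(x) is odd so there is no Nyquist bin to split.
    n_in = len(x)
    X = np.fft.fft(x)
    half = (n_in + 1) // 2               # number of nonnegative-frequency bins
    Y = np.zeros(n_out, dtype=complex)
    Y[:half] = X[:half]                  # copy positive frequencies
    Y[half - n_in:] = X[half - n_in:]    # copy negative frequencies to the end
    return np.real(np.fft.ifft(Y)) * (n_out / n_in)

# A Gaussian pulse sampled once per standard deviation (11 samples)
t = np.arange(-5, 6)
x = np.exp(-t**2 / 2)
y = fourier_interp(x, 110)               # ten times denser sampling
```

Here y[::10] reproduces x, and the intermediate samples trace out the smooth reconstruction.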
Figure 2.14 Plot of the interpolated version of the Gaussian waveform sampled at a rate equal to the inverse of the standard deviation of the shape. Although the sampled waveform appears to be non-Gaussian, the interpolation function is capable of reconstructing the Gaussian shape from the samples.
Figure 2.15 Both the aliased data with a sampling period of 2 standard deviations of the Gaussian pulse and the waveform interpolated from the data are plotted versus time. Clearly the interpolated signal deviates significantly from the Gaussian shape.
2.6 Problems

2-1
Consider a unit amplitude Gaussian function with a standard deviation of 1 second. Compute the Fourier transform of the function, which is itself a Gaussian shape. What is the standard deviation of this function?
2-2
Two Gaussian pulse shapes to be convolved possess standard deviations σ1 and σ2. Show that the convolution of these two Gaussian shapes also produces a Gaussian shape, and determine its standard deviation.
2-3
A rectangular pulse with a width of 1 ns is to be sampled. Compute the Fourier transform of this pulse shape, then examine the transform and define the lowest frequency for which the transform goes to zero as the cutoff frequency. Use this cutoff frequency to determine the sampling frequency. Does this seem like an appropriate sampling frequency? Why or why not?
2-4
A surface shown in Fig. 2.16 is illuminated with a LADAR pulse. Compute the range profile of this target as a function of time, assuming the target possesses 1 m2 of surface area.
Figure 2.16 Geometry of the target for Problem 2-4.
2-5
Given the geometry of a LADAR target shown in Fig. 2.17, compute the target profile as a function of time. The target has a surface area of 2 m2.
Figure 2.17 Geometry of the target for Problem 2-5.
2-6
A LADAR system is ranging a target that is 10 km distant. If the clock frequency is 500 MHz, how many clock cycles pass between the time when the laser is fired and when the pulse is received? If the clock frequency drifts so the actual frequency is 50 kHz higher than the nominal frequency, how many clock cycles are measured? What is the range error?
2-7
The LADAR system described in Example 2.1 is used to illuminate a target 100 m distant. If the speckle parameter is M = 10, the capacitance of the receiver circuit is 1 pF, and the background radiation is 1000 W/m2, what is the SNR of the waveform as a function of time? In this problem, choose the sample period to be one standard deviation of the Gaussian pulse shape.
2-8
Repeat Problem 2-7 using the target shown in Problem 2-6 as the object being illuminated. Assume the target is illuminated such that both the front and back surfaces are equally visible.
2-9
Repeat Problem 2-7 with a negative parabolic pulse shape. Use a width parameter of the pulse equal to 2 ns.
Chapter 3
Wave Propagation Models

This chapter discusses the fundamental laws governing the propagation of optical fields and uses them to help account for the spatial effects that occur as a laser beam propagates to a target and back to a LADAR receiver. Accounting for these spatial effects will improve the fidelity of the LADAR models introduced in Chapters 1 and 2. For example, in Chapter 1 a uniform illuminating beam was assumed for the development of the range equation. This assumption can be removed by using methods that account for the nonuniformity of the beam to simulate more realistic waveform data.

Spatial effects present in a LADAR system model will be accounted for by expanding upon the waveform model introduced in Chapter 2. Instead of modeling the return from the target as a single waveform, we will model the return as a collection of waveforms at different spatial coordinates in the focal plane of the LADAR receiver. This model will assume that although the illumination is nonuniform spatially, it can be approximated as being uniform over a small spatial extent consistent with the size of the samples in our spatial model. Thus, the single-waveform architecture introduced in Chapter 2 will be replaced with a 3D waveform model. This new model operates on the principle that each spatial location obeys the waveform model introduced in Chapter 2. The detector within the LADAR receiver will be modeled as a single element that integrates the spatially diverse waveforms present in the focal plane of the receiver optics. The 3D waveform architecture is not strictly necessary in cases where the LADAR receiver has a single detector, but it will facilitate the introduction of true 3D LADAR models in Chapter 5.

The sources for all of the spatial effects introduced in this chapter can be described by two distinct phenomena. The first arises from the fact that the illumination exiting the cavity of a laser transmitter is not spatially uniform.
This nonuniformity will be covered in Sec. 3.1. The second is due to diffraction. Diffraction is the process by which light deviates from its geometrically predicted behavior. Geometric raytracing serves as a first-order model to predict where beams of light will propagate. If light is treated strictly as a particle, geometric raytracing would adequately describe how light moves from one location to another. However, light can also be described as a wave, and because waves can propagate in ways that cannot be predicted simply by geometric
raytracing, the diffraction of waves must be taken into account to accurately model the propagation of light. For a general case, we can model an illuminating wave as having an amplitude distribution At(xm, yn) at any discrete point (xm, yn) and a phase of θ(xm, yn). In this case, m and n are integers that index which discrete point in the function is being referenced. If the radiation is monochromatic with frequency v in units of Hz, then the scalar field g in units of V/m can be described mathematically by16
g(xm, yn, t) = At(xm, yn) e^(jθ(xm, yn)) e^(j2πvt).    (3.1)
The treatment of wave propagation in this text is intended to be general enough to allow for the inclusion of a variety of optical effects. All optical effects modeled in this text will be approximated as phase screens and amplitude screens. The effect of a phase screen is to delay the light field passing through an optical device. The effect of a phase screen on a scalar field is illustrated mathematically by

g′(xm, yn, t) = g(xm, yn, t) e^(j2πvΔt(xm, yn)),    (3.2)
where g′ is the field exiting the optic, and g is the field entering it. The time delay Δt can be multiplied by the frequency of the light in units of inverse seconds and by 2π to produce the phase delay φ:
g′(xm, yn, t) = g(xm, yn, t) e^(jφ(xm, yn)).    (3.3)
An amplitude screen changes the amplitude of the field that passes through it at that point. In this case, the relationship between the field entering the amplitude screen and exiting it is given by
g′(xm, yn, t) = T(xm, yn) g(xm, yn, t),    (3.4)
where T represents the transmission of the screen at each point in the plane of the screen. Both the amplitude and phase screen models can be used to describe the effects of a wave passing through an optical element as well as the effect of a field reflecting off of an opaque surface. In the latter case, the direction of propagation is reversed, but the field undergoes both a time delay, which depends on the height of the surface at each point, and an amplitude adjustment, which depends on the surface reflectivity. This set of simple field transformations allows us to model the various effects that the light in a LADAR system undergoes as it propagates from the laser transmitter to the target and back.
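Since both screens act pointwise, applying them amounts to elementwise multiplications of the complex field array. A minimal Python sketch with an invented 4 × 4 field (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Complex scalar field entering the screens (arbitrary values for illustration)
g_in = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

phi = rng.uniform(0.0, 2.0 * np.pi, (4, 4))   # phase screen, Eq. (3.3)
T = rng.uniform(0.0, 1.0, (4, 4))             # amplitude screen, Eq. (3.4)

g_out = T * g_in * np.exp(1j * phi)           # field exiting both screens
```

The phase screen leaves the field magnitude unchanged while the amplitude screen scales it, which is easy to confirm by comparing np.abs(g_out) with T * np.abs(g_in).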
3.1 Rayleigh-Sommerfeld Propagation

The primary field transformation discussed in this chapter is the propagation of a field from a source plane to a distant receiver plane. The material presented in this chapter draws heavily from the general solution of the wave propagation problem known as the Rayleigh-Sommerfeld diffraction integral.16 The field g at the source plane is propagated to a distant plane to produce the field f via the following propagation integral:

f(wp, sq, t) = Σ_{m=1}^{N} Σ_{n=1}^{N} z g(xm, yn) e^(j2πv[t − R(xm, yn, wp, sq)/c]) / [jλR²(xm, yn, wp, sq)].    (3.5)
In this equation, R is the range from a point (xm, yn) in the source plane to a point (wp, sq) in the distant plane. The propagation attenuates the field by an amount inversely proportional to the range. The field is also multiplied by the cosine of the angle between the ray from (xm, yn, 0) to the point (wp, sq, z) and the normal to the source plane, a cosine that is equal to the distance between the planes z divided by the range. Finally, the field is modified by a phase delay equal to 2π times the range divided by the wavelength of the light. The phase-delayed and attenuated field at each point is summed to produce the field at the distant plane. The final parameter needed to determine the field propagation is the sample size in the distant plane. The sample size in the distant plane Δy should be chosen to meet the Nyquist sample criterion. The maximum spatial frequency fc of the field in the distant plane can be approximated by the following equation:16
fc = Ls/(2λz).    (3.6)
In this equation, Ls is the extent of the field in the source plane in units of meters in either the horizontal or vertical direction. The proper sampling of the field leads to the following choice for sample spacing:
Δy = λz/Ls.    (3.7)
The sample spacing can be different in the horizontal and vertical directions, depending on the extent of the source in those dimensions. The reciprocal relationship between the sample size in the distant plane and the extent of the source plane is mirrored by the relationship between the size of the distant plane and the sample size in the source plane. If the sample size in the source plane is Δx and the size of the receiver plane is Lr, the maximum allowable size of the receiver plane is given by
Lr = λz/Δx.    (3.8)
The parameter Lr can be thought of as the period of the digital signal in the distant plane. Attempts to compute the signal in the distant plane over a distance greater than Lr will produce multiple copies of the signal, because the signal is periodic with a period of Lr.
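Equations (3.6)-(3.8) fix the discretization of a propagation before any field values are computed. As a numerical check (a Python sketch; the 9-mm waist, 1.55-μm wavelength, and 10-km path anticipate the values used in Example 3.1 below), the distant-plane sample limit works out to about 0.406 m:

```python
import numpy as np

lam = 1.55e-6             # wavelength (m)
z = 10e3                  # propagation distance (m)
w_o = 9e-3                # Gaussian beam waist (m)

sigma = w_o / np.sqrt(2)  # standard deviation of the Gaussian amplitude
Ls = 6 * sigma            # source extent: three standard deviations on each side

fc = Ls / (2 * lam * z)   # Eq. (3.6): maximum spatial frequency in the distant plane
dy = lam * z / Ls         # Eq. (3.7): sample spacing in the distant plane
dx = sigma                # source-plane sample size (one standard deviation)
Lr = lam * z / dx         # Eq. (3.8): maximum receiver-plane extent
```

Note that dy = 1/(2 fc), i.e., Eq. (3.7) is just the Nyquist spacing for the frequency limit of Eq. (3.6).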
3.2 Free-Space Propagation

The first propagation encountered in the function of a LADAR system involves the projection of a beam from the laser transmitter to the target. The field produced by a laser cavity is generally modeled as a spatial Gaussian in two dimensions. In the general form, for a Gaussian beam at its point of origin,13 the field exiting the laser cavity gLC is described by the following equation:
gLC(xm, yn, t) = Ag e^(−(xm² + yn²)/ωo²) e^(j2πvt),    (3.9)
where ωo is the beam waist parameter that defines the width of the beam at its point of origin, and Ag is the amplitude of the Gaussian beam at its center. The shape of a symmetric Gaussian beam as it propagates is well understood; however, these formulations do not consider the possibility that the laser beam is directed through shaping optics or propagates through a turbulent atmosphere. To this end, we will use the Rayleigh-Sommerfeld diffraction formula in Eq. (3.5) to propagate the Gaussian beam to a distant plane. First, the sample size in both the distant plane and the source plane must be specified. By using the results obtained for the proper sampling of a Gaussian shape in Sec. 2.5.2, we can determine that the maximum sampling period in the horizontal and vertical directions for the 2D Gaussian shape is equal to the standard deviation of the Gaussian. This defines the sampling criterion in the source plane. The sampling criterion in the receiver plane is determined from Eq. (3.7). The extent of the Gaussian can be approximated from the three-standard-deviation point, so the total width of the Gaussian in each dimension is approximately six standard deviations. Substituting this result into Eq. (3.7) yields the following equation for the sample size in the receiver plane ΔGB when propagating a Gaussian beam of wavelength λ over a distance z:
ΔGB = √2 λz/(6ωo).    (3.10)
The following example demonstrates the use of the Rayleigh-Sommerfeld propagation technique for simulating Gaussian beam propagation.

Example 3.1

The goal of this example is to compute the pattern formed by a propagated field and compare it to theoretical predictions of the Gaussian spot size. A Gaussian beam with a beam waist of 9 mm and a wavelength of 1.55 μm propagates a distance of 10,000 m. The beam has a total of 1 J of energy. The width of the Gaussian beam is predicted to grow as a function of propagation distance.17 The following equation is used to compute the width parameter of the Gaussian beam as a function of propagation distance:
ω(z) = ωo √(1 + [λz/(πωo²)]²).    (3.11)
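Equation (3.11) is easy to evaluate numerically; the following Python lines (an aside to the MATLAB used in this example) reproduce the width values quoted in the text:

```python
import numpy as np

w_o = 9e-3        # beam waist (m)
lam = 1.55e-6     # wavelength (m)
z = 10e3          # propagation distance (m)

w_z = w_o * np.sqrt(1.0 + (lam * z / (np.pi * w_o**2))**2)  # Eq. (3.11)
sigma = w_z / np.sqrt(2.0)   # standard deviation of the propagated Gaussian
width = 6.0 * sigma          # approximate total width (six standard deviations)
```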
The beam width ω = 0.548 m for this particular example at a range of 10,000 m. The standard deviation of the Gaussian beam is the width divided by √2; therefore, the standard deviation in this example is 0.3876 m. The total width of the Gaussian beam is approximately six standard deviations, or approximately 2.33 m. The following MATLAB code can be used to create a Gaussian beam with the proper waist and sampling in an array that is 11 standard deviations wide:

w_o=.009; % beam waist in meters
dx=w_o/sqrt(2); % sample size: one standard deviation
stdevx=1;
stdevy=1;
sz=11;
mix=6;
miy=6;
beam=zeros(sz,sz);
for i=1:sz
  for j=1:sz
    beam(i,j)=(1/(2*pi*stdevx*stdevy))*exp(-((i-miy)^2)/(2*(stdevy^2)))*exp(-((j-mix)^2)/(2*(stdevx^2)));
  end
end
beam=beam/sqrt(sum(sum(beam.*beam))); % normalize to unit energy

The array stored in the variable beam in the code above is displayed in Fig. 3.1. This figure shows the distribution of the field prior to the application of the
propagation function. The beam was scaled so the sum of the beam squared is equal to 1, reflecting the fact that the beam carries 1 J of energy.

Figure 3.1 Field distribution of the Gaussian beam prior to propagation.

The next step is to propagate the beam using Eq. (3.5). The maximum sample size that will avoid aliasing is computed using Eq. (3.10) and is found to be 0.406 m. Any sample size in the distant plane that is smaller than this would be acceptable, so the sample size of the target can be set by the Nyquist criterion or be based on the sampling obtained from an external 3D target model. In this example, we define the target of the laser illumination to be a square, flat panel with each side being 2.5 m in length. To obtain an integer number of samples across this target, a sample size of 0.05 m is chosen, which is less than the maximum sample size. This produces a target that is 51 pixels across (going from 0 to 2.5 m in steps of 0.05 m). The following MATLAB code can be used to propagate the Gaussian beam to the 2.5-m-square target area:

dxx=0.05; % target-plane sample size in meters
Z=10000; % propagation distance in meters
lam=1.55e-6; % wavelength in meters
distant_array=zeros(51,51);
j=sqrt(-1);
for xx=1:51
  xxc=(xx-26)*dxx;
  for yy=1:51
    yyc=(yy-26)*dxx;
    for x=1:11
      for y=1:11
        xc=(x-6)*dx;
        yc=(y-6)*dx;
        R=sqrt(Z^2+(xc-xxc)^2+(yc-yyc)^2);
        distant_array(yy,xx)=distant_array(yy,xx)+dx*dx*Z*beam(y,x)*exp(2*pi*j*R/lam)/(j*lam*R^2);
      end
    end
  end
end
imagesc((1:51)*dxx,(1:51)*dxx,abs(distant_array))
norm_factor=sqrt(sum(sum(abs(distant_array).^2)));
distant_array=distant_array/norm_factor;
ylabel('METERS')
xlabel('METERS')

This code generates a Gaussian-shaped beam at the surface of the target shown in Fig. 3.2. The beam is normalized to unit energy so that it can be scaled by the amount of energy in the beam computed from the laser range equation. The size of the beam in the distant plane is theoretically predicted from Eq. (3.11) to be 2.33 m. A cross section of the pattern in Fig. 3.2 indicates that the Gaussian shape is 2.3 m in width. This cross section is plotted in Fig. 3.3.
Figure 3.2 Gaussian-shaped beam at the target surface in the distant plane.
Figure 3.3 A cross section of the Gaussian pattern in Fig. 3.2 shows that it is just over 2.3 m in width.
Now that a beam shape has been introduced into the model for the LADAR system, it becomes possible to account for spatial variations in the range equation. The range equation described in Chapter 1 assumed that the transmitted beam was uniform over some area. Example 3.1 demonstrates that transmitted beams often possess a nonuniform distribution of power in the area of the target. To facilitate a more realistic waveform model, a technique for simulating waveforms in the presence of nonuniform beams will be introduced next. In the context of the range equation, both the power detected by the receiver and the transmitted power become a function of position. The following equation is the power at the detector of the LADAR receiver reflected from a target normal to the propagation beam with a surface area of 1 m2 and a reflectance equal to 1:
Pdet(m, n, k) = ηo ηa² DR² Pt(m, n, k) / (4R²).    (3.12)
In this equation, (m, n) is a pair of integers that identifies which spatial sample is being indexed, the time index k tracks which sample in time is under consideration, and the 3D matrix Pt is the power transmitted to the target area as a function of position and time. In most cases the LADAR receiver views the target area by forming an image of it onto the detector through the use of a lens. The relationship between the sample size in the plane of the target illuminated by the transmit beam, Δx, and that in the detector plane is given by
Δdet = fl Δx / Z.    (3.13)
In this case, the focal length fl and the distance Z between the LADAR system and the target area geometrically predict the detector-plane sample size Δdet. The choice of how the target plane is sampled depends on two sets of sampling criteria. The first is the sample size required to adequately sample the intensity distribution of the transmitted beam. This maximum sample size for a Gaussian-shaped transmit beam can be computed via Eq. (3.10). The second sampling criterion arises from a specification of the spatial frequency content of the signal in the detector plane. Equation (3.6) states that the maximum spatial frequency of a field present in a distant plane is proportional to the physical size of the field in the source plane. The source of the field propagated to the detector plane is the field in the aperture of the LADAR receiver. Therefore, the aperture size can be used in Eq. (3.6) to compute the sample size needed in the detector plane, Δdet. This sample size can be used in Eq. (3.13) to compute the desired sample size in the target plane. The maximum sample sizes from these two different sampling criteria can then be compared; the criterion that generates the smaller sample size defines the maximum sample size in the plane of the target. Once the signal in the plane of the target is defined, Eq. (2.23) can be used to compute the waveforms received from the different parts of the target as a function of time. Equation (3.14) expands upon Eq. (2.23) to include waveforms generated from different points of the target area:

$P_{\text{det\_tot}}(m, n, t_k) = \sum_{kk=1}^{N_s} P_{\text{det}}(m, n, t_k - t_{kk})\, T_p(m, n, t_{kk}).$  (3.14)
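The sum in Eq. (3.14) is an ordinary discrete convolution in time, carried out independently at each target pixel. A small NumPy sketch (the chapter's own code is MATLAB; the array sizes, pulse shape, and impulse location here are invented for illustration):

```python
import numpy as np

Ns = 64                                  # time samples in the range gate
# Transmitted pulse history at each of 4x4 pixels: a Gaussian peaking at sample 10
P_det = np.zeros((4, 4, Ns))
P_det[:, :, :] = np.exp(-0.5 * ((np.arange(Ns) - 10) / 3.0) ** 2)
# Target profile: one reflectance-weighted impulse per pixel at sample 20
T_p = np.zeros((4, 4, Ns))
T_p[:, :, 20] = 0.1

# Eq. (3.14): per-pixel linear convolution of pulse and target profile
P_det_tot = np.zeros_like(P_det)
for m in range(4):
    for n in range(4):
        full = np.convolve(P_det[m, n], T_p[m, n])
        P_det_tot[m, n] = full[:Ns]      # keep samples inside the range gate
# Each pixel's return is the pulse delayed to sample 30 and scaled by 0.1.
```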
Thus, the target profile for each spatial sample of the target is convolved with the waveform from the LADAR transmitter to produce a signal in the detector plane that is a function of both space and time. The 3D matrix in the detector can be integrated spatially to produce a single waveform in time that includes the effects of a nonuniform illumination pattern in the target area as well as a spatially diverse target profile. Note that Eq. (3.14) does not include the effect of atmospheric turbulence or diffraction from the LADAR receiver optics; those effects will be included in Sec. 3.4. Example 3.2 demonstrates how the nonuniform illumination of a LADAR transmitter beam can alter the shape of the waveform measured by a single-detector LADAR receiver.

Example 3.2

In this example, a Gaussian beam is used to illuminate a target that possesses two distinct surfaces at different ranges that are normal to the propagation direction of the transmitted beam. The goal is to compute the shape of the waveform with the spatially distributed Gaussian transmitted beam and compare it to the shape that
Chapter 3
would be generated had the illuminating beam been spatially uniform. The transmitted beam has a wavelength of 1.55 μm and a waist of 1 mm at its origin (as in Example 3.1). The beam propagates 10,000 m to a target area that is 2.5 m in diameter. The target profile features a surface with a reflectance of 0.1 at a range of 10,000 m and an area of 1.5625 m² in the center of the target area; another surface with the same reflectance has an area of 4.6875 m² but lies at a range of 10,001.5 m. Figure 3.4 is an image of the target area showing the areas that correspond to the first and second surfaces within the target profile. The pulse shape is both temporally and spatially Gaussian with a standard deviation of 2 ns, and the laser produces 1 mJ per pulse. The receiver has a focal length of 1 m and a square detector that is 0.25 mm on a side, which samples the return waveform at a rate of 500 MHz. The aperture diameter of the receiver is 0.1 m. The target profile Tp is created by taking the range image shown in Fig. 3.4 and the target reflectivity and translating them into a waveform that will be convolved with the laser pulse returned from a target at a range of 10,000 m via Eq. (3.14). The following lines of MATLAB code create the target profile function in three dimensions:

Rmin=9990; % Minimum range in the range gate
minT=Rmin*2/3e8; % first time that the receiver will measure the return
Rmax=10010; % Maximum range in the range gate
Figure 3.4 Image of the target area showing two distinct areas. The bar on the right identifies the shades associated with the range of the target.
maxT=Rmax*2/3e8; % last time that the receiver will measure the return
deltat=Sigma_w; % Sample time in seconds
t=minT:deltat:maxT; % Range of times in the range gate
target_area=ones(sz,sz)*5; % Define the area of the target at 10,001.5 m
target_area(14:38,14:38)=zeros(25,25); % Define the area of the target at 10 km
rho_t=ones(sz,sz)*0.1; % Target reflectivity at each pixel
for xn=1:sz
    for ym=1:sz
        T_p(ym,xn,:)=zeros(size(t)); % create a range vector per pixel
        indxx=target_area(ym,xn)+1; % Locate the range vector index
        T_p(ym,xn,indxx)=rho_t(ym,xn)*dxx*dxx; % Assign a dirac based
        % on target reflectivity and area of the spatial sample
    end
end

Figure 3.5 plots the target profile at the surface 10,000 m from the LADAR in pixel (29,26) of the target area and the profile at the surface 10,001.5 m from the LADAR in pixel (9,6).
Figure 3.5 Plots of the target profile for a pixel on the surface at 10,000 m and a pixel on the surface at 10,001.5 m.
Much of the code for accomplishing this simulation is borrowed from the range equation calculations in Chapter 1. The following lines of code define the transmission of the atmosphere and optics, which are assumed to be equal to 1 in this simulation, as well as the focal length of the receiver optics, the aperture diameter of the receiver optics, and the pulse width of the Gaussian-shaped transmit pulse; the solid angle of the target reflection is equal to π sr due to an assumed Lambertian reflection from the target surface:

Sigma_w=2e-9; % Pulse standard deviation in units of seconds
tau_atm=1; % Atmospheric transmission
tau_opt=1; % Receiver optics transmission
reciever_focal=1; % Focal length of the LADAR receiver in meters
theta_r=pi; % Reflection solid angle for Lambertian targets
ap_diameter=.1; % Aperture diameter in units of meters

The final step involved in computing waveforms returned from the different areas of a target is to create the quantity Pt(m,n,k). This is the transmitted power that strikes the target surface as a function of position within the target (coordinate indices (m,n)) and time (indexed by k). It is therefore a 3D array that is a function of position in the target plane and range. The spatial distribution of the beam power is simulated using the code generated in Example 3.1, since all of the beam parameters of these two examples are identical. The MATLAB code below uses the results from the beam propagation in Example 3.1 to create the 3D array of laser pulse waveforms striking the target area at a range of 10,000 m. The 10,000-m range was chosen because it is the first surface encountered in the target. The waveforms at 10,000 m are then convolved with the target profile generated in this example to produce the laser pulses returning from the target area for every pixel in the detector array.
This code produces the 3D array of waveforms returning from the target area:

E_t=.001*abs(distant_array).^2; % 1 mJ pulse distributed by diffraction
P_t=zeros(sz,sz,max(size(t))); % Allocate memory for 3D pulse array
for tk=1:max(size(t)) % Visit each time in the range gate
    P_t(:,:,tk)=(E_t/(sqrt(2*pi)*Sigma_w))*exp(-((t(tk)-Z*2/3e8).^2)/(2*Sigma_w^2));
    % Images of the pulse at each range
end
I_target=tau_atm*P_t/(dxx*dxx); % Pulse intensity at the target
P_ref=I_target; % Reflected power from the target in units of watts
I_receiver=tau_atm*P_ref/(theta_r*Z^2); % Intensity at the aperture
P_rec=tau_opt*(ap_diameter^2)*pi*I_receiver/4; % Received signal power from a
% unit reflectance and area target at 10,000 meters
P_rec_tot=real(ifft(fft(P_rec,max(size(t)),3).*fft(T_p,max(size(t)),3),max(size(t)),3));
% Received signal power from every point in the target area at the correct
% range; the convolution between the target profile and the waveform array
% is carried out using the convolution property of the Fourier transform.

The LADAR receiver described in this example is not capable of measuring these waveforms independently, but instead integrates them at the detector surface. Figure 3.6 shows the range slices from the 3D array that would be imaged by a 3D LADAR receiver with square pixels that are 5 μm in size. The images shown in Fig. 3.6 are integrated spatially to produce a single value for each measurement time in the range gate. The results of this spatial integration are shown in Fig. 3.7. The final step in this example is to show what the returned waveform would look like if the target area were illuminated by a spatially uniform beam. The power distribution in the illumination beam is adjusted by changing the following line of code:

E_t=.001*ones(51,51)/51^2; % 1 mJ pulse distributed over the target area

Other parameters in this example, such as the target profile and waveform shape at each pixel on the target, are not changed. The change in the illumination pattern causes the second surface at the 10,001.5-m range to be illuminated by more energy than in the Gaussian beam case. The result is that the second surface becomes much more prominent in the overall waveform measured by the LADAR receiver.
Figure 3.8 shows the waveform generated by a uniform illumination pattern after spatially integrating the returns from each pixel in the target area. The waveform generated in the Gaussian illumination case is shown as well.
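The MATLAB line that forms P_rec_tot implements the per-pixel convolution of Eq. (3.14) with the FFT convolution property along the time dimension. A NumPy sketch of the same operation (illustrative sizes; note that the FFT product performs a circular convolution, exactly as in the original code):

```python
import numpy as np

sz, Nt = 8, 128
rng = np.random.default_rng(0)
P_rec = rng.random((sz, sz, Nt))       # received waveform at each pixel
T_p = np.zeros((sz, sz, Nt))
T_p[:, :, 5] = 1.0                     # target profile: unit impulse, 5-sample delay

# Convolution property of the Fourier transform, applied along the time axis
P_rec_tot = np.real(np.fft.ifft(np.fft.fft(P_rec, axis=2) *
                                np.fft.fft(T_p, axis=2), axis=2))
# A delay-only target profile just circularly shifts each waveform by 5 samples.
```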
Figure 3.6 Range images from the target shown in Fig. 3.4 illuminated by the beam shown in Fig. 3.2. The returns from the target at a range of 10,000 m are in the center of the FOV while the returns from the target surface at 10,001.5 m are in the ring around the center target. The ring is produced because the Gaussian beam shape illuminates an area that is primarily in the center of the target.
Figure 3.7 Waveform generated from target surfaces at 10,000 m and 10,001.5 m illuminated by a Gaussian-shaped beam pattern. The second surface at 10,001.5 m can be seen but is weakly illuminated.
Figure 3.8 Waveform measured by a LADAR receiver when the target is uniformly illuminated. The second surface, with a larger area than the first but illuminated equally, produces a larger return.
3.3 Atmospheric Turbulence Phase Screen Simulation

The simulation presented in Example 3.2 contains the effects of wavefront propagation on a beam from a laser transmitter to a target, but it does not model the diffraction effects on the return path or the effect of atmospheric turbulence on the beam as it propagates to the target. The two primary sources of diffraction effects on the signal as it propagates from the target to the receiver are atmospheric turbulence and the optics of the receiver itself. This section presents a model for computing the optical delay introduced by atmospheric turbulence. The model allows us to compute the variation in apparent range as the field in one plane, the source plane, propagates to a distant plane, the receiver plane. The variation in apparent range from any point in the source plane to any point in the receiver plane can be computed via the variation in the TOF. The TOF from the source to the receiver, tsr, when propagating through an atmosphere is computed via the following equation:
$t_{sr} = R / (c / \bar{n}_{sr}).$  (3.15)
In this equation, R is the range from the point from which the field propagates to the point where it arrives, c is the speed of light in a vacuum, and nsr is the average index of refraction in the medium along the path from the source point to the receiver point. The average index of refraction is computed by integrating the index of refraction of the atmosphere along a line from the point in the source plane to the receiver plane, and then dividing it by the length of the path. This computation is generated via the following integral:
$\bar{n}_{sr} = \frac{1}{R} \int_0^R n(z)\, dz,$  (3.16)
where the index of refraction along the path n is treated as a random variable with an unknown probability density function. If the path is sufficiently long and the index of refraction along the path is statistically independent from point to point on the path, the central limit theorem can be invoked to approximate the probability density of nsr as Gaussian with some mean and variance. If the average index of refraction is Gaussian, then the TOF is Gaussian as well. By rearranging Eq. (3.15), we find that the TOF is equal to the integral of the index of refraction along the path divided by the speed of light in a vacuum [13]:

$t_{sr} = \int_0^R n(z)\, dz \,/\, c.$  (3.17)
Turbulence introduces deviations in the TOF that in turn deviate the path the light travels as it propagates through the turbulence. The deviation in the TOF, Δt, can be computed by separating the mean of the index of refraction, nmean, from the deviation δn(z):

$t_{sr} = t_o + \Delta t = \int_0^R [n_{\text{mean}} + \delta n(z)]\, dz \,/\, c,$  (3.18)
where to is the TOF through the turbulence if the turbulence introduces no random errors in the path; to is computed by

$t_o = \int_0^R n_{\text{mean}}\, dz \,/\, c = n_{\text{mean}} R \,/\, c.$  (3.19)
By substituting this result into Eq. (3.18), we can solve for Δt:

$\Delta t = \int_0^R \delta n(z)\, dz \,/\, c.$  (3.20)
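Equations (3.16) through (3.20) can be verified numerically with a sketch like the following (Python rather than the book's MATLAB; the perturbed index-of-refraction profile is invented for illustration):

```python
import numpy as np

c = 3e8          # speed of light in vacuum [m/s]
R = 10_000.0     # path length [m]
z = np.linspace(0.0, R, 10_001)
dz = z[1] - z[0]
rng = np.random.default_rng(1)
n_mean = 1.0003
n = n_mean + 1e-8 * rng.standard_normal(z.size)   # n(z) = n_mean + dn(z)

def trapz(y):
    """Trapezoidal-rule integral on the uniform grid z."""
    return np.sum(0.5 * (y[:-1] + y[1:])) * dz

n_bar = trapz(n) / R                 # Eq. (3.16): path-averaged index
t_sr = trapz(n) / c                  # Eq. (3.17): total TOF
t_o = n_mean * R / c                 # Eq. (3.19): mean-index TOF
delta_t = trapz(n - n_mean) / c      # Eq. (3.20): TOF deviation
# t_sr equals t_o + delta_t, as required by Eq. (3.18).
```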
A field propagating from a source plane to a distant receiving plane experiences a phase delay proportional to the time needed to traverse the distance, as well as an attenuation that is proportional to the distance traveled. The Rayleigh–Sommerfeld diffraction equation can compute the field in the receiving plane f by delaying and attenuating all of the field points in the source plane g, and then additively combining their contributions:
$f(w_p, s_q, t) = \sum_{m=1}^{N} \sum_{n=1}^{N} \frac{g(x_m, y_n)\, z\, e^{\,j2\pi\nu[t - t_o(x_m, y_n, w_p, s_q) - \Delta t(x_m, y_n, w_p, s_q)]}}{j\lambda R^2(x_m, y_n, w_p, s_q)}.$  (3.21)
We will define the phase error φatm introduced by the perturbations in the index of refraction along any path between the source coordinates (xm, yn, 0) and the coordinates in the receiving plane (wp, sq, z) as

$\phi_{\text{atm}}(x_m, y_n, w_p, s_q) = 2\pi\nu\, \Delta t(x_m, y_n, w_p, s_q).$  (3.22)
The substitution of Eqs. (3.22) and (3.19) into Eq. (3.21) yields

$f(w_p, s_q, t) = \sum_{m=1}^{N} \sum_{n=1}^{N} \frac{g(x_m, y_n)\, z\, e^{\,j2\pi\nu[t - n_{\text{mean}} R(x_m, y_n, w_p, s_q)/c]}\, e^{-j\phi_{\text{atm}}(x_m, y_n, w_p, s_q)}}{j\lambda R^2(x_m, y_n, w_p, s_q)}.$  (3.23)
Propagating through the turbulence in three dimensions is numerically challenging and beyond the scope of this text. To simplify the effect of turbulence in this propagation model, the isoplanatic assumption is invoked. This assumption implies that the phase delay along the propagation path is independent of the angle through which it is viewed from the receiving plane. In this case the assumption implies that the phase delay introduced by the turbulence is independent of the point in the source plane from which it is computed, which removes any dependence of the phase error on the source-plane coordinates (xm, yn) and produces the following simplification of Eq. (3.23):

$f(w_p, s_q, t) = e^{-j\phi_{\text{atm}}(w_p, s_q)} \sum_{m=1}^{N} \sum_{n=1}^{N} \frac{g(x_m, y_n)\, z\, e^{\,j2\pi\nu[t - n_{\text{mean}} R(x_m, y_n, w_p, s_q)/c]}}{j\lambda R^2(x_m, y_n, w_p, s_q)}.$  (3.24)
The above result implies that atmospheric turbulence in the isoplanatic case simply multiplies the result of the Rayleigh–Sommerfeld propagation by a phase factor. The phase term introduced by the variations in atmospheric turbulence is a phase screen, as described by Eq. (3.1). To use the propagation model described in Eq. (3.24), we must be able to generate random realizations of atmospheric turbulence that possess the correct statistics. Many models have been put forward that can be used to describe the statistics of the phase φatm [13, 17, 18]. In this text the phase introduced by the atmosphere will be modeled as a tilted plane in units of radians with slopes in the horizontal and vertical directions of α and β waves, which are unitless quantities. The atmosphere introduces many higher-order phase aberrations into the wavefront; however, tilt is the strongest aberration and the one that most affects the performance of direct-detection LADAR. The tilt is multiplied by 2π divided by the diameter of the receiver's aperture D:
$\phi_{\text{atm}}(w_p, s_q) = \frac{2\pi\alpha}{D} w_p + \frac{2\pi\beta}{D} s_q.$  (3.25)
It is assumed that any LADAR measurement will be completed in a short enough time that the atmosphere does not have time to change while the pulse is traversing back and forth through the turbulence. This is generally true for all scenarios except those involving space-based LADAR systems that feature very long TOFs for the pulse. The process of generating a realization of atmospheric turbulence for the isoplanatic viewing case degenerates into the problem of generating a series of horizontal and vertical tilt parameters for each pulse fired by the LADAR system. In the simplest scenario, the laser pulses are sufficiently separated in time that the tilt is statistically uncorrelated and can be generated independently for each pulse. The tilt parameters are modeled as zero-mean Gaussian random variables with a variance in the horizontal (σα²) and vertical (σβ²) directions equal to [18]

$\sigma_\alpha^2 = \sigma_\beta^2 = 0.448 \left(\frac{D_r}{r_o}\right)^{5/3},$  (3.26)
where ro is Fried's seeing parameter [13], and Dr is the diameter of the receiver aperture. Fried's parameter has units of meters; a large value indicates weak turbulence, while a small value corresponds to strong turbulence. Thus, if Fried's parameter is large compared to the aperture diameter, the tilt variance equals a constant (0.448) times a ratio that is less than 1, and raising that ratio to the 5/3 power makes it smaller still. If Fried's parameter is less than the aperture diameter, the tilt variance equals the constant times a ratio that is greater than 1, which becomes larger when raised to the 5/3 power. The point at which the aperture diameter grows to equal or exceed Fried's parameter is generally considered the point at which the system's performance becomes affected by the turbulence. The tendency of the atmospheric tilt parameters to be statistically correlated in time is an important aspect of turbulence that must be computed to accurately simulate the temporal behavior of a LADAR system. In general, the tilt is computed from the atmospheric phase screen via the following calculation [19]:
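For pulses separated widely enough in time that the tilts are uncorrelated, Eq. (3.26) gives the variance of independent Gaussian tilt draws. A NumPy sketch using the aperture and seeing parameters that appear in Example 3.3 (Dr = 0.1 m, ro = 0.05 m):

```python
import numpy as np

D_r = 0.1           # receiver aperture diameter [m]
r_o = 0.05          # Fried's seeing parameter [m]

# Eq. (3.26): one-axis tilt variance in waves^2
var_tilt = 0.448 * (D_r / r_o) ** (5.0 / 3.0)

# Independent zero-mean Gaussian tilt draws, one per pulse
rng = np.random.default_rng(2)
n_pulses = 100_000
alpha = np.sqrt(var_tilt) * rng.standard_normal(n_pulses)
# The sample variance approaches 0.448 * 2**(5/3), about 1.42 waves^2.
```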
$\alpha(t_1) = \frac{D_r}{2\pi} \, \frac{\displaystyle\iint \phi_{\text{atm}}(w, s, t_1)\, w\, A(w, s)\, dw\, ds}{\displaystyle\iint A(w, s)\, w^2\, dw\, ds},$  (3.27)

where time t1 corresponds to the time at which the first pulse is passing through the turbulence, and A is the aperture's transmittance function (being 1 where the aperture is open and 0 where it is blocked). At the next time the tilt is computed
in the same way by substituting time t2 = t1 + Δt into Eq. (3.27). The atmospheric phase screen is assumed to evolve in time via Taylor's frozen flow hypothesis [19]. This hypothesis states that over short periods of time, the perturbations in the index of refraction do not change in time but are simply translated by the motion of the air in the path. It is also assumed that the path is long enough that any air motion in the direction of the path will not change the path average. The only translation of concern is the motion of the air transverse to the path, which makes the relationship between the atmospheric phase at times t1 and t2 equal to

$\phi_{\text{atm}}(w, s, t_1 + \Delta t) = \phi_{\text{atm}}(w - v_x \Delta t, s - v_y \Delta t, t_1).$  (3.28)
The effect of the wind is to translate the phase screen a distance in the horizontal and vertical directions equal to the wind velocities vx and vy multiplied by the time between laser pulse shots, Δt. Using this simple model to describe how the atmosphere evolves in time, the numerator of the tilt correlation can be computed using the following equation:

$E[\alpha(t_1)\alpha(t_2)] = \iiiint A(w_1, s_1)\, A(w_2, s_2)\, w_1 w_2\, E[\phi_{\text{atm}}(w_1, s_1)\, \phi_{\text{atm}}(w_2 - v_x \Delta t, s_2 - v_y \Delta t)]\, dw_1\, ds_1\, dw_2\, ds_2.$  (3.29)

This equation can be simplified by noting that the expected value of the atmospheric phase times a shifted version of itself is the phase correlation function Rφ. The tilt correlation will be defined throughout this chapter as Rα(Δt) = E[α(t1)α(t2)]:
$R_\alpha(\Delta t) = \iiiint A(w_1, s_1)\, A(w_2, s_2)\, w_1 w_2\, R_\phi(w_2 - w_1 - v_x \Delta t,\, s_2 - s_1 - v_y \Delta t)\, dw_1\, ds_1\, dw_2\, ds_2.$  (3.30)

Equation (3.30) can be further simplified by substituting in the terms Δx = w2 − w1 and Δy = s2 − s1, which results in
$R_\alpha(\Delta t) = \iiiint A(w_1, s_1)\, A(\Delta x + w_1, \Delta y + s_1)\, w_1 (\Delta x + w_1)\, R_\phi(\Delta x - v_x \Delta t,\, \Delta y - v_y \Delta t)\, d\Delta x\, d\Delta y\, dw_1\, ds_1.$  (3.31)

If we define the tilted pupil function P as the inner integral in Eq. (3.31), the final form of the tilt correlation function becomes
$R_\alpha(\Delta t) = \iint P(\Delta x, \Delta y)\, R_\phi(\Delta x - v_x \Delta t,\, \Delta y - v_y \Delta t)\, d\Delta x\, d\Delta y.$  (3.32)
The variance of the tilt is equal to Rα(0), and a method for computing it is found in Eq. (3.26). The tilt correlation for any time separation Δt is generally difficult to compute, because the phase correlation function is difficult to define at both large and small separations. An alternate means for determining the tilt correlation function uses the tilt structure function Dα, defined as [13]

$D_\alpha(\Delta t) = 2R_\alpha(0) - 2R_\alpha(\Delta t).$  (3.33)
The tilt structure function in integral form can be expressed by substituting Eq. (3.32) into Eq. (3.33):

$D_\alpha(\Delta t) = 2 \iint P(\Delta x, \Delta y)\, [R_\phi(\Delta x, \Delta y) - R_\phi(\Delta x - v_x \Delta t,\, \Delta y - v_y \Delta t)]\, d\Delta x\, d\Delta y.$  (3.34)
This structure function can be expressed in terms of the phase structure function as opposed to the phase correlation function. The phase structure function is defined in a way that is similar to the tilt structure function:

$D_\phi(\Delta x, \Delta y) = 2R_\phi(0, 0) - 2R_\phi(\Delta x, \Delta y).$  (3.35)
By adding and subtracting the phase variance inside the integral in Eq. (3.34), differences in the phase correlation function can be produced like those in the structure function in Eq. (3.35):

$D_\alpha(\Delta t) = 2 \iint P(\Delta x, \Delta y)\, [R_\phi(\Delta x, \Delta y) - R_\phi(0, 0) + R_\phi(0, 0) - R_\phi(\Delta x - v_x \Delta t,\, \Delta y - v_y \Delta t)]\, d\Delta x\, d\Delta y.$  (3.36)

By moving the factor of 2 inside the integral and pairing the phase variances with the appropriate phase correlations, the tilt structure function can be expressed in terms of differences in the phase structure function:
$D_\alpha(\Delta t) = \iint P(\Delta x, \Delta y)\, [D_\phi(\Delta x - v_x \Delta t,\, \Delta y - v_y \Delta t) - D_\phi(\Delta x, \Delta y)]\, d\Delta x\, d\Delta y,$  (3.37)

where P is a function only of the aperture function and is simple to compute. The phase structure function for Kolmogorov turbulence is [13]
$D_\phi(\Delta x, \Delta y) = 6.88 \left( \frac{\Delta x^2 + \Delta y^2}{r_o^2} \right)^{5/6}.$  (3.38)
The tilt structure function is difficult to compute analytically, but it can be calculated numerically. The tilt structure function in Eq. (3.37), together with Eqs. (3.33) and (3.26), allows Rα to be computed for different pulse repetition rates and wind velocities. If, after evaluating the tilt correlation function for a given LADAR system and atmospheric condition, the correlation value is large enough to be of concern, the tilt parameters are statistically correlated and cannot be generated independently. In this case, the atmospheric tilt parameters must be generated with the appropriate degree of correlation. The conditional probability of the tilt at time t1 + Δt given the tilt at time t1 is a Gaussian random variable with a mean equal to [19]

$E[\alpha_2 \,|\, \alpha_1] = \alpha_1 \frac{R_\alpha(\Delta t)}{\sigma_\alpha^2},$  (3.39)

while the conditional variance is given by

$E\left[ (\alpha_2 - E[\alpha_2 | \alpha_1])^2 \,\middle|\, \alpha_1 \right] = \frac{\sigma_\alpha^4 - R_\alpha^2(\Delta t)}{\sigma_\alpha^2}.$  (3.40)
These equations imply that a sequence of tilt parameters can be created by generating a zero-mean Gaussian random variable with a variance of σα² to represent the tilt parameter for the first pulse, and then generating subsequent Gaussian random variables with means computed via Eq. (3.39) and variances determined by Eq. (3.40).
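The recursion just described is a first-order autoregressive process. A Python sketch, with illustrative values standing in for the tilt variance and the correlation between adjacent pulses (Eqs. (3.39) and (3.40) supply the conditional mean and variance):

```python
import numpy as np

sigma2 = 1.4          # tilt variance sigma_alpha^2 [waves^2] (illustrative)
R_corr = 0.9          # tilt correlation R_alpha(dt) between pulses (illustrative)

rng = np.random.default_rng(4)
n = 10
alpha = np.empty(n)
# First pulse: zero-mean draw with variance sigma2
alpha[0] = np.sqrt(sigma2) * rng.standard_normal()
# Eq. (3.40): conditional variance for every later pulse
cond_var = (sigma2 ** 2 - R_corr ** 2) / sigma2
for k in range(1, n):
    # Eq. (3.39): conditional mean alpha[k-1] * R / sigma2
    alpha[k] = (alpha[k - 1] * R_corr / sigma2
                + np.sqrt(cond_var) * rng.standard_normal())
```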
3.4 LADAR System Point Spread Function

The point spread function (PSF) of a LADAR system is the spatial impulse response of the system. The PSF accounts for the diffraction effects of the optics and the atmospheric turbulence. In Sec. 3.3, the tilt component of the atmosphere was modeled to simulate the primary source of temporal fluctuations contributed by the atmosphere and produce shot-to-shot variation in the simulated LADAR data. This section will discuss the static diffraction effects of the optical system as well as the average diffraction effects of the atmospheric turbulence, excluding the effect of the tilt. The diffraction effects of the LADAR system and the atmosphere are assumed to produce an impulse response htot that is part of a linear shift-invariant system. This system processes the images of the target predicted by geometric optics, Pdet_tot, to produce the image Pdet_dif:
$P_{\text{det\_dif}}(m_1, n_1, t_k) = \sum_{m=1}^{N} \sum_{n=1}^{N} P_{\text{det\_tot}}(m, n, t_k)\, h_{\text{tot}}(m_1 - m, n_1 - n, t_k).$  (3.41)
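Equation (3.41) is a 2D "same"-size convolution applied to each range slice. A dependency-free Python sketch with a delta-function image and a stand-in box PSF (sizes and kernel invented for illustration):

```python
import numpy as np

def conv2d_same(img, h):
    """Direct 2D 'same' convolution, as in Eq. (3.41)."""
    N, M = img.shape
    K, L = h.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(N):
        for j in range(M):
            for a in range(K):
                for b in range(L):
                    ii, jj = i - (a - K // 2), j - (b - L // 2)
                    if 0 <= ii < N and 0 <= jj < M:
                        out[i, j] += img[ii, jj] * h[a, b]
    return out

P_det_tot = np.zeros((9, 9))
P_det_tot[4, 4] = 1.0                    # a single bright pixel
h_tot = np.full((3, 3), 1.0 / 9.0)       # normalized stand-in PSF
P_det_dif = conv2d_same(P_det_tot, h_tot)
# The point spreads into a 3x3 patch while the total power is conserved.
```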
The optical system focuses the light from the distantly illuminated target of the LADAR system onto the detector array. The impulse response of the optics is computed by conducting a propagation experiment using Eq. (3.5). A point source is placed at the target location, and the source field is propagated to the LADAR receiver aperture. The field is then modified by the focusing optics and propagated to the detector array. The focusing optics can be modeled as a phase screen tlens with the following form [17]:

$t_{\text{lens}}(w_p, s_q) = e^{-j\pi (w_p^2 + s_q^2)/(\lambda f_l)}.$  (3.42)
After the field from the point source is propagated to the receiver aperture, the field in the plane of the receiver aperture is multiplied by this phase screen and by the atmospheric tilt phase screen φatm. After the field at the aperture is modified by the phase screen of the lens, it is propagated to the detector plane. The squared magnitude of the field in the detector plane is the PSF of the system. This function is normalized so it sums to 1 times the transmission of the optical system. The average PSF of the atmosphere is modeled using the short-exposure transfer function. The short-exposure transfer function is derived by removing the tilt component of the atmosphere and computing the Fourier transform of the average shape of a point source when it is focused onto a detector after propagation through an atmosphere with a seeing parameter of ro. The short-exposure transfer function is the 2D Fourier transform of an image of a point source viewed through the turbulence, averaged over time. This is not what would be seen by a short-pulsed LADAR system; however, it describes the average effect of the higher-order phase aberrations. The short-exposure transfer function is computed using Kolmogorov statistics and takes the form [13]
$H_{\text{atm}}(f_x, f_y) = \exp\left\{ -3.44 \left( \frac{\lambda^2 f_l^2 (f_x^2 + f_y^2)}{r_o^2} \right)^{5/6} \left[ 1 - \left( \frac{\lambda^2 f_l^2 (f_x^2 + f_y^2)}{D_r^2} \right)^{1/6} \right] \right\}.$  (3.43)
In this equation, the spatial frequencies in two dimensions are parameterized by (fx, fy). The total transfer function Htot is formed by taking the Fourier transform of the optical PSF and multiplying it by the short-exposure transfer function [13]:

$H_{\text{tot}}(f_x, f_y) = H_{\text{atm}}(f_x, f_y)\, H_{\text{opt}}(f_x, f_y).$  (3.44)
The total PSF htot is computed by taking the inverse Fourier transform of Htot. The following example demonstrates the use of the total PSF.

Example 3.3

In this example, the LADAR system described in Example 3.2 illuminates the same target, but now atmospheric turbulence is present between the LADAR system and the target with a seeing parameter of ro = 5 cm. Diffraction effects from the receiver optics are also included. The addition of diffraction effects from the atmosphere and the optics creates a PSF for the system, which will be used via Eq. (3.41) to create the image of the diffracted power at the detector, Pdet_dif. Propagating a point source a distance of z = 10,000 m to the LADAR receiver creates the PSF. By setting g(xm, yn) = δ(xm, yn) and substituting into Eq. (3.5), the following expression is produced for the field at the aperture fa:

$f_a(w_p, s_q, t) = \frac{z\, e^{\,j2\pi\nu[t - R(w_p, s_q)/c]}}{j\lambda R^2(w_p, s_q)}.$
In this equation, R is computed by

$R(w_p, s_q) = \sqrt{w_p^2 + s_q^2 + z^2},$  (3.45)
where p and q = (−N/2, −N/2+1, …, 0, …, N/2); also, wp = pΔa and sq = qΔa, where Δa is the sample spacing in the aperture plane. The sample spacing in the aperture plane is chosen to be small enough that the phase difference between any two adjacent points in the aperture never exceeds π rad. The phase sampling is a concern because the phase appears in the propagation equation as a complex exponential. A phase change of more than π rad implies that the cosine and sine terms that comprise the complex exponential will undergo more than half a period of change between samples. Since the Nyquist sampling theorem dictates that all components of the signal should be sampled at a rate that allows for at least two samples per period, the phase change cannot be greater than π. The phase functions in the aperture plane comprise the atmospheric tilt as well as the lens phase function in Eq. (3.42). The phase is also made up of the range-dependent phase in the Rayleigh–Sommerfeld propagation equation. The tilt component is very low frequency and generally not difficult to sample properly. A single wave of tilt across the aperture introduces a phase change of 2π rad and therefore requires only two samples across the aperture in each dimension to sample adequately. The lens-dependent phase, being quadratic, requires more samples, since the phase changes at an ever-increasing rate toward the aperture edge. This quadratic phase is canceled by the range-dependent phase, because this cancellation is what brings the image into focus in the detector plane. This is demonstrated by applying the binomial approximation to Eq. (3.45):
$R(w_p, s_q) = \sqrt{w_p^2 + s_q^2 + z^2} \approx z \left[ 1 + \frac{w_p^2 + s_q^2}{2z^2} - \frac{(w_p^2 + s_q^2)^2}{8z^4} + \cdots \right].$  (3.46)

The second term in the above approximation is quadratic, with the same magnitude as the lens-dependent phase but the opposite sign. This cancellation removes the quadratic components from the sampling consideration. The next prime contributor to the phase, the spherical component, changes at a faster rate than the quadratic component, but its overall magnitude is reduced by the 1/8 factor. Most LADAR systems will not have apertures greater than 1 m in diameter, making the actual numbers being computed smaller still as they are raised to the 4th power. The phase change in the aperture plane due to the spherical term Δφ must be less than π:

$\frac{\pi [r^4 - (r - \Delta a)^4]}{4\lambda f^3} \le \pi.$  (3.47)

In this equation, Δa is the sample size in the aperture plane and is equal to 2r/N, where r is the radius of the receiver optic and N is the number of samples across the aperture; f is the focal length of the optical system. For a given choice of N, the phase change can be computed and tested to determine whether it meets the sampling criterion. For this example, N is chosen to be 50, and the phase change is approximately 0.15π rad, which satisfies the criterion. The following MATLAB code is used to compute the phase change:

lam=1.55e-6; % wavelength of the light
z=1; % propagation distance from entrance aperture to focal plane
N=50;
D=.1; % aperture diameter in units of meters
dx=D/N;
dy=dx;
delta_phase=((D/2)^4-(D/2-dx)^4)/(4*lam*z^3) % phase change in units of pi rad

The PSF is created by multiplying the field fa by both the lens phase and the wavefront tilt and propagating it to the detector plane. The field fa is computed in MATLAB via the following code:

x=-D/2:dx:D/2-dx;
xx_mat=ones(N,1)*x;
yy_mat=x'*ones(1,N);
z1=10000;
range=sqrt(z1^2+xx_mat.^2+yy_mat.^2);
Ap_field=exp(sqrt(-1)*2*pi*range/lam); % field at the aperture

Two components of the impulse response must be computed: the optical PSF with the tilt from the atmosphere, and the average impulse response created from all of the higher-order terms in the atmospheric phase error computed from Eq. (3.43). The random atmospheric tilt must be generated as an input to the simulation. The following code is used to generate atmospheric tilts for 10 laser pulses, assuming a wind speed of 10 m/s across the receiver aperture and a laser firing pulses at a rate of 10 pulses/s:

r1=N/2;
r2=0;
mi=floor(N/2)+1;
aperture=zeros(N,N);
for i=1:N
    for j=1:N
        dist=sqrt((i-mi)^2+(j-mi)^2);
        if(dist<=r1)
            if(dist>=r2)
                aperture(i,j)=1;
            end
        end
    end
end
r_o=.05; % seeing parameter in units of meters
windy=0; % wind velocity in the vertical direction in the aperture plane
windx=10; % wind velocity in the horizontal direction
deltat=0.1; % time between laser pulses
y_ac=(D*D/(4*pi*pi))*real(ifft2(abs(fft2(aperture.*yy_mat)).^2))/sum(sum(aperture.*yy_mat.*yy_mat));
phase_structure=3.44*(((yy_mat+windy*deltat).^2+(xx_mat+windx*deltat).^2).^(5/6)-((xx_mat).^2+yy_mat.^2).^(5/6))/r_o^(5/3);
tilt_structure=2*sum(sum(y_ac.*fftshift(phase_structure)));
tilt_correlationy=0.448*(D/r_o)^(5/3)-tilt_structure/2
if(tilt_correlationy<0)
    tilt_correlationy=0;
end
tilt_stdy=sqrt(((0.448*(D/r_o)^(5/3))^2-tilt_correlationy^2)/(0.448*(D/r_o)^(5/3)));
x_ac=(D*D/(4*pi*pi))*real(ifft2(abs(fft2(aperture.*xx_mat)).^2))/sum(sum(aperture.*xx_mat.*xx_mat));
phase_structure=3.44*(((xx_mat+windx*deltat).^2+(yy_mat+windy*deltat).^2).^(5/6)-((xx_mat).^2+yy_mat.^2).^(5/6))/r_o^(5/3);
tilt_structure=2*sum(sum(x_ac.*fftshift(phase_structure)));
tilt_correlationx=0.448*(D/r_o)^(5/3)-tilt_structure/2
if(tilt_correlationx<0)
    tilt_correlationx=0;
end
tilt_stdx=sqrt(((0.448*(D/r_o)^(5/3))^2-tilt_correlationx^2)/(0.448*(D/r_o)^(5/3)));
tiltx(1)=sqrt(0.448*(D/r_o)^(5/3))*randn;
tilty(1)=sqrt(0.448*(D/r_o)^(5/3))*randn;
N_frames=10;
for its=2:N_frames
    tiltx(its)=tiltx(its-1)*tilt_correlationx/(0.448*(D/r_o)^(5/3))+tilt_stdx*randn;
    tilty(its)=tilty(its-1)*tilt_correlationy/(0.448*(D/r_o)^(5/3))+tilt_stdy*randn;
end

Figure 3.9 shows the random tilt parameters generated for the above example. With the tilt computed, the optical PSF for the 10 frames is computed using the following code:

N2=51;
receiver_array=zeros(N2,N2,N_frames);
lens_phase=-pi*(xx_mat.^2+yy_mat.^2)/(f*lam);
Wave Propagation Models
source_array=aperture.*exp(j.*lens_phase);
for its=1:N_frames
    Z=1;
    its
    source_array_tilt=source_array.*exp(2*pi*sqrt(-1)*(tiltx(its)*xx_mat/D+tilty(its)*yy_mat/D));
    for xx=1:N2
        xxc=(xx-ceil(N2/2))*dxx;
        for yy=1:N2
            yyc=(yy-ceil(N2/2))*dyy;
            R=(Z^2+(xx_mat-xxc).^2+(yy_mat-yyc).^2).^(0.5);
            receiver_array(yy,xx,its)=sum(sum(dx*dy*source_array_tilt.*exp(2*pi*j.*R./lam)))./(lam*j*Z);
        end
    end
end

The average impulse response of the atmosphere is computed via the MATLAB function make_short_otf. The following MATLAB code uses this function to compute the transfer function, which contains the effects of both the atmosphere and the optical system.
Figure 3.9 Plot of the atmospheric tilt as a function of time in the horizontal and vertical directions, with a wind velocity of 10 m/s in the horizontal direction across the aperture.
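The tilt sequence plotted in Fig. 3.9 is a first-order Gauss-Markov (AR(1)) process. For readers working outside MATLAB, here is a minimal Python sketch of the same recursion (NumPy assumed); the aperture diameter, seeing parameter, and correlation value are illustrative placeholders, not values computed in the text:

```python
import numpy as np

def generate_tilt_sequence(n_pulses, tilt_var, tilt_corr, rng):
    """First-order Gauss-Markov tilt sequence.

    tilt_var  -- one-axis tilt variance, 0.488*(D/r_o)**(5/3)
    tilt_corr -- correlation of tilts one pulse apart (0 <= tilt_corr <= tilt_var)
    """
    rho = tilt_corr / tilt_var                                   # AR(1) coefficient
    cond_std = np.sqrt((tilt_var**2 - tilt_corr**2) / tilt_var)  # conditional std
    tilt = np.empty(n_pulses)
    tilt[0] = np.sqrt(tilt_var) * rng.standard_normal()
    for k in range(1, n_pulses):
        tilt[k] = rho * tilt[k - 1] + cond_std * rng.standard_normal()
    return tilt

# illustrative (hypothetical) values: 10-cm aperture, 5-cm seeing
D_ap, r_o = 0.10, 0.05
tilt_var = 0.488 * (D_ap / r_o) ** (5 / 3)
tilt = generate_tilt_sequence(10, tilt_var, 0.5 * tilt_var, np.random.default_rng(0))
```

When tilt_corr equals tilt_var, the sequence is perfectly correlated (constant); when it is zero, successive tilts are statistically independent.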
avg_otf=make_short_otf(D/2,dx,N2,r_o);
for its=1:N_frames
    psf=abs(receiver_array(:,:,its)).^2;
    psf=psf/sum(sum(psf));
    tot_otf(:,:,its)=fftshift(avg_otf).*(fft2(fftshift(psf)));
end

Next, the total optical transfer function (OTF), which is the 2D Fourier transform of the PSF, is computed for each pulse. Figure 3.10 shows the impulse responses associated with the average OTFs for pulse numbers 1 and 8. The OTFs for each pulse can be used to filter the images obtained from Example 3.2, shown in Fig. 3.6. The following code executes the convolution required for a given pulse:

Pulse=1;
for indx=1:max(size(t))
    P_rec_diff(:,:,indx)=conv2(P_rec_tot(:,:,indx),fftshift(real(ifft2(tot_otf(:,:,Pulse)))),'same');
end

Figure 3.11 shows two images of the LADAR return taken 20 ns apart. Because the tilt is not large enough to move the target out of the FOV, these diffraction effects will not change the waveform produced by the detector.
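Multiplying an image spectrum by the OTF, as done above, is equivalent to convolving the image with the PSF. A generic Python illustration of that equivalence (NumPy assumed; the delta-function PSFs are made up purely for the check):

```python
import numpy as np

def apply_otf(image, psf):
    """Filter an image by a centered PSF via its OTF (circular convolution)."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))  # move the PSF center to the origin
    return np.real(np.fft.ifft2(np.fft.fft2(image) * otf))

n = 8
image = np.arange(n * n, dtype=float).reshape(n, n)

psf_delta = np.zeros((n, n))
psf_delta[n // 2, n // 2] = 1.0   # ideal (delta) PSF: filtering is the identity
out = apply_otf(image, psf_delta)
```

A PSF that is a shifted delta simply translates the image circularly, the discrete analog of tilt displacing the return on the detector.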
Figure 3.10 Images of the total PSFs for the first and eighth pulses, respectively, fired by a LADAR system. The atmospheric tilt is the only component that changes in time.
Figure 3.11 Two images of a beam returning from the target for the first pulse taken 20 ns apart. In the first image, the beam is returning from the front surface, and in the second image, the beam is returning from the back surface of the step target.
3.5 Problems

3-1
Compute the size of a Gaussian beam as it propagates 1000 m to a distant target if the beam waist is 1 cm at the laser transmitter.
3-2
How large should the beam waist of a Gaussian transmit beam be to cover a 1-m-diameter circular target (the Gaussian beam at the target should be half the size of the target) if the propagation distance is 100 m?
3-3
Revisit Example 3.2 with a target that is half the size of the original target at 10,000 m. Plot the detected waveform and find the relative heights of the peaks generated by the front and back surfaces.
3-4
Compute the tilt variance for a LADAR receiver with a 1-m aperture diameter viewing through an atmosphere with a seeing parameter of 10 cm. The laser light has a wavelength of 1.06 μm.
3-5
Compute the conditional tilt variance for the case where a LADAR receiver with a 10-cm aperture diameter is viewing an object through turbulence with a seeing parameter of 5 cm. The wind velocity across the aperture is 1 m/s, and the wavelength of the light is 1.55 μm. The time between LADAR pulses is 0.1 s.
3-6
For the case described in Problem 3-5, compute the mean of the conditional tilt in the next pulse if the current tilt is equal to zero.
3-7
Propagate a point source a distance of 1000 m to a LADAR receiver aperture with a diameter of 10 cm and a focal length of 2 m. Form images of the point source in the presence of turbulence with a seeing parameter of 10 cm by simulating 100 images with uncorrelated tilt from image to image (assume the time between pulses is longer than the coherence time of the atmosphere). Average the images and compare them to the average impulse response of the system. Comment on the differences. Assume the laser light is 1.06 μm in wavelength.
3-8
Compute the waveform generated by the system simulated in Example 3.3 and compare it to the waveform generated in Example 3.2. Should the waveforms be the same or different? Why or why not?
Chapter 4
Detection and Estimation Theory Applied to LADAR Signal Detection

This chapter presents techniques for processing LADAR data to accomplish the tasks of detection and range estimation. Section 4.1 introduces the theory of Bayesian reasoning for making an optimal detection decision from a single photocount measurement. The decision process requires criteria for making the decision, which is discussed in Sec. 4.2. Section 4.3 covers methods of detecting targets from a collection of measurements or a waveform. Section 4.4 describes a method for comparing the performance of different LADAR target detectors known as the receiver operating characteristic. Finally, Sec. 4.5 discusses range estimation algorithms.
4.1 Simple Binary Hypothesis Testing

Many LADAR applications involve the detection of a target in air or space. In these cases, the laser pulse is transmitted upward into the sky and the receiver waits for a signal to return to determine if a target is present. This scenario is known as the simple binary hypothesis problem, because at any time there are two possible conclusions from the signal measured by the receiver. The first possibility is that a target is present, and the second possibility is that no target is present. If a signal is present, it may not be detectable, given the noise that is also present in the measurement. The probability mass function (PMF) of the number of photoelectrons in the measurement D is denoted as P(D|H1), where H1 denotes hypothesis number one, in which the target is present. When the signal in its raw form (no data processing) is present and amplified by an APD, the noise will usually be dominated by laser speckle and photon noise. In this case, the PMF will have a negative binomial form, as shown in Eq. (4.1):
\[ P(D_s) = \frac{\Gamma(M+D_s)}{\Gamma(M)\,\Gamma(D_s+1)} \left(1+\frac{S}{M}\right)^{-M} \left(1+\frac{M}{S}\right)^{-D_s}. \tag{4.1} \]
In this equation, Γ(·) denotes the Gamma function, S is the mean of the photoelectron count value reflected from the target due to the laser pulse, and Ds is the random number of photoelectrons measured from the returned laser pulse. The use of this PMF requires knowledge of the coherence parameter M (introduced in Chapter 1) of the measurement as well as the expected number of photoelectrons. The laser light is not the only source of light hitting the target; in many practical cases, the sun also illuminates the target and provides B photoelectrons on average. The PMF of these photoelectrons will be Poisson since they arise from natural light. The random number of photons contributed by this background light is DB and has an associated PMF of
\[ P(D_B) = \frac{B^{D_B} e^{-B}}{D_B!}. \tag{4.2} \]
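Equations (4.1) and (4.2) correspond to the standard negative binomial and Poisson distributions, so they can be checked numerically. A Python sketch (SciPy assumed; S = 74 and B = 46 are the values used later in Example 4.1):

```python
import numpy as np
from scipy.stats import nbinom, poisson
from scipy.special import gammaln

M, S, B = 1.0, 74.0, 46.0

def nb_pmf(k, M, S):
    """Negative-binomial PMF of Eq. (4.1), evaluated in log space."""
    return np.exp(gammaln(M + k) - gammaln(M) - gammaln(k + 1)
                  - M * np.log1p(S / M) - k * np.log1p(M / S))

k = np.arange(0, 60)
# SciPy's nbinom with n = M and p = M/(M+S) has mean S and matches Eq. (4.1)
speckle = nbinom.pmf(k, M, M / (M + S))
background = poisson.pmf(k, B)   # Eq. (4.2)
```

The parameterization p = M/(M+S) makes the distribution mean exactly S, which is the property the text relies on.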
The measured signal D is the sum of the signal due to reflected light from the laser plus light from natural illumination sources such as the sun. The measured signal D = DS + DB. The conditional PMF, P(D|H1), is determined by computing the PMF that results from adding the discrete random variables DS and DB. To determine this PMF, the joint PMF of DS and DB must be determined. These two random variables are assumed to be statistically independent. This assumption is offered without proof; however, the source of randomness in these two measurements is the random arrival times of photons and the surface roughness of the reflecting object. Both variables are random because the actual arrival times of photons are random. These arrival times are thought to be statistically independent of one another.20 The laser speckle noise is due to constructive and destructive interference effects caused by the reflection of the laser pulse from a target with surface roughness on the order of or greater than the wavelength of the light. Since there is no reason to believe either of these random processes— surface roughness and random arrival times of the photons—should be related to one another, the assumption of statistical independence may be valid. The joint PMF P(DS,DB) can therefore be expressed as
\[ P(D_S, D_B) = \frac{\Gamma(M+D_S)}{\Gamma(M)\,\Gamma(D_S+1)} \left(1+\frac{S}{M}\right)^{-M} \left(1+\frac{M}{S}\right)^{-D_S} \frac{B^{D_B} e^{-B}}{D_B!}. \]
By substituting DB = D – DS and summing over all possible values for DS between 0 and D, the following expression is obtained for the PMF of D:
\[ P(D \mid H_1) = \frac{e^{-B}}{\Gamma(M)} \left(1+\frac{S}{M}\right)^{-M} \sum_{D_S=0}^{D} \frac{\Gamma(M+D_S)}{\Gamma(D_S+1)} \left(1+\frac{M}{S}\right)^{-D_S} \frac{B^{D-D_S}}{(D-D_S)!}. \tag{4.3} \]
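Since D = DS + DB with independent terms, Eq. (4.3) is the discrete convolution of the negative binomial and Poisson PMFs. A Python sketch verifying this numerically (SciPy assumed; parameter values are illustrative):

```python
import numpy as np
from scipy.stats import nbinom, poisson
from scipy.special import gammaln

M, S, B = 1.0, 74.0, 46.0

def p_d_given_h1(D, M, S, B):
    """Eq. (4.3): PMF of D = D_S + D_B, computed as the convolution sum."""
    Ds = np.arange(0, D + 1)
    log_terms = (gammaln(M + Ds) - gammaln(Ds + 1) - Ds * np.log1p(M / S)
                 + (D - Ds) * np.log(B) - gammaln(D - Ds + 1))
    return np.exp(-B - M * np.log1p(S / M) - gammaln(M)) * np.exp(log_terms).sum()

# the same PMF obtained by convolving the two SciPy PMFs directly
kmax = 400
conv = np.convolve(nbinom.pmf(np.arange(kmax), M, M / (M + S)),
                   poisson.pmf(np.arange(kmax), B))
```

Both routes give the same values, confirming the summation form of Eq. (4.3).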
This expression is not readily simplified but can be computed for any value of D. Thus, Eq. (4.3) serves as a valid expression for the conditional PMF of the measured data under hypothesis one. In the absence of a signal, the raw measurement D possesses a PMF that is dominated by background light, especially if a high APD gain is utilized with the detection circuit. The noise associated with the natural background is Poisson in nature, hence the PMF of the measurement P(D|Ho) is given by Eq. (4.2). In the case of the null hypothesis Ho, the parameter B is the expected number of photoelectrons contributed by the background and the dark current in the detection circuit during the measurement time. In general, this background value can be directly measured in the environment when the laser is not fired or the receiver aperture is blocked. Using the PMFs for the measurement under both hypotheses, it is possible to determine which hypothesis is more likely, conditioned on the raw measured data. Bayes' theorem allows the conditional probability that hypothesis H1 is true, given the measured data, to be computed via the following relationship:20
\[ P(H_1 \mid D) = \frac{P(D \mid H_1)\,P(H_1)}{P(D)}. \tag{4.4} \]
In this equation, P(H1) is the probability that the target is present, and P(H0) = 1 – P(H1) is the probability that the target is absent. These probabilities are usually unknown since they relate to the relative frequency that the targets occur versus the presence of no targets. Without foreknowledge about how often the target will be present, a blind detector will generally assume a target is present with the same probability that it is absent. If reliable target statistics are known, they can be used to improve the detector performance. The PMF P(D) is the unconditional PMF of the measured data. Although this PMF can be computed from the conditional probabilities P(D|H1) and P(D|H0) in conjunction with the prior probabilities P(H0) and P(H1), it will be shown in the coming analysis that knowledge of this PMF is unnecessary. The goal of our detector design effort is to produce a method that chooses the hypothesis (H0 or H1) that is more likely to be correct. Bayes’ theorem provides the means to accomplish this, because it allows for the computation of the probability of either hypothesis, given the measured data. Our detector will be designed to choose the hypothesis with the largest probability based on the given data. A mathematical expression of this decision process is given by
\[ P(H_1 \mid D) > P(H_0 \mid D) \quad \Rightarrow \ \text{say } H_1; \text{ otherwise say } H_0. \tag{4.5} \]
Equation (4.5) gives us a rule that allows us to decide which hypothesis should be selected using the given data. An equivalent expression uses the natural logarithm of both sides. The use of the natural logarithm is valid in this case because both sides of Eq. (4.5) are non-negative functions and the natural
logarithm is a monotonic function, so if y > x, then log(y) > log(x). If we use Bayes’ theorem to substitute into Eq. (4.5) for the conditional densities and take the natural logarithm, we obtain the expression
\[ \ln P(D \mid H_1) + \ln P(H_1) - \ln P(D) > \ln P(D \mid H_0) + \ln P(H_0) - \ln P(D) \quad \Rightarrow \ \text{say } H_1; \text{ otherwise say } H_0. \tag{4.6} \]

This expression can be simplified by adding the log of the unconditional PMF of the data to both sides of the equation. If the prior probabilities P(H1) = P(H0), they cancel out of both sides of Eq. (4.6), allowing it to be simplified into the likelihood ratio test (LRT):20
\[ \Lambda(D) = \frac{\ln P(D \mid H_1)}{\ln P(D \mid H_0)} < 1 \quad \Rightarrow \ \text{say } H_1; \text{ otherwise say } H_0. \tag{4.7} \]
The reversal of the > symbol shown in Eq. (4.7) relative to Eq. (4.6) is caused by the division of both sides of the equation by a negative number. As an example, consider the inequality –1 > –2. If both sides are divided by –2, we obtain ½ > 1, which is false unless the inequality is reversed. Because P(D|H0) is a PMF, it must always have a value between 0 and 1. The natural logarithm of a fractional number is always negative, so in the case of LADAR systems with discrete signals, Eq. (4.7) will always hold. To use the LRT, we must be able to define the PMF of the data when the target is both present and absent. In some applications it may be possible to define both P(D|H1) and P(D|H0) and therefore determine the optimal test. This is explored in the following example.

Example 4.1
A LADAR system is fired upward into the air at a target. When the LADAR pulse returns, it provides an average of S photoelectrons to the receiver. Before the LADAR pulse returns, B photoelectrons are measured from the reflected light of the target. In this example, approximately 74 photoelectrons are expected back from the target, 46 photoelectrons of background light are expected, and the thermal noise is expected to be 396 electrons. If the APD gain used to detect the photoelectrons is on the order of 400, the thermal noise becomes less than a single photoelectron, because each photon produces 400 photoelectrons upon detection. With an appropriately high APD gain, the thermal noise can be ignored in this example. If the dark current noise is ignored as well, the PMF of the measured data P(D|H1) is equal to Eq. (4.1), with S = 74 photoelectrons. The background PMF P(D|H0) is equal to Eq. (4.2), with B = 46 photoelectrons. Substituting these terms into Eq. (4.7) yields an expression for the first LRT derived in this text, Λ1:
\[ \Lambda_1(D) = \frac{\displaystyle -B - M\ln\!\left(1+\frac{S}{M}\right) - \ln\Gamma(M) + \ln\!\left[\sum_{D_S=0}^{D} \frac{\Gamma(M+D_S)}{\Gamma(D_S+1)} \left(1+\frac{M}{S}\right)^{-D_S} \frac{B^{D-D_S}}{(D-D_S)!}\right]}{D\ln B - B - \ln(D!)} < 1 \tag{4.8} \]

say H1; otherwise, say H0. Λ1 for this example can be computed as a function of the photocount of the measured data via the following MATLAB code:

M=1; % Coherence parameter
S=74; % Signal photons under hypothesis 1
B=46; % Background photons under hypothesis 0
for D=1:200
    den(D)=D*log(B)-B-gammaln(D+1);
    Ds=0:D;
    num1(D)=-B-M*log(1+S/M)-gammaln(M);
    % for M=1 the Gamma-ratio factor in Eq. (4.8) equals 1 and is omitted
    num2(D)=log(sum(exp(-Ds.*log(1+M/S)+(D-Ds).*log(B)-gammaln(D-Ds+1))));
    LRT(D)=(num1(D)+num2(D))./den(D);
end
plot(1:200,LRT,1:200,ones(1,200),':')
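The MATLAB loop above can be ported to Python for testing; this is an illustrative NumPy/SciPy sketch, not code from the text. The Γ(M+D_S)/Γ(D_S+1) factor of Eq. (4.8) is included explicitly here (it equals 1 for M = 1, which is why the MATLAB code omits it):

```python
import numpy as np
from scipy.special import gammaln, logsumexp

M, S, B = 1.0, 74.0, 46.0

def lrt1(D):
    """Lambda_1 of Eq. (4.8): ratio of the log-PMFs under H1 and H0."""
    Ds = np.arange(0, D + 1)
    log_num = (-B - M * np.log1p(S / M) - gammaln(M)
               + logsumexp(gammaln(M + Ds) - gammaln(Ds + 1)
                           - Ds * np.log1p(M / S)
                           + (D - Ds) * np.log(B) - gammaln(D - Ds + 1)))
    log_den = D * np.log(B) - B - gammaln(D + 1)
    return log_num / log_den

lrt = np.array([lrt1(D) for D in range(1, 201)])
```

Consistent with Fig. 4.1, the ratio falls below 1 (declare a target) for sufficiently large photocounts and stays above 1 near the background mean.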
The plot produced by this code in Fig. 4.1 shows that for any photocount value D greater than 58, the detector will say that a target is present. This example demonstrates the LRT’s use for making a decision based on the measured data, but it requires extensive knowledge of the signal’s PMF when the target is present. This is problematic since the mean number of signal photoelectrons S used to define the PMF of the signal when the target is present is only known if the range to the target and its reflectivity are known. These parameters are generally not known at the time the target is initially acquired, thus making the LRT impractical for most real-world applications. The detection method presented in this section is optimal, but it requires many nonlinear operations that do not lend themselves to real-time processing applications. An alternate technique approximates the noise in both the background and the signal as being Gaussian with the appropriate mean and variance. Using the Gaussian approximation, the PMF for the discrete data, assuming the target is present, is given by
\[ P(D \mid H_1) = \frac{1}{\sigma_1\sqrt{2\pi}}\, e^{-\frac{[D-(S+B)]^2}{2\sigma_1^2}}. \tag{4.9} \]
Figure 4.1 LRT plot as a function of the measured data in units of photoelectrons for Example 4.1. The dotted line is the threshold value under which the detector will decide the target is present. For values over the threshold, the detector will indicate the target is absent.
In this case, P(D|H1) is meant to be a PMF, so Eq. (4.9), being a continuous Gaussian PDF, is multiplied at each discrete point D by the area of a rectangle of width equal to 1 photocount. This produces the effect of concentrating probability mass at each discrete value for the random variable D. In Eq. (4.9), σ1 is the standard deviation of the waveform noise in units of photoelectrons when the laser pulse is present. These noise photoelectrons are often back-propagated through the electronics, detector, and optical system to model the noise equivalent photon (NEP) level. The NEP provides a ready value that can be compared with the level of the reflected photons incident on the aperture for estimating factors such as maximum range. That comparison is commonly referred to as the SNR. The signal photocount S can be used to estimate the variance of the laser speckle noise via Eq. (1.25). The speckle noise variance can then be added to the background photocount B, which is equal to the variance of the Poisson noise. The square root of the sum of these variances is equal to the standard deviation σ1 of the data. The PMF of the data when the target is absent approximates the Poisson PMF as being Gaussian. Using the fact that the mean is equal to the variance, it has the following form:
\[ P(D \mid H_0) = \frac{1}{\sqrt{2\pi B}}\, e^{-\frac{(D-B)^2}{2B}}. \tag{4.10} \]
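The speckle variance feeding σ1 can be cross-checked against SciPy's negative binomial, assuming the variance S + S²/M implied by the PMF of Eq. (4.1) (Eq. (1.25) itself is not reproduced in this chapter):

```python
import numpy as np
from scipy.stats import nbinom

M, S, B = 1.0, 74.0, 46.0

speckle_var = S + S**2 / M          # variance of the negative binomial of Eq. (4.1)
sigma1 = np.sqrt(speckle_var + B)   # std of the data under H1 (Poisson adds variance B)
```

For M = 1 and S = 74 the speckle term dominates, which is why the H1 density in this example is far wider than the background Poisson.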
The LRT, defined as Λ2, is produced by substituting Eq. (4.9) into the numerator of Eq. (4.7) and Eq. (4.10) into the denominator:

\[ \Lambda_2(D) = \frac{\displaystyle \ln\!\left(\frac{1}{\sigma_1\sqrt{2\pi}}\right) - \frac{(D-S)^2}{2\sigma_1^2}}{\displaystyle \ln\!\left(\frac{1}{\sqrt{2\pi B}}\right) - \frac{(D-B)^2}{2B}} < 1 \quad \text{say } H_1, \text{ otherwise say } H_0. \tag{4.11} \]
The LRT in Eq. (4.11) requires far fewer computations than the LRT shown in Eq. (4.8)—orders of magnitude fewer. Equation (4.11) uses only two natural-logarithm operations; the remaining computations are multiplications and additions. This is in stark contrast to the gamma-function calculations that are computed for every waveform data point in Example 4.1. For this reason, the Gaussian noise approximation of the data produces a detection algorithm that is much more readily computed in real time. If the variance of the data under hypothesis H1 is approximated as being equal to the variance under hypothesis H0, the LRT becomes Λ3:
\[ \Lambda_3(D) = \frac{\displaystyle \ln\!\left(\frac{1}{\sqrt{2\pi B}}\right) - \frac{(D-S)^2}{2B}}{\displaystyle \ln\!\left(\frac{1}{\sqrt{2\pi B}}\right) - \frac{(D-B)^2}{2B}} < 1 \quad \text{say } H_1, \text{ otherwise say } H_0. \tag{4.12} \]
Λ3 can be simplified by multiplying both sides of Eq. (4.12) by the denominator, which is negative, so the inequality reverses and becomes

\[ \ln\!\left(\frac{1}{\sqrt{2\pi B}}\right) - \frac{(D-S)^2}{2B} > \ln\!\left(\frac{1}{\sqrt{2\pi B}}\right) - \frac{(D-B)^2}{2B}. \]

In this special case, the natural logarithms can be eliminated from both sides of the equation. Multiplying both sides of the resulting inequality by −2B (again reversing the inequality) yields

\[ (D-S)^2 < (D-B)^2. \]

We can further reduce the inequality by expanding the squares on both sides and removing terms that are the same. After simplification, Λ3 becomes
\[ \Lambda_3(D):\quad D > \frac{S+B}{2} \quad \text{say } H_1, \text{ otherwise say } H_0. \tag{4.13} \]
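The reduction from Eq. (4.12) to the threshold of Eq. (4.13) can be verified numerically: the decision from the ratio of Gaussian log-densities matches D > (S+B)/2 at every integer photocount. A Python check (SciPy assumed; S and B from Example 4.1):

```python
import numpy as np
from scipy.stats import norm

S, B = 74.0, 46.0

def lrt3(D):
    """Lambda_3 of Eq. (4.12): equal-variance Gaussian log-density ratio."""
    num = norm.logpdf(D, loc=S, scale=np.sqrt(B))
    den = norm.logpdf(D, loc=B, scale=np.sqrt(B))
    return num / den

D = np.arange(0, 201)
decide_ratio = lrt3(D) < 1            # Eq. (4.12) decision: say H1
decide_threshold = D > (S + B) / 2    # Eq. (4.13) decision
```

Both decision rules agree everywhere, including at the boundary D = 60, where the ratio equals exactly 1 and neither rule declares a target.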
This is the classic LRT obtained when the signals under both hypotheses are Gaussian random variables with equal variances. In Sec. 4.2, methods for designing a detector are explored that use criteria other than the likelihood criteria used in this section.
4.2 Decision Criteria

In this section, performance metrics are introduced that can be used as criteria for designing detection systems. The first performance metric is the probability of detection Pd, which is the chance that a detector will find the target when it is present in the LADAR return.20 It is defined mathematically for discrete PMFs as
\[ P_d = \sum_{D \in D_{\text{target}}} P(D \mid H_1). \tag{4.14} \]
In this equation, Dtarget is the set of photocount values for which the LRT is less than 1. If the probability of detection is used to design the LADAR system, it serves to define the number of mean signal photoelectrons S needed, given the number of photoelectrons generated from the background light B. Example 4.1 demonstrated that, for given values S and B, the set of photoelectron values that caused the LRT to be less than 1 could be determined. In this way, Dtarget is determined as a function of these signal and background parameters. With Dtarget identified, Eq. (4.14) can be used to compute the probability of detection. This process is repeated for all possible values of S, given the set of ranges and reflectivities that the target can possess, to generate a plot of the probability of detection versus S. This is demonstrated in Example 4.2.

Example 4.2
The LADAR system defined in Example 4.1 is to achieve a probability of detection of 0.9 using the LRT to obtain an optimal detector design. At the given laser power, it was calculated that S = 74 photoelectrons while the background B = 46 photoelectrons. The probability of detection obtained in this case from the results of Example 4.1 and Eq. (4.14) is 0.84. In order to raise the probability of detection and still use the optimal detector, the laser power needs to be adjusted. To determine the required laser power, we allow S to vary from 74 photoelectrons up to as much as 200 photoelectrons. Figure 4.2 shows the probability of detection as a function of S. This figure shows that the desired probability of detection is achieved when S is approximately 141 photoelectrons. This implies that the laser power must be raised to roughly 190% of its original value to achieve the design criterion. The following MATLAB code was used to obtain these results:

M=1; % Coherence parameter
B=46; % Background photons under hypothesis 0
for S=74:200 % Search the set of signal photon levels
    for D=1:2000
        den(D)=D*log(B)-B-gammaln(D+1);
        Ds=0:D;
        num1(D)=-B-M*log(1+S/M)-gammaln(M);
        num2(D)=log(sum(exp(-Ds.*log(1+M/S)+(D-Ds).*log(B)-gammaln(D-Ds+1))));
        LRT(D)=(num1(D)+num2(D))./den(D);
    end
    xx=find(LRT<1);
    Dthresh=xx(1);
    DD=Dthresh:2000;
    Pd(S)=sum(exp(num1(DD)+num2(DD)));
end
plot(74:200,Pd(74:200))
Figure 4.2 Probability of detection plot of the optimal detector as a function of S, which is the mean number of signal photoelectrons returned from the LADAR pulse. This curve crosses the desired level of 0.9 at S = 141 photoelectrons. The curve has a jagged shape because the set of values for Dtarget changes as S changes.
Practical decision criteria cannot generally be determined based on target-dependent parameters such as the range to the target or its reflectivity. Although it might be possible to integrate Eq. (4.3) over all values of S, thus making the PMF independent of the unknown range and reflectivity of the target, this would make the LRT very difficult to compute. In this section, a practical detection technique based on the statistics of the background is introduced and explored. Instead of defining the receiver by its probability of detection, this technique specifies its probability of false alarm. The probability of false alarm Pfa is the chance that, if the target is not present, a target will be falsely detected.20 It is computed via the following equation:
\[ P_{fa} = \sum_{D \in D_{NB}} P(D \mid H_0). \tag{4.15} \]
In this equation, the set of photocount values DNB is chosen so that the right side of the equation is equal to the specified probability of false alarm Pfa. This design strategy involves summing the probability of the measured data, given that no target is present. This PMF is a function of only the average number of background photoelectrons B. Unlike the target-dependent parameter S, the background can be measured directly when the laser is not fired. This makes it possible to identify the PMF P(D|H0) so that the set DNB that produces the proper probability of false alarm can be computed. The following example demonstrates this concept.

Example 4.3
A target-detection scheme must be designed for the same LADAR system described in Example 4.1 using the probability of false alarm criterion. In this case, the desired Pfa < 0.001. The PMF of the measurement when no target is present, P(D|H0), is given by Eq. (4.2) with B = 46 photoelectrons. Again, the background signal can be measured directly by the LADAR system when no laser pulse is fired, making this measurement attainable in a wide variety of LADAR applications. With P(D|H0), it is possible to calculate Pfa directly by using Eq. (4.15). Equation (4.15) can be readily identified as one minus the cumulative distribution of a Poisson random variable, because it is a sum of the PMF from some photocount value out to infinity. The expression for Pfa becomes equal to one minus the sum of the Poisson PMF from zero to the photocount value DNB, as the sum over the entire PMF is equal to one. The following MATLAB code is used to compute Pfa as a function of DNB:

D_NB=1:100;
plot(D_NB,1-cdf('Poisson',D_NB,46))
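The same threshold search can be written with SciPy's Poisson survival function; a sketch using B = 46 from the example (the search range of 0 to 200 is an arbitrary choice):

```python
import numpy as np
from scipy.stats import poisson

B, pfa_target = 46.0, 1e-3

thresholds = np.arange(0, 200)
tail = poisson.sf(thresholds - 1, B)                  # P(D >= t | H0) for each threshold t
d_nb = int(thresholds[np.argmax(tail < pfa_target)])  # smallest threshold meeting the spec
```

Because the tail probability is monotonically decreasing in the threshold, the first index where it drops below the target false-alarm rate is the design threshold DNB.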
Figure 4.3 contains a plot of Pfa for values near the desired 0.001 design point. It shows that DNB = 68 will ensure that the probability of false alarm is less than 0.001. Using all of the information available from the target from Example 1.1, including the range to the target, laser power, and target reflectivity, we can use DNB to compute the probability of detection for the receiver we have specified. This is because DNB acts as a threshold value. Thus, any photocount value detected by the system that meets or exceeds DNB will be classified as a target measurement; any value below this threshold will be classified as a nontarget measurement. Using Eq. (4.14) and all values greater than or equal to DNB as the set Dtarget, we find that the probability of detection is equal to 0.7474. In summary, three criteria for receiver design have been discussed in this chapter. The first was the optimal receiver design, in which the set of photodetector values that corresponds to the target being present and the set that corresponds to the target being absent are determined via the inequality in Eq. (4.7). The second design criterion involved specifying the probability of detection. The third design criterion used the probability of false alarm. All of these methods were designed to operate on a single-sample basis, allowing decision-making based on a single photodetector measurement.
Figure 4.3 Plot of Pfa for values near the desired 0.001 design point in Example 4.3. It shows that a DNB = 68 will ensure that the probability of false alarm is less than 0.001.
4.3 Detection Methods Using Waveform Data

This section introduces waveform models and their associated joint probability mass functions for detection purposes. The general discrete waveform Dk is the number of photoelectrons per time sample tk, where k is an integer. The measured waveform data Dk have a mean of Nb with a PMF that is Poisson if the target is not present. If the target is present, then each sample of the data waveform has the PMF described in Eq. (4.3) with the parameter S = N(k) and the parameter B = Nb. In both cases, we assume that the waveform data at any two points are statistically independent of one another. This makes the PMF of the data in the nontarget case equal to
\[ P(D_k, k \in (1, N_s) \mid H_0) = \prod_{k=1}^{N_s} \frac{N_b^{D_k} e^{-N_b}}{D_k!}. \tag{4.16} \]
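Equation (4.16) is a product of independent Poisson terms, so detectors typically work with its logarithm. A small Python sketch (SciPy assumed; the five-sample waveform is made up for illustration):

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import gammaln

N_b = 46.0
Dk = np.array([40.0, 52.0, 47.0, 44.0, 50.0])   # hypothetical waveform samples

# log of Eq. (4.16): the joint PMF becomes a sum of per-sample Poisson log-PMFs
log_p_h0 = np.sum(Dk * np.log(N_b) - N_b - gammaln(Dk + 1))
```

Working in log space avoids the underflow that the raw product of many small PMF values would cause on long waveforms.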
In the case where the target is present, the joint PMF for all of the waveform data is equal to the following expression:
\[ P(D_k, k \in (1, N_s) \mid H_1) = \prod_{k=1}^{N_s} \frac{e^{-N_b}}{\Gamma(M)} \left(1+\frac{N(k)}{M}\right)^{-M} \sum_{D_S=0}^{D_k} \frac{\Gamma(M+D_S)}{\Gamma(D_S+1)} \left(1+\frac{M}{N(k)}\right)^{-D_S} \frac{N_b^{D_k-D_S}}{(D_k-D_S)!}. \tag{4.17} \]
The optimal detector will choose the hypothesis that has the higher probability given the measured waveform data. The LRT described in Eq. (4.7) can be used to decide which hypothesis is more likely to be true, given the waveform data Dk. The use of an LRT to perform detection using waveform data is demonstrated in the following example.

Example 4.4
In this exercise, we return to the code in Example 2.4 that was used to generate waveform data with a negative binomial distribution. We will also simulate background noise. In Example 2.4, the waveform returned from the laser pulse was generated with no background photocount. The code for simulating the background photocount is shown below:

S_irr=1000; % watts per square meter per micrometer
delta_lam=.001; % bandwidth of receiver in units of micrometers
Pbk=S_irr*delta_lam*dA*rho_t*ap_diameter^2/(4*R*R) % background power
i_dark=100e-9; % 100 nA of dark current
delta_t=0.2e-9; % sample period from Example 2.3
electron=1.602*10^(-19); % elementary charge of the electron in coulombs
N_dark=i_dark*delta_t/electron;
N_b=Pbk*delta_t*quantum_eff*tau_atm*tau_opt*Pulse_width/(h*v)+N_dark; % number of photoelectrons from the background
N_back=poissrnd(N_b); % background number of photons with noise
The background photocount is added to the waveform with speckle noise to produce the data measured by the simulated LADAR system. Figure 4.4 shows a set of typical LADAR data produced by adding the waveform with laser speckle noise to photocounts contributed by the background. Waveforms like those found in Fig. 4.4 are processed through the LRT described in Eq. (4.7), where both the background Nb and the mean of the laser-speckled waveform N(k) are assumed to be known. This non-noisy waveform is placed within the range gate at the proper range. In practice, a waveform without noise or an exact location within the range gate would not be available; however, Sec. 4.5 will show that these parameters may be estimated from the data. The LRT indicates the presence of a target if the ratio is less than 1. Figure 4.5 shows the plot of the LRT for 100 realizations of the waveform exemplified in Fig. 4.4. Figure 4.5 demonstrates that the LRT is less than 1 for every waveform generated. It can be concluded from these figures that the LRT correctly classifies the waveform as either possessing a returned pulse or not in each and every case. The code used to compute the LRT from the data is shown below:
Figure 4.4 Typical waveform data with background added from Example 4.4.
Figure 4.5 LRT plot for 100 noisy waveforms when the signal is present in Example 4.4. This shows that the LRT properly detected the signal, because the value is always < 1.
N_speckle=icdf('nbin',x,M,M./(M+N)); % noisy waveform due to speckle
N_back=poissrnd(N_b*ones(size(N_speckle))); % background photons with noise
data=N_speckle+N_back;
B=N_b; % background photons under hypothesis 0
sum_num=0;
sum_den=0;
for k=1:max(size(data)) % accumulate the joint log-PMFs over the waveform
    S=N(k);
    D=data(k);
    Ds=0:D;
    num1=-B-M*log(1+S/M)-gammaln(M);
    num2=log(sum(exp(-Ds.*log(1+M/S)+(D-Ds).*log(B)-gammaln(D-Ds+1))));
    den=D*log(B)-B-gammaln(D+1);
    sum_num=num1+num2+sum_num;
    sum_den=sum_den+den;
end
LRT(trial)=sum_num/sum_den;
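An equivalent Python implementation of this waveform LRT is sketched below (NumPy/SciPy assumed; the pulse shape and photocount levels are illustrative, not the values of Example 4.4):

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def log_p_h1(D, S, B, M=1.0):
    """log of Eq. (4.3) for one sample with signal mean S and background B.
    The Gamma-ratio factor equals 1 for M = 1, matching the MATLAB code."""
    Ds = np.arange(0, int(D) + 1)
    return (-B - M * np.log1p(S / M) - gammaln(M)
            + logsumexp(gammaln(M + Ds) - gammaln(Ds + 1)
                        - Ds * np.log1p(M / S)
                        + (D - Ds) * np.log(B) - gammaln(D - Ds + 1)))

def waveform_lrt(data, N, N_b, M=1.0):
    """Ratio of joint log-PMFs over the waveform; a value < 1 declares a target."""
    num = sum(log_p_h1(d, s, N_b, M) for d, s in zip(data, N))
    den = sum(d * np.log(N_b) - N_b - gammaln(d + 1) for d in data)
    return num / den

N = np.array([200.0, 200.0, 200.0])                 # hypothetical mean pulse samples N(k)
N_b = 50.0                                          # hypothetical background level
lrt_hit = waveform_lrt(np.full(3, 250.0), N, N_b)   # data near the H1 mean N(k)+N_b
lrt_miss = waveform_lrt(np.full(3, 50.0), N, N_b)   # data near the background mean
```

Data sitting near the signal-plus-background mean drives the ratio below 1, while background-only data keeps it above 1, mirroring Figs. 4.5 and 4.6.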
The final exercise in this example involves feeding sets of data to the LRT when the target is not present. In this case, we will input only the background
data when the target is not present by modifying the following single line of the code presented previously:

data=N_back;
With this modification, the data become a Poisson waveform of identically distributed random numbers that all have a mean equal to 123.5 photoelectrons. The LRT for 100 trials using these data in Fig. 4.6 shows that the LRT does a good job of identifying the lack of a LADAR pulse in the data. As with the single-sample detector discussed in Sec. 4.1, the LRT for detecting the presence of a target with waveform data can also be simplified by assuming Gaussian statistics. Following the method presented in Sec. 4.1, the joint probability of the waveform data samples under hypothesis H1, when the data are approximated as a Gaussian random vector with zero covariance between the elements, is given by
\[ P(D_k, k \in (1, N_s) \mid H_1) = \prod_{k=1}^{N_s} \frac{1}{\sigma(k)\sqrt{2\pi}}\, e^{-\frac{\{D_k-[N(k)+N_b]\}^2}{2\sigma(k)^2}}. \tag{4.18} \]
In this equation, σ(k) is the standard deviation of the waveform data under hypothesis H1 for each sample, which can be computed via Eq. (1.25).
Figure 4.6 LRT plot for 100 noisy waveforms of background data when the target is not present. The LRT is always >1, indicating the target is not present.
The PMF for the waveform data under hypothesis H0, assuming statistical independence between the samples, is given by

$$P(D_k, k \in (1, N_s) \mid H_0) = \prod_{k=1}^{N_s} \frac{1}{\sqrt{2\pi N_b}} \exp\left[-\frac{(D_k - N_b)^2}{2N_b}\right]. \tag{4.19}$$
The LRT for the Gaussian case is computed by substituting Eqs. (4.18) and (4.19) into Eq. (4.7). This LRT can be computed for the same sample waveform data used in Example 4.4. Figure 4.7 shows the LRT plotted for 100 sample waveforms under both hypotheses H1 and H0. These results show that the detector based on the Gaussian approximation makes more errors than the detector based on the actual PMF of the signal. The probability of detection and probability of false alarm are more difficult to compute for the waveform data, because a simple threshold cannot be set for the data’s photocount value. In this case, the probability of detection and the false alarm rate can be computed via a Monte Carlo simulation, as described in Sec. 4.4.
Figure 4.7 LRT plot for 100 trial waveforms when the signal is present (solid curve) and when only the background is present (broken curve). The detector correctly identified that the background waveforms did not contain a target every time, but the detector failed to detect a target one time in 100 trials.
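Equations (4.18) and (4.19) reduce the Gaussian detector to sums of Gaussian log-densities. The following minimal Python sketch makes the simplifying assumption that the H1 variance σ(k)² equals the mean count (a shot-noise-like model), rather than the full Eq. (1.25) expression; the function name is ours:

```python
import numpy as np

def gaussian_lrt(data, N, N_b):
    """Gaussian-approximation LRT: ratio of the waveform log-likelihood
    under H1 (cf. Eq. 4.18) to that under H0 (cf. Eq. 4.19)."""
    mean1 = N + N_b
    var1 = mean1  # assumption: variance ~ mean; the text uses Eq. (1.25)
    ln_p1 = np.sum(-0.5 * np.log(2 * np.pi * var1)
                   - (data - mean1) ** 2 / (2 * var1))
    ln_p0 = np.sum(-0.5 * np.log(2 * np.pi * N_b)
                   - (data - N_b) ** 2 / (2 * N_b))
    return ln_p1 / ln_p0
```

A ratio below 1 again indicates the presence of a target, mirroring the exact-PMF LRT.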
4.4 Receiver Operating Characteristics

In this section, methods for computing receiver operating characteristic (ROC) curves are explored for both single-sample and waveform detectors. The ROC curve is a plot of the probability of detection versus the probability of false alarm.20 These plots are used to compare the performance of one detector or detection scheme against another. In general, if one method’s ROC curve lies above another’s, then for the same probability of false alarm the first method has a higher probability of detection than the second. In cases where a simple threshold can be set for the received signal, the probability of detection is computed via Eq. (4.14), and the false alarm probability can be computed via Eq. (4.15). Both probabilities are a function of the threshold that separates the set of photodetector values corresponding to the presence of a target from the set corresponding to the background only. As this threshold is varied, the probability of detection and the probability of false alarm change, and a graph of the probability of detection versus the probability of false alarm can be constructed.

Example 4.5
In this example, we revisit the scenario described in Example 4.3. For this case, the background contains 46 photoelectrons on average, while the returning laser pulse provides 74 photoelectrons. The photoelectron detection threshold DNB in Eq. (4.15) is swept from 1 to 1000. For each threshold value, the probability of false alarm is computed via Eq. (4.15) using Eq. (4.2) for the PMF P(D|H0). The probability of detection is computed via Eq. (4.14) using the PMF P(D|H1) found in Eq. (4.3). These two quantities are then graphed versus each other to obtain the ROC curve. Figure 4.8 shows the ROC curve obtained from this example.

Figure 4.8 ROC curve obtained from Example 4.5 showing that a probability of detection of approximately 0.75 can be obtained with a very small false-alarm probability. The ROC curve shown here has a threshold going from a high value to a low value as the curve moves from left to right.

The formula for determining the probabilities of detection and false alarm for waveform data is not simple to express in closed form, because there is no simple threshold to set for the photocount value that defines how targets are delineated from the background. For this case, it is possible to compute the probabilities of detection and false alarm by generating a large collection of noisy waveforms that sometimes contain the target and sometimes do not. The probability of detection can be computed by counting the number of times the detector successfully detects the target divided by the total number of times it was present. The probability of false alarm can be computed in a similar fashion by counting the number of times a target is detected in a waveform that does not contain the target waveform. This process, although theoretically possible, is not practical, since very large amounts of simulated data may need to be generated to compute reliable estimates of the probabilities of detection and false alarm. An alternate method for determining the detection and false alarm probabilities is to treat the LRT itself as a random variable. The LRT described in Eq. (4.7) computes the ratio of the natural logarithm of P(D|H1) to the natural logarithm of P(D|H0). The numerator of the LRT is computed in the following equation by taking the natural logarithm of Eq. (4.17):
$$\ln[P(\mathbf{D} \mid H_1)] = \sum_{k=1}^{N_s} \left( -N_b + D_k \ln(N_b) + \ln\left\{ \frac{[1 + N(k)/M]^{-M}}{\Gamma(M)} \sum_{D_S=0}^{D_k} \frac{\Gamma(M + D_S)}{\Gamma(D_S + 1)} \left[1 + \frac{M}{N(k)}\right]^{-D_S} \frac{N_b^{-D_S}}{(D_k - D_S)!} \right\} \right).$$
If the number of samples in the waveform is large enough, this summation of a function of the random variables Dk will produce a random variable that has a Gaussian distribution via the central limit theorem.20 The denominator can be factored into this summation in the numerator, because it will be a single number that serves to divide every element of the summation in the numerator. By including the LRT denominator in this way, we can approximate the LRT itself as a Gaussian random variable that can be described by its mean and variance under either hypothesis H1 or H0. The LRT’s mean and variance under either hypothesis can be computed by simulating a limited set of data from which the sample mean and variance can be computed. Once the Gaussian statistics are computed, the probability of detection and false alarm can be computed by the following equations:20
$$P_d = \int_{-\infty}^{T} P(\Lambda \mid H_1)\, d\Lambda, \tag{4.20}$$

and

$$P_{fa} = \int_{-\infty}^{T} P(\Lambda \mid H_0)\, d\Lambda, \tag{4.21}$$

where Λ denotes the value of the LRT.
These equations integrate the PDF of the LRT from minus infinity to the threshold T, because the LRT must be below the threshold value for hypothesis H1 to be chosen [see Eq. (4.7)]. Example 4.6
In this example, we will compute the ROC curve for a case where the mean and variance of the LRT under hypothesis H1 are computed from data that have a sample mean equal to 0.98 and a sample standard deviation of 0.0125. When the signal is not present, the LRT under hypothesis H0 has a sample mean of 1.01 and a standard deviation of 0.0125. The ROC curve is computed using Eqs. (4.20) and (4.21) so that the probabilities of detection and false alarm can be computed as a function of the threshold value T. In this case, the threshold is varied between 0 and 2. The following MATLAB code is used to compute the probabilities of detection and false alarm as a function of the threshold:

mn1=.98; % Mean of the LRT under hypothesis 1
std1=.0125; % Standard deviation for hypothesis 1
mn2=1.01; % Mean of the LRT under hypothesis 0
std2=.0125; % Standard deviation for hypothesis 0
thresh=0:.00001:2;
Pd=cdf('norm',thresh,mn1,std1);
Pfa=cdf('norm',thresh,mn2,std2);
plot(Pfa,Pd)
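The same calculation can be sketched in Python with SciPy's normal CDF. This is a hedged translation of the MATLAB above, not code from the text:

```python
import numpy as np
from scipy.stats import norm

# Sketch of Example 4.6 using SciPy: treat the LRT as Gaussian under each
# hypothesis and evaluate Eqs. (4.20)-(4.21) as normal CDFs at threshold T
# (H1 is declared when the LRT falls below T).
mn1, std1 = 0.98, 0.0125   # sample mean/std of the LRT under H1
mn0, std0 = 1.01, 0.0125   # sample mean/std of the LRT under H0
thresh = np.linspace(0.0, 2.0, 20001)
Pd = norm.cdf(thresh, mn1, std1)    # Eq. (4.20)
Pfa = norm.cdf(thresh, mn0, std0)   # Eq. (4.21)
```

Because the H1 mean lies below the H0 mean with equal spreads, Pd exceeds Pfa at every threshold, which is what makes the ROC curve bow above the diagonal.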
Figure 4.9 shows a plot of the ROC curve obtained by plotting the probability of detection versus the probability of false alarm.
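For comparison with the Gaussian approximation of the LRT, the single-sample ROC of Example 4.5 can also be computed directly from the exact PMFs. The following Python sketch assumes a negative-binomial speckle model convolved with a Poisson background; the function name and the truncation limit d_max are our choices:

```python
import numpy as np
from scipy.stats import nbinom, poisson

def single_sample_roc(S, B, M, d_max=1000):
    """ROC for a single-sample detector (cf. Example 4.5), sketched with
    H0 ~ Poisson(B) and H1 ~ NB(M, M/(M+S)) + Poisson(B)."""
    D = np.arange(d_max + 1)
    p0 = poisson.pmf(D, B)
    p_sig = nbinom.pmf(D, M, M / (M + S))        # speckle photocounts
    p1 = np.convolve(p_sig, p0)[: d_max + 1]     # signal + background
    # P(D >= T) for thresholds T = 0 .. d_max
    pfa = 1.0 - np.concatenate(([0.0], np.cumsum(p0)[:-1]))
    pd = 1.0 - np.concatenate(([0.0], np.cumsum(p1)[:-1]))
    return pfa, pd

pfa, pd = single_sample_roc(S=74.0, B=46.0, M=1)
```

Plotting pd against pfa reproduces the shape of the curve in Fig. 4.8: the signal shifts the photocount distribution upward, so pd dominates pfa at every threshold.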
4.5 Range Estimation

This section will explore different methods for estimating the range to the target from waveform data. These methods include the following:

1. Peak estimator
2. Cross-correlation range estimator (matched filter)
3. Leading edge detector
Figure 4.9 ROC curve for waveform data obtained from a Gaussian approximation of the LRT in Example 4.6.
4.5.1 Peak estimator
The peak estimator is a range estimator that assigns the target position to correspond to the time in the returned waveform where the maximum value occurred. This operation can be carried out on either the raw data or on interpolated waveform data, thus allowing a finer estimate of the target range than would be possible using the time between digital samples of the signal. Chapter 2 discussed a method for interpolating signals that used the DFT with a zero-padding operation. This same method can be used to interpolate the waveform data recovered by a LADAR system. The peak estimator finds the maximum waveform value and takes the time corresponding to that maximum, multiplied by the speed of light and divided by two, as the target range. The following example demonstrates the use of the peak estimator on waveform data.

Example 4.7
In this example, waveform data generated in Example 4.4 is processed using the peak estimator. Subsample range estimation will be attempted by using the Fourier interpolation method introduced in Chapter 2. The following code is used to compute 100 separate noisy waveforms that can be used to estimate the average range and standard deviation of range estimates obtained from the peak estimator:
Sigma_w=2e-9; % Pulse standard deviation in units of seconds
Rmin=990; % Minimum range in the range gate
minT=Rmin*2/3e8; % First time that the receiver will measure the return
Rmax=1010; % Maximum range in the range gate
maxT=Rmax*2/3e8; % Last time that the receiver will measure the return
deltat=Sigma_w/10; % Nyquist sample time in seconds
t=minT:deltat:maxT; % Range of times in the range gate
R_vec=990:.01:1010;
NN=max(size(R_vec));
for trial=1:100
  x=rand(size(N));
  N_speckle=icdf('nbin',x,M,M./(M+N)); % Noisy waveform due to speckle
  N_back=poissrnd(N_b*ones(size(N_speckle))); % Background number of photons with noise
  data=N_speckle+N_back;
  idata=interpft(data,NN);
  Xx=find(idata==max(idata));
  Est_range(trial)=R_vec(Xx);
  trial % Display the trial number
end
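A Python analogue of the peak estimator is sketched below. The zero-padding of the DFT mirrors what MATLAB's interpft does; the helper name and argument list are ours:

```python
import numpy as np

def peak_range_estimate(data, r_min, r_max, n_interp):
    """Peak range estimator (sketch): Fourier-interpolate the waveform by
    zero-padding its DFT (cf. MATLAB interpft) and return the range bin
    where the interpolated maximum occurs."""
    n = len(data)
    spec = np.fft.fft(data)
    half = n // 2
    padded = np.zeros(n_interp, dtype=complex)
    padded[:half] = spec[:half]           # positive frequencies
    padded[-(n - half):] = spec[half:]    # negative frequencies
    idata = np.real(np.fft.ifft(padded)) * (n_interp / n)
    r_vec = np.linspace(r_min, r_max, n_interp, endpoint=False)
    return r_vec[int(np.argmax(idata))]
```

On a well-sampled, noiseless pulse this recovers the peak to within the interpolated grid spacing; noise degrades it in exactly the way quantified below.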
The mean of the range estimates from 100 trials is found to be equal to 999.96 m, and the standard deviation is equal to 0.347 m. Noise can affect the performance of this kind of estimator by causing other samples of the waveform to appear larger in amplitude than the sample that would have been the largest if noise were not present. The mean-squared error in range $\sigma_{PE}^2$ due to noise can be computed by the following expectation operation:

$$\sigma_{PE}^2 = \sum_{k=1}^{N_s} (r_k - r_{true})^2 P(r_k), \tag{4.22}$$
where P(rk) is the probability that range rk is selected. This probability can be computed for the peak estimator because it is the probability that sample k in the waveform has the largest value. If the noise in the waveform is statistically independent at every point, then this calculation can be accomplished pair-wise, meaning that the probability that sample number k is larger than sample number
q can be computed by itself. The total probability that sample k is larger than or equal to any other sample in the waveform is
$$P(r_k) = \sum_{D_q=0}^{\infty} P(D_k \geq D_q \mid D_q)\, P(D_q). \tag{4.23}$$
The use of the ≥ sign simplifies this expression but allows for the case where multiple sample values could theoretically be the peak value. This event should be fairly rare and should not skew the mean-square error calculation to any great degree. Equation (4.23) is simplest to compute in the case where the pulse shape produced by the laser is a Dirac delta function. In this case, P(rk) is computed for two cases: when the target is not present in sample k, and when the target is present in the sample. The probabilities in these two cases will be a function of the PMFs of the signal and the background. Example 4.8 demonstrates how these calculations are made.

Example 4.8

In this example, the target is illuminated such that 50 photoelectrons are returned in a pulse that is short enough to be considered a Dirac delta function in time. The background radiation and dark current together generate 25 photoelectrons per measurement time. The waveform contains 20 samples with the time between samples being 10/3 ns. The time corresponding to waveform sample number one is 1 μs. The target is 155 m from the LADAR system. The light returning from the target has a coherence parameter of M = 1. To compute the mean-squared range error when the peak estimator is used to estimate the range, two distinct probabilities must be computed. The first is the probability that the tenth sample of the waveform, which corresponds to the correct target range, has a greater intensity than all of the other samples. This probability is computed via Eq. (4.23) using the Poisson density for the background samples shown in Eq. (4.2) with B = 25. The conditional density $P(D_{10} \geq D_q \mid D_q)$ is computed via the following equation:

$$P(D_{10} \geq D_q \mid D_q) = \sum_{D_{10}=D_q}^{\infty} \sum_{D_S=0}^{D_{10}} \frac{\Gamma(M+D_S)}{\Gamma(D_S+1)\Gamma(M)} \left(1+\frac{S}{M}\right)^{-M} \left(1+\frac{M}{S}\right)^{-D_S} \frac{e^{-B} B^{D_{10}-D_S}}{(D_{10}-D_S)!}.$$
This equation can be recognized as the sum, over D10, of the PMF in Eq. (4.3), which describes the data when the signal is present and background radiation is added to the returning laser pulse. This double summation can be computed more easily by realizing that the sum from D10 = Dq to infinity is equal to one minus the sum over photocount values from zero up to Dq − 1:
$$P(D_{10} \geq D_q \mid D_q) = 1 - \sum_{D_{10}=0}^{D_q-1} \sum_{D_S=0}^{D_{10}} \frac{\Gamma(M+D_S)}{\Gamma(D_S+1)\Gamma(M)} \left(1+\frac{S}{M}\right)^{-M} \left(1+\frac{M}{S}\right)^{-D_S} \frac{e^{-B} B^{D_{10}-D_S}}{(D_{10}-D_S)!}.$$
The above equation is substituted into Eq. (4.23) to yield the complete expression for P(r10):
$$P(r_{10}) = \sum_{D_q=0}^{\infty} \left\{ 1 - \sum_{D_{10}=0}^{D_q-1} \sum_{D_S=0}^{D_{10}} \frac{\Gamma(M+D_S)}{\Gamma(D_S+1)\Gamma(M)} \left(1+\frac{S}{M}\right)^{-M} \left(1+\frac{M}{S}\right)^{-D_S} \frac{e^{-B} B^{D_{10}-D_S}}{(D_{10}-D_S)!} \right\} \frac{e^{-B} B^{D_q}}{D_q!}.$$
Thus, the probability that the wrong range is found, P(rk) when k is not equal to 10, is equal to one minus the probability that the correct range is found. Because all of the other samples in the waveform are identically distributed random variables with a mean equal to the background photocount, they will all have the same probability of being chosen by the peak estimator as the estimated range. This implies that the following calculation can be made to compute P(rq):
$$P(r_q) = \frac{1 - P(r_{10})}{N_s - 1}.$$
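These probabilities can be evaluated numerically. The following Python sketch carries out the calculation for this example's parameters (S = 50, B = 25, M = 1, 20 samples at 10/3 ns, i.e., 0.5 m per sample); the truncation limit d_max and the helper name are our choices:

```python
import numpy as np
from scipy.special import gammaln

def prob_correct_peak(S, B, M, d_max=400):
    """P(r10): probability that the signal-bearing sample is the peak,
    using the pairwise form of Eq. (4.23) (sketch of Example 4.8)."""
    D = np.arange(d_max + 1)
    p0 = np.exp(D * np.log(B) - B - gammaln(D + 1))   # Poisson background
    # negative-binomial speckle PMF with mean S and coherence parameter M
    log_nb = (gammaln(M + D) - gammaln(D + 1) - gammaln(M)
              - M * np.log(1.0 + S / M) - D * np.log(1.0 + M / S))
    p1 = np.convolve(np.exp(log_nb), p0)[: d_max + 1]  # signal + background
    # P(D_10 >= Dq) = 1 - P(D_10 <= Dq - 1), summed against P(Dq)
    tail1 = 1.0 - np.concatenate(([0.0], np.cumsum(p1)[:-1]))
    return float(np.sum(tail1 * p0))

S, B, M, Ns = 50.0, 25.0, 1.0, 20
P_r10 = prob_correct_peak(S, B, M)
P_rq = (1.0 - P_r10) / (Ns - 1)          # each wrong sample equally likely
dr = 0.5                                  # 10/3-ns sample spacing -> 0.5 m
offsets = np.arange(1, 21) - 10           # sample index minus true index
mse = float(np.sum((offsets * dr) ** 2
                   * np.where(offsets == 0, P_r10, P_rq)))  # Eq. (4.22)
```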
With the probabilities for each range in the range gate computed, the mean-squared range error can be computed via Eq. (4.22). In this example, the mean-squared range error is equal to 1.66 m. A practical implementation of a peak detection approach is illustrated in Fig. 4.10. One source of ranging error occurs when the slope of the curve near the peak flattens out as the peak amplitude is lowered. The flatter the curve, the more difficult it is for the circuit to detect the peak, which causes jitter in the range determination. Also, noise can cause a premature trigger. The threshold portion of the circuit disables the peak detection portion until the signal level exceeds the threshold. The threshold level is a compromise between preventing triggering on noise and capturing as many true returns as possible.

4.5.2 Cross-correlation range estimator
A maximum-likelihood estimator for the range to the target is one that chooses the range R to maximize the PMF P(D|H1). The PMF for the data when the target is present, shown in Eq. (4.17), is too cumbersome to solve for the range directly. Instead, the Gaussian approximation for the PMF shown in Eq. (4.18) is used to derive a practical estimator for the range. This PMF is a function of the photoelectrons returned from the target, N(k). It is assumed that the background level Nb can be obtained by taking a measurement without firing the laser.
Figure 4.10 Functional diagram of peak-detecting ranging circuit.
To determine N(k), recall that if the interaction between the pulse and the target does not significantly change the shape of the transmitted pulse, then the returned pulse will be the same shape as the transmitted one except that it will be attenuated. Therefore, the noiseless pulse model will be equal to the normalized shape of the transmitted pulse Pt(k) times a gain factor Gs such that
$$N(k) = G_s P_t[k - 2R/(c\,\Delta t)]. \tag{4.24}$$
This model for the noiseless waveform is a function of the factor Gs, which is equal to the total number of photoelectrons in the returned pulse, and the range to the target converted into time samples. The algorithm used to estimate the range and gain of the return pulse is constructed using a maximum-likelihood approach. To simplify Eq. (4.18), the natural logarithm is taken. Since the natural logarithm is a monotonic function, the choice of gain and range that will maximize Eq. (4.18) will also maximize the natural logarithm of this equation:

$$\ln[P(D_k, k \in (1, N_s) \mid H_1)] = \sum_{k=1}^{N_s} -\frac{\{D_k - [N(k) + N_b]\}^2}{2\sigma(k)^2} - \ln\left[\sigma(k)\sqrt{2\pi}\right].$$
If we further assume that the variance of the waveform is approximated as being a constant for all samples and substitute Eq. (4.24) for N(k), this equation simplifies to
$$\ln[P(D_k, k \in (1, N_s) \mid H_1)] = \sum_{k=1}^{N_s} -\frac{\{D_k - G_s P_t[k - 2R/(c\,\Delta t)] - N_b\}^2}{2\sigma^2} - \ln\left(\sigma\sqrt{2\pi}\right). \tag{4.25}$$
It is evident that the second term in Eq. (4.25) will not affect the choice of range and gain that will maximize this expression, since this term is not a function of these parameters. The remaining term in the log-likelihood function can be
expanded to produce a function proportional to the original log-likelihood function:

$$\ln[P(\mathbf{D} \mid H_1)] \propto \sum_{k=1}^{N_s} -(D_k - N_b)^2 + 2G_s\left\{D_k P_t[k - 2R/(c\,\Delta t)] - N_b P_t[k - 2R/(c\,\Delta t)]\right\} - G_s^2 P_t^2[k - 2R/(c\,\Delta t)].$$
Some of the terms in the log-likelihood function are not dependent on the range or do not change when the range changes. Therefore, these terms will not affect the choice of range that maximizes the log-likelihood function. If we remove terms that are constant as a function of range, we can produce another function that is proportional to the log-likelihood: Ns
$$\ln[P(\mathbf{D} \mid H_1)] \propto \sum_{k=1}^{N_s} D_k P_t[k - 2R/(c\,\Delta t)]. \tag{4.26}$$
The operation described in Eq. (4.26) is the cross-correlation function.20 It will produce a different value for each possible value of R. The set of range values for which the cross correlation is computed depends on how finely the range estimate is to be computed. Example 4.9 demonstrates the use of the cross-correlation function in computing the range from a set of waveform data. Note that the gain parameter Gs in this example is not needed to compute the range estimate.

Example 4.9
In this example we revisit the waveform data with background noise generated in Example 4.4. The waveform shape Pt(k) is assumed to be Gaussian with a standard deviation of 2 ns. The range to the target is 1000 m. The simulated LADAR system has a range gate from 990 to 1010 m. The following code is used to compute the cross correlation of the waveform data shown in Fig. 4.4 with the Gaussian reference waveform:

Sigma_w=2e-9; % Pulse standard deviation in units of seconds
Rmin=990; % Minimum range in the range gate
minT=Rmin*2/3e8; % First time that the receiver will measure the return
Rmax=1010; % Maximum range in the range gate
maxT=Rmax*2/3e8; % Last time that the receiver will measure the return
deltat=Sigma_w/10; % Nyquist sample time in seconds
t=minT:deltat:maxT; % Range of times in the range gate
counter=0;
for R=990:.01:1010
  counter=counter+1;
  P_t=(1/(sqrt(2*pi)*Sigma_w))*exp(-((t-R*2/3e8).^2)/(2*Sigma_w^2));
  Cross_corr(counter)=sum(P_t.*data);
end
R_vec=990:.01:1010;
Xx=find(Cross_corr==max(Cross_corr));
Est_range=R_vec(Xx);
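An equivalent cross-correlation estimator can be sketched in Python; this is a translation of the approach above, and the function name and arguments are ours:

```python
import numpy as np

def cross_corr_range(data, t, sigma_w, r_grid, c=3e8):
    """Cross-correlation (matched-filter) range estimator, cf. Eq. (4.26):
    correlate the data with a Gaussian pulse template slid over candidate
    ranges and return the range that maximizes the correlation."""
    best_r, best_val = r_grid[0], -np.inf
    for R in r_grid:
        template = np.exp(-((t - 2.0 * R / c) ** 2) / (2.0 * sigma_w ** 2))
        val = float(np.sum(template * data))
        if val > best_val:
            best_r, best_val = R, val
    return best_r
```

Note that, as in the text, the overall gain of the template drops out: scaling the template scales every correlation value equally and leaves the argmax unchanged.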
This code produces a cross correlation that is a function of the range. Figure 4.11 shows the cross-correlation function obtained using the data shown in Fig. 4.4. The peak of the function occurs at a range of 1000.1 m, thus producing an estimated range that is in error by 10 cm. The mean and standard deviation of the range estimates can be computed from a simulation such as this by generating a large collection of waveforms and estimating ranges from those waveforms. In this case, the mean of the range estimates for 100 trials is found to be 999.9969 m, and the standard deviation of the range estimates is found to be equal to 0.0791 m. These results imply that the choice to sample the cross-correlation function with a range increment of 1 cm causes an oversampling of the function. A sample spacing of one-half the standard deviation would produce the same overall accuracy of the range estimates in the presence of noise. It is clear from this example that for the same data set, the cross-correlation range estimator possesses a standard deviation for its range estimates that is nearly one-fifth of the standard deviation obtained using the peak estimator. Unlike the previous approach, the cross-correlation range estimator, which is also known as the matched-filter receiver, is best suited for post-capture analysis. Therefore, this estimator requires more sophisticated circuitry to capture the entire reflected waveform for later analysis. An example of this circuit is shown in Fig. 4.12. A threshold circuit is used to signal the arrival of a return pulse; that signal is then used to initiate the waveform sampling circuit. This circuit could be designed to store a portion of the return signal prior to the point where the threshold was exceeded, as well as a portion for a fixed time after the signal. That fixed time would be selected to ensure that the return pulse was fully captured and would be based upon the pulse width of the transmitted signal. As a practical matter, the circuit would also include a timeout feature to disable the function if no return pulse was detected within a set time; the amount of time would correspond to the maximum range of interest and possibly some coarse timing function to account for the majority of the range.

Figure 4.11 Plot of the cross-correlation function obtained in Example 4.9. The estimated range is 1000.1 m, which is evident from the location of the peak of the cross-correlation in this graph.
Figure 4.12 Diagram of a waveform-capturing receiver circuit.
4.5.3 Leading-edge detectors
Leading-edge detectors accomplish the ranging function by finding the time at which the LADAR signal exceeds a threshold value. One of the simplest circuits to physically implement a leading-edge detector is shown in Fig. 4.13. In this type of circuit, the counter is reset at the beginning of a measurement and then enabled when the laser pulse is transmitted (represented by T0 in Fig. 4.13). The counter continues to count up until the signal amplitude out of the amplifier exceeds the preset threshold. At that time, the Schmitt trigger fires a pulse that stops the counter. This operation is illustrated in Fig. 4.14. The counter value Nc can then be read out and the range to the surface determined using the following equation:
$$R = c N_c / (2 f_c), \tag{4.27}$$

Figure 4.13 Leading-edge detection ranging circuit.

Figure 4.14 Illustration of leading-edge detection circuit function.
where fc is the system clock frequency. Note that the range precision obtained by this circuit is limited by the clock frequency. Since the counter would normally update once per clock cycle, the smallest timing interval would be the clock period 1/fc. Variations on this circuit might include multiple counters cascaded in series so that a counter would only stop and hold the count in the circuit when the counter before it has triggered, and the amplifier signal had dropped below the threshold and then risen back over the threshold, thus indicating another return. This type of circuit is often implemented in LADAR systems designed to capture multiple returns, which can be useful when ranging through obscurants such as foliage. Another common variation of the circuit allows the counter to capture a count, continue counting, then reset the counter buffer each time a new return is detected. At the end of the measurement gate interval, the output of the counter would indicate the range to the last return with sufficient amplitude to trip the trigger. This version is often used in systems designed for missions such as terrain mapping, where the primary interest would be in the shape of the earth below obscurants such as a forest canopy rather than in the shape of the canopy itself. The effect of the threshold setting on the measured range is illustrated in Fig. 4.15. For the threshold setting in this example, T(A) indicates the stop time for a signal with amplitude A. For curve B, the rise time is the same as for A, but because the amplitude is lower, the trigger trips higher on the leading edge of the pulse. As a result, T(B) is longer and would result in a longer calculated range. Finally, return C would never exceed the threshold and would never be detected by the circuit. This effect is known as “range walk” and can result in variation in the measured time from the same surface on the order of the rise time of the leading edge of the transmitted pulse.
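The counting logic behind Eq. (4.27) can be sketched in a few lines. This is an illustrative model assuming one amplitude sample per clock cycle; the function name is ours:

```python
def leading_edge_range(samples, threshold, f_c, c=3e8):
    """Leading-edge detector sketch: count clock cycles until the signal
    first exceeds the threshold, then apply Eq. (4.27), R = c*Nc/(2*fc).
    One amplitude sample per clock cycle is assumed."""
    for n_c, amplitude in enumerate(samples):
        if amplitude > threshold:
            return c * n_c / (2.0 * f_c)
    return None  # timeout: no return detected within the gate
```

Feeding this model two pulses with equal rise times but different amplitudes reproduces the range-walk effect described above: the weaker pulse crosses the threshold later, yielding a longer computed range.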
As discussed previously, noise in the system can significantly affect its performance. To illustrate, examine Fig. 4.16. In this noisy signal,
Figure 4.15 Effect of pulse amplitude and threshold setting on range measurement.
Figure 4.16 Threshold setting effect on noisy signals.
threshold A is set low enough to ensure that most if not all of the return pulses would trip the detection circuit. However, because of the signal noise, there is also a high probability that noise alone would trip the circuit, resulting in a range error. If the level is set at threshold B, none of the noise spikes will trip the circuit, but only the strongest return pulses will, creating a high probability of dropped or missed pulses.
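This trade-off can be made quantitative under a simple model. Assuming (our assumption, not the text's) zero-mean Gaussian amplifier noise with standard deviation sigma_n and statistically independent samples, the probability that noise alone trips the circuit somewhere in the gate is:

```python
import math

def noise_trigger_prob(threshold, sigma_n, n_samples):
    """Probability that Gaussian noise alone exceeds the threshold at least
    once among n_samples independent samples in the range gate
    (illustrative false-trigger model; the noise model is an assumption)."""
    p_single = 0.5 * math.erfc(threshold / (sigma_n * math.sqrt(2.0)))
    return 1.0 - (1.0 - p_single) ** n_samples
```

Raising the threshold drives this false-trigger probability down rapidly, at the cost of missing the weaker returns, which is exactly the compromise between thresholds A and B in Fig. 4.16.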
4.6 Range Resolution and Range Accuracy

The performance of the different range-estimation techniques presented in this chapter was reported in terms of range accuracy, which is the LADAR system’s ability to consistently report the correct range of a target. Range resolution is the ability of a system to distinguish two targets at different ranges. The algorithms presented in this chapter are designed to estimate the range to a single target within the range gate. Although more-complex algorithms capable of distinguishing multiple targets within the range gate are not covered in this chapter, the peak-detection scheme could easily be extended by choosing the second-largest value in the waveform to correspond to a return from a second surface. The same is true for the cross-correlation method: the second-highest value in the cross correlation would be associated with the presence of a second surface, and the time at which that value occurs would be related to the range of the second surface. Decision-making methods for determining whether a second surface is present are beyond the scope of this text, as are methods for computing range resolution for a given system. The algorithms that possess better range accuracy may also possess better range resolution, since they can more accurately localize the presence of a single surface. They may also aid
in the ability to localize the second surface. This ability to localize surfaces should translate into better estimates of the separation between surfaces, and thus, better range resolution.
4.7 Problems

4-1
Redo Example 4.1 using the LRT described in Eq. (4.11). Plot the new LRT versus the photocount value and determine the photocount threshold of the new LRT.
4-2
If a target returns 100 photons to the receiver and the background photocount value is 50, what is the threshold predicted by the LRT defined in Eq. (4.11), assuming the coherence parameter M = 1? What is the threshold predicted by the LRT in Eq. (4.13)?
4-3
Compute the probabilities of detection and false alarm for the detector described in Example 4.1. Using the threshold obtained from the LRT described in Eq. (4.13), compute the probabilities of detection and false alarm for the scenario described in Example 4.1 and compare them to the detection and false alarm probabilities obtained using the threshold from that example.
4-4
If the background contributes 46 photoelectrons per measurement, compute the threshold needed to obtain a probability of false alarm of 0.0001. For this threshold value, compute the probability of detection as a function of the signal strength, assuming the coherence parameter M = 1. Plot the probability of detection for signal levels between 1 and 100 photoelectrons.
4-5
Compute the number of successful detections and false alarms for the waveform detector described by Eqs. (4.15) and (4.16) by simulating 1000 waveforms from Example 4.4 when the target is both present and absent. Use those same waveforms to compute the number of successful detections and false alarms for the detector described in Eqs. (4.17) and (4.18). Which is better and why?
4-6
Generate a ROC curve for the scenario described in Example 4.1. Does the ROC curve depend on which LRT is used to make a decision?
4-7
Using the scenario in Example 4.4, compute the ROC curve for the LRT described in Eqs. (4.15) and (4.16) if the dark current is 200 nA. Also generate the ROC curve for the detector described by Eqs. (4.17) and (4.18). Plot them on the same graph and determine which waveform detector is superior based on the shape of the ROC curves.
4-8
Compute the sample standard deviation of the range error for the peak detector when used on the waveform data generated in Example 4.4, allowing the coherence parameter M to vary between 1 and 10. Use the peak detector with the appropriate amount of interpolation necessary to quantify the range error. Estimate the standard deviation of the range error
from at least 100 noisy waveforms for each value of M. Plot the standard deviation as a function of M. Comment on the loss of coherence and its effect on the error in the range measurements.

4-9
Repeat Problem 4-8 using the cross-correlation range estimator.
Chapter 5
LADAR Imaging Systems

This chapter describes 2D and 3D LADAR systems that perform an imaging function in addition to ranging. A 2D system is one that captures an image of the target area between a minimum and maximum range. This process of selecting a set of ranges through which to form an image is referred to as gated viewing. A true 3D imaging system is one that forms images of the target area at multiple range gates. As with digital cameras or computer monitors, the size of the 3D image is determined by the size of the array containing the picture elements, or pixels (e.g., 256 × 256 indicates an image with 256 rows and columns each). For each pixel, the measured range provides the third dimension in the image. The basic function of 3D imagers is finding the range to each of the smaller elements or pixels that will be combined to make the final, multipixel image. The factors affecting the accuracy of single-range measurements were discussed in previous chapters. This chapter will discuss some additional factors that must be considered when producing an image using different imaging system concepts.
5.1 Single-Pixel Scanning Imagers

For a single-point imaging system used to build a 3D image, a single-pixel range is obtained for each laser pulse. This type of system is also referred to as a 1D LADAR system. Some mechanism must be employed to move or “scan” the aim point of the sensor over the area of interest, fill in the array, and create the image. The most common method of scanning the scene is with mirrors attached to mechanical actuators, such as electromagnetic coils, where the scan angle is proportional to the applied voltage. For convenience, we will use a spherical coordinate system centered at the LADAR system with the vertical angle (“up” in reference to the LADAR system) denoted as δ and the horizontal (“left to right”) as ε. The angular step sizes Δδ and Δε define the spatial resolution of the LADAR system, and the total angle (the number of pixels in the horizontal and vertical directions, nδ and nε, times Δδ or Δε, respectively) across the scene is the FOV. It follows that the framing rate fr of such a LADAR system would be a function of the number of pixels in the image and the pulse repetition frequency (PRF) of the laser:
f_r = PRF / (n_v n_h).    (5.1)
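As a numerical illustration of Eq. (5.1), sketched here in Python rather than the text's MATLAB, the frame rate follows directly from one laser pulse per pixel (the PRF and array size below are invented for illustration):

```python
def framing_rate(prf_hz, n_h, n_v):
    """Frame rate of a single-pixel scanner: one laser pulse per pixel (Eq. 5.1)."""
    return prf_hz / (n_h * n_v)

# Illustrative values: a 20-kHz PRF laser scanning out a 128 x 128 image
fr = framing_rate(20e3, 128, 128)
print(round(fr, 3))  # ~1.221 frames per second
```

Even a fairly high PRF yields only about one frame per second for a modest image size, which is why scanned imagers struggle with dynamic scenes.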
Chapter 5
The accuracy or fidelity of the image obtained with a scanning LADAR system is determined by several factors. First, the accuracy of each range measurement depends on the SNR. How and why the SNR might fluctuate is a function of the system and target parameters discussed earlier in this text. For this type of system, each shot or measurement is uncorrelated, and shot-to-shot fluctuations in the measured range would appear as, for example, variations or roughness of surfaces in the image.

The receiver optics and the size of the active area of the photodetector define the instantaneous field of view (IFOV) of the LADAR system. For single-detector LADAR systems, the beamwidth of the transmitted LADAR pulse is matched to the IFOV of the detector. In practice, the beamwidth is usually designed to be smaller than the step angles Δδ and Δε to ensure that each pixel represents an independent measurement. The beamwidth is also usually set to be smaller than the receiver IFOV to minimize the loss of received power that might otherwise result from the image of the return pulse wandering off of the detector active area due to atmospheric refractive tilting of the return pulse. Keeping the footprint of the beam in the target plane as small as practical also reduces the possible reflectivity effects and shape variations across the target surface.

The spatial (δ and ε dimensions of the image) fidelity of the image is driven by the accuracy with which the scanning mechanism can point to the intended angle. For most systems, this error can be smaller than the step angles but can degrade as the step size and scanning speed increase.

One possible drawback of a scanning LADAR system is the finite time that such a system takes to capture a complete image. Any relative motion between the LADAR system and the target produces range and possibly orientation changes between the first and last pixel captures in an image. This effect can produce distortion in the resulting 3D image.
For example, if a scanning LADAR system captures images at 10 fps for a target that is approaching at 100 km/hr (~25 m/s), the target would move more than 2.5 m in the time taken to capture a single image frame. Any lateral motion relative to the LADAR could similarly distort the image.
5.2 Gated Viewing Imagers

Another class of imagers in practical use is the laser range-gated imager (LRGI), also sometimes referred to as burst illumination LADAR (BIL). In LRGI operation, the receiver detection circuitry is “gated” by delaying its activation for some time T_g and enabling it only for a short period, τ_g. If we rewrite Eq. (1.1), we can see that the receiver responds only to signals reflected from targets within the range bin or “gate” between ranges R_min and R_max:
R_min = c T_g / 2    (5.2)

and

R_max = c (T_g + τ_g) / 2.    (5.3)
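Equations (5.2) and (5.3) are easy to check numerically. A Python sketch (the gate delay and width below are invented for illustration, not values from the text):

```python
C = 3.0e8  # speed of light, m/s (approximate)

def range_gate(t_g, tau_g, c=C):
    """Return (R_min, R_max) for a gate that opens at delay t_g
    after the pulse leaves and stays open for tau_g (Eqs. 5.2-5.3)."""
    r_min = c * t_g / 2.0
    r_max = c * (t_g + tau_g) / 2.0
    return r_min, r_max

# Gate opens 6.67 us after the pulse leaves and stays open for 100 ns
r_min, r_max = range_gate(6.67e-6, 100e-9)
print(round(r_min, 1), round(r_max, 1))  # 1000.5 1015.5
```

A 100-ns gate thus corresponds to a 15-m-deep range bin; returns from fog or rain closer than about 1000 m arrive before the gate opens and are ignored.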
The sensor would ignore any signals from objects closer than R_min or farther away than R_max. One of the biggest advantages of this range bin selection is that signals reflected from weather effects such as fog, snow, or rain, which would normally obscure the target, are gone before the sensor is activated. It is even possible to “see” through the holes in more solid obscurants, such as foliage and camouflage netting.

By replacing the single detector element found in a simple rangefinder system with an array of detectors at the focal plane of the receiver, and controlling all of the elements with the same timing, we can produce a 2D image of the target. Normally each detector collects all of the photons that fall on it during the gate “on” time. The resulting image is then a spatially resolved intensity of the return from the target at ranges within the range bin. The basic LRGI concept is illustrated in Fig. 5.1.

Range binning and the effect of varying the gate “on” time are illustrated in three LRGI images. In Fig. 5.2, the gate was set so the front of the target vehicle was just within the range bin; the front of the target and the area in front of it were illuminated. This case could be useful when potential targets are located very close to background objects, such as foliage or a wall, that could interfere with visibility of the target. The effect of further delaying the “on” time is shown in Fig. 5.3, where the gate time was adjusted so that the middle portion of the target was within the range bin. Finally, in Fig. 5.4, the gate time was delayed so that the target was in front of the range bin. With the background area illuminated, the target was shown in silhouette with few or no details visible on the target. This mode would be especially useful when viewing very low-reflectivity or highly angular and specular targets, where little or no energy is returned from the object.
In order to create a 3D image of a scene, it is necessary to create multiple gated images or “slices” of the scene. Each slice requires a separate laser pulse.
Figure 5.1 LRGI concept.
Figure 5.2 Gate adjusted to place the front of the target just within the range bin.
Figure 5.3 Gate adjusted to illuminate the middle of the target vehicle.
Figure 5.4 Gate adjusted to be behind the target vehicle.
Stacking these slices produces a time history of the laser energy reflected by the target scene in a “super range” that starts at the beginning of the first gate and extends through the end of the last gate, with a step size equal to the range bin. This stacking procedure is illustrated in Fig. 5.5. The time history of a single pixel can be treated as the reflected waveform and, using one of the techniques described in Chapter 4, can be processed to determine a range to surfaces within the scene. Since the return energy is integrated over the entire gate time/range, it is not possible to know the exact point within that period at which the photons arrived; therefore, the range precision is limited to about one-half of the range bin.
Figure 5.5 Stacked multiple LRGI frames.
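The per-pixel range estimate from a stack of slices can be sketched in a few lines of Python: treat the through-stack intensity history as a coarse waveform and take the brightest slice as the range bin containing the surface (the gate start range, bin depth, and intensities below are all invented for illustration):

```python
# One pixel's intensities across 8 stacked gate slices (illustrative numbers)
slices = [0.1, 0.1, 0.2, 0.9, 0.4, 0.1, 0.1, 0.1]

first_gate_range_m = 1000.0   # range at the start of the first gate (assumed)
range_bin_m = 15.0            # depth of each slice / range bin (assumed)

# Peak-slice estimate: report the range to the center of the brightest bin
peak = slices.index(max(slices))
range_est = first_gate_range_m + (peak + 0.5) * range_bin_m
print(range_est)  # 1052.5, uncertain to about +/- half a bin (7.5 m)
```

The half-bin offset makes explicit the precision limit noted above: the photons could have arrived anywhere within the winning 15-m bin.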
While the orientation of all of the pixels within a slice remains properly aligned from slice to slice, other alignment and timing issues affect the quality of the image and the composite image stack. First, any timing jitter, any inaccuracy between when the pulse leaves the LADAR (T_0) and when the range counter starts, or any inaccuracy in turning on the LRGI receiver will result in an inaccurate determination of the range to that bin. Pointing inaccuracies similarly affect the accuracy of the angular location. While these effects have a minimal impact on a single slice, they can be considerably more troublesome when stacking multiple slices. Shot-to-shot pointing errors cause the pixels from stack to stack to be misaligned; even the best such case would require additional processing to correct the alignment, and in the worst case the misalignment could be too severe to correct readily.

In a similar manner, pulse timing errors degrade the alignment of the slices in the range direction. For example, consider the consequences if the timing was consistently short by 10%, so that each frame was captured from a bin that was actually 10% closer than expected. Instead of subsequent slices being adjacent to each other, they would actually overlap; but if that error was unknown, the operator could not correct for it. The resulting waveform along the range direction of a pixel would appear stretched, and any resulting range estimate would be too long. Similarly, if the ranges were always too long, the resulting waveform would be compressed and the range estimates too short. Timing errors are normally scattered on both sides of the correct time, and each waveform is the result of a somewhat random mix of both short and long errors. However, all of the waveforms are then composed of the same errors in the same order.
5.2.1 Design and modeling considerations

In the sensors previously described, all of the pulsed laser energy is aligned with the IFOV of the single detector. It is relatively straightforward to design the laser beamwidth and optical system in those systems to compensate for minor turbulence-induced beam wander or spreading and for mechanical misalignments. For a staring-type LADAR like the LRGI, where the pulse energy must be distributed over all of the pixels within the image, that distribution can directly impact the image quality and is therefore a critical element when evaluating or predicting LADAR system performance.

If the laser beam profile (e.g., Gaussian) is maintained, it can be mapped onto the target area in two ways. First, the beam divergence can be set so that the edges of the beam—for example, the 1/e intensity points—coincide with the corners of the receiver FOV. This same distribution is then mirrored in the pixels of each range slice and must be considered when estimating the sensor performance. As discussed previously, the quality or fidelity of the captured image is directly affected by the amplitude of the captured signal (more correctly, the SNR). If performance estimations are modeled based upon the signals at the center of the image (the area illuminated by the peak of the beam profile), image quality could significantly degrade toward the edges, and the laser power requirements to meet the desired
performance could be underestimated. If these estimates are modeled based on the edge intensities, the signal levels near the center will be much higher, possibly even saturated, and the power requirements overestimated. Some form of beam intensity control is needed to even out the intensity across the FOV of the receiver.

One method of such beam shaping would be to simply increase the divergence of the transmitted beam so that the roll-off near the edges is less pronounced. The obvious problem with this approach is that most of the energy falls outside the FOV and is wasted. Because energy is incident on areas in the target region outside of the FOV, the potential exists for some of that energy to be coupled back into the receiver and thus create another type of noise interference or blurring in the image.

A better approach, one that makes optimum use of the transmitted energy and minimizes crosstalk and the coupling of extraneous energy, is to use a diffractive optical element (DOE) to reform the transmitted beam into individual beamlets. Using the DOE, the number and distribution of these beamlets is matched to those of the pixels in the receiver, and the laser energy is more or less evenly distributed among the beamlets. The energy from each beamlet is reflected back from the target and imaged onto the corresponding detector element in the receiver. If the optical system of the transmitter is designed so that the divergence of these beamlets is less than the IFOV of the receiver elements, each beamlet/pixel pair can be treated in the same manner as was discussed in Sec. 5.1. This approach is suitable for relatively short ranges with a low receiver pixel count (cases with up to 32 × 32 elements have been demonstrated [21]). However, as the range and pixel count increase, the stability of the beamlet pattern can degrade, and the ability to set and maintain the alignment of the returning signal on the receiver elements becomes more problematic.
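The center-versus-edge illumination problem described above is easy to quantify. In a Python sketch (geometry assumed for illustration): for a Gaussian beam whose 1/e intensity points are placed at the corners of a square FOV, the corner pixels receive only about 37% of the peak irradiance:

```python
import math

def gaussian_rolloff(r_frac):
    """Relative irradiance I/I0 = exp(-(r/w)^2) at normalized radius r_frac,
    where r_frac = 1 corresponds to the 1/e intensity point."""
    return math.exp(-r_frac ** 2)

center = gaussian_rolloff(0.0)                 # beam center: 1.0
edge_mid = gaussian_rolloff(1 / math.sqrt(2))  # middle of an FOV edge
corner = gaussian_rolloff(1.0)                 # FOV corner at the 1/e point
print(round(center, 2), round(edge_mid, 2), round(corner, 2))  # 1.0 0.61 0.37
```

A performance model anchored to the center value would thus overstate the SNR of the corner pixels by nearly a factor of three, which is the motivation for the DOE-based beam shaping discussed above.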
A third approach overcomes some of the alignment problems with beamlets, although not without creating some issues of its own. In this approach, a DOE is again used to shape the transmitted beam. However, the beam is shaped into a single beam with a divergence and a shape or footprint that closely matches that of the receiver’s FOV. The intensity profile is also modified to be nearly uniform across that footprint. As a result, the areas in the target scene that fall within the pixels’ IFOVs in the receiver are uniformly and maximally illuminated. With this configuration, misalignments might cause some loss of signal along the edge of the sensor but would not affect the majority of the pixels. However, because of the uniform illumination of the target plane, a poor-quality or out-of-focus receiver optical system could cause energy that should be captured by one pixel to spill into adjacent pixels. This issue would manifest as a blurring of the image along with a loss of signal in the “correct” pixel.
5.3 Staring or FLASH Imagers

The final type of LADAR imager that we consider in this chapter is a staring or FLASH system. In a FLASH LADAR, the transmitted laser beam is handled very
much like that in an LRGI imager—that is, the outgoing laser beam is formed so that the beam footprint on the target area closely matches the FOV of the receiver. Depending on the range and the number of pixels in the receiver, the outgoing beam may be reformed into beamlets or into a uniform profile, as described in Sec. 5.2.1. This snapshot approach freezes each pixel in relation to the others, reducing the need for image preprocessing to correct pixel registration. Pointing speed, accuracy, and agility requirements are reduced to what is needed to track the target. Because the entire scene is illuminated, the per-pulse energy requirements are similar to those for the range-gated approach. But because only one pulse is needed for each frame, the average laser power required is less—comparable to the power needed for the single-pixel scanned approach.

This approach requires some method of monitoring the output of each pixel. Conventional imaging sensor arrays operate with long “on” times during which the returns are integrated, so any temporal information within the frame time is lost. Previous attempts were made to monitor each pixel individually, but except for cases with very small numbers of pixels, the tedious wiring to each pixel and the banks of individual high-speed counters made these systems too cumbersome to be of any practical use.

One of the first practical FLASH or staring 3D imaging LADAR systems was developed by Advanced Scientific Concepts, a small company located in Santa Barbara, California [22]. Originally, the processor chip, called the laser RADAR processor (LRP), held an array of unit cells that acted as independent rangefinder processors. This LRP was bump-bonded behind a detector array chip such that each detector was directly connected to a corresponding unit cell. The concept is illustrated in Fig. 5.6.
In use, the LRP provided common timing to each of the unit cells; after each frame was collected, it sequentially read out the range information from each cell and sent it on to the image processing and display system. A block diagram of the unit cell circuitry is displayed in Fig. 5.7.

Figure 5.6 Processor chip basic concept: a detector array chip joined through bump interconnects to a readout/signal processor chip, whose unit cells (LRP-UC) provide the time and amplitude outputs.
Figure 5.7 LRP unit cell circuit.
At the beginning of each frame cycle, the unit cells were reset and the common timing circuit was started. This timing signal, labeled RAMP-IN in Fig. 5.7, was a stable, well-defined voltage ramp. The analog memory in the unit cell was a capacitor that acted as a voltage follower. The signal from the attached detector was conditioned and monitored via a threshold detector. When the signal from the detector reached the trigger amplitude, the switch was opened and the capacitor “held” the voltage. The time for the detector signal to exceed the threshold was directly proportional to the TOF between the sensor and the target element imaged in that pixel; and because the ramp slope was known, the voltage held on the capacitor indicated the range. The voltages on all of the unit cells were read out serially, and the data was used to construct the 3D image.

The original LRP circuit was designed and fabricated using integrated circuit fabrication technology that was current at the time it was developed (a 1-μm process). The chip was fabricated as a 32 × 32 pixel array with a pixel-to-pixel spacing of 400 μm. This chip was then indium bump-bonded to a matching array of silicon PIN photodiodes to form a hybrid sensor chip. Circuitry to clock the output from each of the cells (both timing and peak voltages) was also incorporated onto the chip, and suitable drive and readout electronics were designed and fabricated.

As integrated circuit fabrication processes advanced, the design of this LADAR receiver evolved. The current version uses digital circuitry for timing and has been coupled with a variety of detectors, such as silicon and InGaAs PIN (p-type, intrinsic, n-type) photodiodes and APDs. Current versions have been produced in configurations up to 128 × 128 pixels with a pixel-to-pixel pitch of 100 μm. At the time of this writing, work is continuing to shrink the pitch to 40 μm or less with configurations of 256 × 256 and larger.
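The ramp-and-hold conversion performed by each unit cell can be sketched in Python (the ramp slope and held voltage below are invented for illustration; the actual LRP circuit parameters are not given in the text):

```python
C = 3.0e8  # speed of light, m/s (approximate)

def held_voltage_to_range(v_held, ramp_slope_v_per_s, v_start=0.0, c=C):
    """Invert the unit-cell ramp: the held capacitor voltage encodes the
    threshold-crossing time (TOF), and the TOF encodes the range."""
    tof = (v_held - v_start) / ramp_slope_v_per_s
    return c * tof / 2.0

# Assume a 1 V/us ramp; the threshold trips and the capacitor holds 6.67 V,
# i.e. the detector fired 6.67 us after the pulse left the sensor.
r = held_voltage_to_range(6.67, 1.0e6)
print(round(r, 1))  # 1000.5 (meters)
```

The key point is that a single analog voltage per pixel, read out serially, is enough to reconstruct the full 3D image.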
The beam shaping and other discussions in Sec. 5.2.1 also apply to a staring 3D imager. As with the range-gated imager, a DOE can be used to generate a uniform intensity pattern that matches the receiver form and FOV. This is even more important for a staring 3D imager, where individual pixels respond independently to the reflected signal. As discussed in Chapter 4, thresholds and probabilities of detection and false alarm are dependent on signal noise, and thresholds are usually universally set in the receiver to be the same for all of the pixels. Therefore, providing a uniform illumination of the target scene becomes an important factor in flattening or smoothing a LADAR system’s response across the FOV.
5.4 Modeling 2D and 3D FLASH LADAR Systems

The 3D FLASH LADAR system is modeled in a way that is similar to the 1D LADAR system, but differs in that the detector does not spatially integrate the entire scene. Example 3.1 demonstrated how the LADAR system transmitted a beam with some spatial and temporal shape to the target. In Example 3.2, this beam interacted with the target and was reflected back to the LADAR receiver. This produced images of the target as a function of range that were spatially integrated by the LADAR receiver’s detector. In this section, the operation of both a 2D and a 3D LADAR imager will be simulated with the ability to spatially resolve the target.

The difference between these systems and the 1D system is the degree to which the scene is spatially filtered by the detector array. In Example 3.2, the entire scene was imaged onto a single detector. When a scene is larger than the detector, the impulse response of the detector must be considered. A focal plane array can be modeled as a device that spatially integrates the signal and then performs an ideal sampling operation. In the same way that the atmospheric and optical impulse responses were combined in Eq. (3.43), the impulse response of the detector is factored into the total impulse response by taking the 2D Fourier transform of the detector shape, H_det, and multiplying it by the combined transfer function of the system and atmosphere. The resulting transfer function is used to account for the spatial blurring effects of the LADAR receiver:
H_tot(f_x, f_y) = H_atm(f_x, f_y) H_opt(f_x, f_y) H_det(f_x, f_y).    (5.4)
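Equation (5.4) is a pointwise product in the frequency domain. A minimal NumPy sketch of the detector factor (the atmosphere and optics transfer functions are left as identity placeholders here, not the text's models):

```python
import numpy as np

def detector_otf(shape, det_w, det_h):
    """OTF of a rectangular detector: the 2D FFT of a centered det_w x det_h
    box normalized to unit area, per the integrate-then-sample model."""
    det = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    det[cy - det_h // 2:cy + det_h // 2, cx - det_w // 2:cx + det_w // 2] = 1.0
    det /= det.sum()
    return np.fft.fft2(np.fft.ifftshift(det))

N = 64
H_det = detector_otf((N, N), 8, 8)   # 8 x 8 samples per detector, as in Example 5.1
H_atm = np.ones((N, N))              # placeholder atmospheric OTF
H_opt = np.ones((N, N))              # placeholder optics OTF
H_tot = H_atm * H_opt * H_det        # Eq. (5.4): pointwise product
print(abs(H_tot[0, 0]))              # DC response preserved at 1.0
```

Normalizing the detector box to unit area keeps the DC gain of the total transfer function at unity, so the blur redistributes energy without changing the total received power.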
After the spatial effects of the LADAR system are accounted for in the images formed on the detector array, the focal plane array performs an ideal sampling operation at a set of grid points in the focal plane. These points correspond to the centers of the detectors in the array. Example 5.1 illustrates this concept with 2D and 3D LADAR systems.

Example 5.1

In this example, we return to the LADAR system simulated in Example 3.3. The detector array contains square pixels that are 40 μm on a side. We will simulate a
small array with only 36 detectors. The impulse response of the detectors is exactly 8 times larger than that of the 5-μm samples used in the original simulation. This means that the impulse response will be simulated by an 8 × 8 square. The following MATLAB code generates the impulse response of the detector:

for its=1:N_frames
    psf=abs(receiver_array(:,:,its)).^2;
    psf=rectblur2(psf/sum(sum(psf)),8,8);
    tot_otf(:,:,its)=fftshift(avg_otf).*fft2(fftshift(psf));
end

With the impulse response of the detectors included in the total impulse response, the following code is used to compute the diffracted and sampled images:

Pulse=1;
for indx=1:max(size(t))
    P_rec_diff(:,:,indx)=conv2(P_rec_tot(:,:,indx),fftshift(real(ifft2(tot_otf(:,:,Pulse)))),'same');
    P_rec_samp(:,:,indx)=P_rec_diff(5:8:51,5:8:51,indx);
end

Figure 5.8 shows the front and back surfaces of the target in the downsampled image.
Figure 5.8 Images of the first and second surfaces, respectively, when the detectors are 40 μm on a side. The images are separated by 20 ns in time.
5.5 Speckle Mitigation for Imaging LADAR Systems

The speckle effects caused by the coherent radiation used in most LADAR systems are often among the largest sources of noise in the system. The method for speckle reduction discussed here relies on averaging many images of the same scene taken over a period short enough that the scene does not have time to change. In this scenario, the only change the scene undergoes is a global translation, due either to motion of the system’s line of sight or to the tilt introduced by changes in the atmospheric turbulence discussed in Chapter 3. This section discusses methods for tracking and removing this motion so that multiple frames of image data can be averaged to remove noise that may change between shots. The laser speckle itself does not necessarily change in time, but the mechanisms that cause the apparent motion between pulses also cause the speckle to decorrelate in time.

Although the illuminating beam is also vulnerable to these kinds of motion, it is assumed that the beam is larger than the LADAR receiver’s FOV. If this is true, the tracking algorithm will follow the motion of the receiver’s line of sight and not the motion of the illuminating beam. Methods for separating beam motion from line-of-sight motion are beyond the scope of this text. The tracking algorithm discussed in this section assumes the contribution of the speckle noise is much greater than that of the readout noise of the imaging sensor. The method introduced here is specifically designed to deal with speckle noise and includes the possibility of incorporating knowledge of the motion statistics.
A Bayesian estimator for the tilt parameters is derived by forming the likelihood function for the tilt parameters conditioned on the data and then maximizing it with respect to those parameters [16]. The likelihood function can be cast as the conditional a posteriori density, or the conditional probability of the tilt parameters given the measured image data and the tilt parameters from the last frame.

To formulate the a posteriori density function, it is assumed that the mean of the data in the kth frame, i(m,n), can be approximated by the data from the previous frame, d_{k−1}(m,n). In this relationship, i(m,n) is the unshifted or reference image, and w(m,n) is a 2D window function that has a value of 1 inside the region where the image is observed by the sensor and zero outside it. Thus, the image i(m,n) may represent an image that is larger than the FOV of the sensor used to perform the tracking operation, so new scene information may be ignored as parts of the scene outside the sensor FOV are translated into the FOV. If the observed data d_k(m,n) at pixel location (m,n) is dominated by speckle noise, it has a probability mass function dictated by negative binomial statistics:
P[d_k(m,n)] = {Γ(M + d_k(m,n)) / [Γ(M) Γ(d_k(m,n) + 1)]} [1 + M/(w(m,n) i(m,n))]^(−d_k(m,n)) [1 + w(m,n) i(m,n)/M]^(−M).    (5.5)
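Equation (5.5) can be evaluated directly. A Python sketch using the log-gamma function for numerical stability (the mean level and coherence parameter below are illustrative):

```python
import math

def neg_binomial_pmf(d, mean, M):
    """Negative binomial PMF for speckle-dominated counts d, with mean
    intensity `mean` and coherence parameter M (the form of Eq. 5.5)."""
    log_p = (math.lgamma(M + d) - math.lgamma(M) - math.lgamma(d + 1)
             - d * math.log(1.0 + M / mean)
             - M * math.log(1.0 + mean / M))
    return math.exp(log_p)

# Sanity check: the PMF should sum to ~1 over its support
total = sum(neg_binomial_pmf(d, mean=20.0, M=10) for d in range(500))
print(round(total, 6))  # 1.0
```

Working in log space avoids overflow in the gamma functions when the counts d are large, which matters when Eq. (5.6) multiplies this term over every pixel in the array.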
The joint density for D_k, the set of all of the pixels in the image, is formulated by assuming that the speckle noise is statistically independent between any two pixels in the array. Although this is not generally true, the assumption allows an approximate joint probability mass function for the entire array, conditioned on the tilt parameters, to be formed:
P[D_k | α_k, β_k] = ∏_{n=1}^{N} ∏_{m=1}^{N} {Γ(M + d_k(m,n)) / [Γ(M) Γ(d_k(m,n) + 1)]} [1 + M/(w(m,n) i(m − α_k, n − β_k))]^(−d_k(m,n)) [1 + w(m,n) i(m − α_k, n − β_k)/M]^(−M).    (5.6)
The maximum a posteriori (MAP) estimator attempts to maximize the probability density of the tilt parameters, f_{A,B|D}, conditioned on the data and on the motion parameters that have been previously estimated. We can use Bayes’ rule to gain insight into the a posteriori density for the realizations of the motion parameters in the current frame, α_k and β_k, conditioned on the data D_k and the motion parameters from the previous frame [16]. In particular,

f_{A,B|D}(α_k, β_k | D_k, α_{k−1}, β_{k−1}) = P[D_k | α_k, β_k] f_{A,B}(α_k, β_k | α_{k−1}, β_{k−1}) / P(D_k).    (5.7)

The a posteriori density for the motion parameters in the current frame is equal to the PMF for the data in the current frame conditioned on the motion parameters, times the conditional PDF for the motion parameters given the motion parameters from the previous frame. This conditional density was shown to be Gaussian in Chapter 3, with a mean and variance determined by Eqs. (3.39) and (3.40), respectively. P(D_k) is the unconditioned PMF of the data, which is not known. The MAP estimator chooses the values of α_k and β_k that maximize Eq. (5.7) by evaluating this function over a range of values for the motion parameters. Because P(D_k) is not a function of the motion parameters, it is constant for all combinations of α_k and β_k. Once the motion parameters for each frame are estimated, the frames can be co-registered and averaged to produce an image of the scene that is less affected by speckle noise.

This tracking algorithm can be implemented by computing the natural logarithm of Eq. (5.7). The logarithm is a monotonically increasing function, so
the functions f(x) and ln(f(x)) are both maximized by the same choice of x. The choices of α_k and β_k that maximize Eq. (5.7) therefore also maximize its natural logarithm. In evaluating the natural logarithm of Eq. (5.6), it has been found that the motion estimator is vulnerable to new information entering the scene; for example, if a bright area moves into the window area w, the overall value of Eq. (5.6) increases, raising the value of Eq. (5.7) regardless of how well the features in the two images match. In general, the mean i(m,n) is computed in the following way for all combinations of α_k and β_k in the search range:
i(m − α_k, n − β_k) ≈ d_{k−1}(m − α_k, n − β_k).    (5.8)
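A toy version of the tracking step can be sketched in Python: shift a reference frame over a search range and pick the (α, β) that maximize the log of the data term of Eq. (5.6). The scene and shift are invented, the frame is noise-free for clarity, and the Gaussian motion prior of Eq. (5.7) is omitted (i.e., a flat prior is assumed):

```python
import numpy as np
from math import lgamma

def log_lik(d, mean, M=10.0):
    """Log of the negative-binomial data term of Eq. (5.6), summed over pixels."""
    mean = np.maximum(mean, 1e-6)  # guard against zero means
    return sum(lgamma(M + x) - lgamma(M) - lgamma(x + 1)
               - x * np.log(1 + M / mu) - M * np.log(1 + mu / M)
               for x, mu in zip(d.ravel(), mean.ravel()))

# Invented scene: a bright square on a dim background; frame k is the
# scene translated by the (unknown) true shift (2, 1)
scene = np.full((16, 16), 5.0)
scene[4:9, 4:9] = 60.0
d_k = np.roll(scene, (2, 1), axis=(0, 1)).astype(int)

# MAP search with a flat prior: score every candidate shift of the reference
best = max(((a, b) for a in range(-3, 4) for b in range(-3, 4)),
           key=lambda s: log_lik(d_k, np.roll(scene, s, axis=(0, 1))))
print(best)  # recovers the true shift (2, 1)
```

In a real implementation, the log of the Gaussian prior from Eqs. (3.39) and (3.40) would simply be added to each candidate's score before taking the maximum.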
The following example demonstrates the use of the tracking algorithm for finding the motion parameters between frames of image data.

Example 5.2

In this example, speckle image data is generated for a 2D camera with pixels that are 10 μm in size and a coherence parameter of M = 10. The target that the LADAR system is viewing is the same as in Example 5.1, with the exception that the reflectivity of the target has a bar pattern, as shown in Fig. 5.9.
Figure 5.9 Reflectivity of the target shown in the color bar on the right. The black background area has a reflectivity of 5%.
The impulse response of the detectors in the array must be adjusted from that in Example 5.1 to reflect the 10-μm pixel size in this 2D imager. The following code accomplishes this adjustment:

for its=1:N_frames
    psf=abs(receiver_array(:,:,its)).^2;
    psf=rectblur2(psf/sum(sum(psf)),2,2);
    tot_otf(:,:,its)=fftshift(avg_otf).*fft2(fftshift(psf));
end

for Pulse=1:N_frames
    for indx=1:max(size(t))
        P_rec_diff(:,:,indx)=conv2(P_rec_tot(:,:,indx),fftshift(real(ifft2(tot_otf(:,:,Pulse)))),'same');
        P_rec_samp(:,:,indx)=P_rec_diff(2:2:51,1:2:51,indx);
    end
    Img(:,:,Pulse)=sum(P_rec_samp,3);
end

Figure 5.10 shows an image obtained from a single pulse. Figure 5.11 shows the average of 100 pulses of data, and Fig. 5.12 shows the average of 100 pulses with the tracking algorithm applied.
Figure 5.10 A single image of a target with speckle noise where M = 10.
Figure 5.11 Image of 100 pulses with no tracking algorithm applied. Note the elongation of the target in the diagonal direction caused by motion blur.
Figure 5.12 Image of the 100 averaged frames with the tracking algorithm applied. The center is brighter than in Fig. 5.10 due to the tracking algorithm’s ability to better concentrate the returned power, thus increasing the SNR. The pattern is also more symmetrical, indicating motion blur has been reduced.
Many other types of processing challenges exist with LADAR data. One such challenge is to identify targets from LADAR imagery, while another involves combining LADAR images taken at different times of dynamically moving scenes. These topics are well outside the bounds of this introductory text and remain important challenges in the area of LADAR data processing.
References

1. M. Denny, Blip, Ping & Buzz, Johns Hopkins University Press, Baltimore, MD (2007).
2. J. D. Houston and A. I. Carswell, “Four-component polarization measurement of lidar atmospheric scattering,” Appl. Optics 17(4), pp. 614–620 (1978).
3. M. Griffen, “Complete Stokes parameterization of laser backscattering from artificial clouds,” M.S. thesis, University of Utah, Salt Lake City, UT (1983).
4. K. Sassen, “LiDAR backscatter depolarization techniques for cloud and aerosol research,” in Light Scattering by Nonspherical Particles: Theory, Measurements, and Geophysical Applications, M. I. Mishchenko, J. W. Hovenier, and L. D. Travis (Eds.), Academic Press, San Diego, CA, pp. 393–416 (2000).
5. W. C. Stone, M. Juberts, N. Dagalakis, J. Stone, and J. Gorman, Performance Analysis of Next-Generation LADAR for Manufacturing, Construction, and Mobility, NISTIR 7117, National Institute of Standards and Technology, Gaithersburg, MD (2004).
6. M. I. Skolnik (Ed.), RADAR Handbook, McGraw-Hill, New York (1970).
7. W. L. Stutzman and G. A. Thiele, Antenna Theory and Design, Wiley, New York (1981).
8. C. G. Bachman, Laser RADAR Systems and Techniques, Artech House, Dedham, MA (1979).
9. A. V. Jelalian, Laser RADAR Systems, Artech House, Boston, MA (1992).
10. M. Elbaum and R. C. Harney, “Relative merit of coherent vs noncoherent laser RADARs,” Proc. SPIE 300, pp. 130–139 (1981).
11. R. D. Richmond, S. W. Henderson, and C. P. Hale, “Atmospheric effects on laser propagation: Comparisons at 2 and 10 microns,” Proc. SPIE 1633, pp. 74–85 (1992).
12. HITRAN 2004 high-resolution transmission molecular absorption database, Ontar Corp., North Andover, MA (2004).
13. J. W. Goodman, Statistical Optics, Wiley, New York (1985).
14. S. Cain, R. Richmond, and E. Armstrong, “Flash LADAR range accuracy limits for returns from single opaque surfaces via Cramer-Rao bounds,” Appl. Optics 45(24), pp. 6154–6162 (2006).
15. B. P. Lathi, Signal Processing & Linear Systems, Oxford University Press, New York (1998).
16. J. W. Goodman, Introduction to Fourier Optics, McGraw-Hill, New York (1968).
17. A. Kolmogorov, “The local structure of turbulence in incompressible fluids for very large Reynolds numbers,” in Turbulence: Classic Papers on Statistical Theory, S. K. Friedlander and L. Topper (Eds.), Wiley, New York, pp. 151–161 (1961).
18. M. C. Roggemann and B. Welsh, Imaging Through Turbulence, CRC Press, Boca Raton, FL (1996).
19. S. C. Cain and M. M. Hayat, “Exploiting the temporal statistics of atmospheric tilt for improved short-exposure imaging,” Technical Digest of the OSA/IEEE Signal Recovery and Synthesis and Integrated Computational Imaging Systems, Albuquerque, NM, pp. 65–67 (2001).
20. H. L. van Trees, Detection, Estimation and Modulation Theory, Wiley, New York (1968).
21. R. M. Marino and W. R. Davis, Jr., “Jigsaw: A foliage-penetrating 3D imaging laser RADAR system,” Lincoln Laboratory J. 15(1), pp. 23–26 (2005).
22. R. Stettner, H. Bailey, and R. D. Richmond, “Eye-safe laser RADAR 3D imaging,” Proc. SPIE 4377, pp. 46–56 (2001).
Index

1D LADAR system, 117
absorption, 10
amplitude screen, 56
angular divergence, 9
angular step sizes, 117
atmospheric transmission, 10
atmospheric turbulence, 70
avalanche photodiode, 25
background, 19
bandwidth, 17
Bayes’ theorem, 87
beam waist, 58
beamwidth, 5
Beer’s Law, 10
bidirectional reflectance distribution function (BRDF), 13
binary hypothesis, 85
binomial approximation, 78
block unit cell circuitry, 125
burst illumination LADAR (BIL), 118
central limit theorem, 70
conditional PMF, 86
convolution, 29
cross-correlation function, 109
cross-correlation range estimator, 111
cumulative distribution function, 22
dark current, 19
degrees of freedom, 18
detector, 14
DFT, 28
diffraction, 56
diffractive optical element (DOE), 123
direct-detection, 7
effective cutoff frequency, 50
far-field condition, 9
FLASH LADAR, 124
focal plane array, 127
Fourier transform, 27
framing rate, 117
free-space propagation, 58
frequency, 15
Fried’s seeing parameter, 73
gain, 25
gamma function, 86
Gaussian, 30
Gaussian beam, 58
Gaussian pulse shape, 31
Geiger mode, 26
global translation, 128
heterodyne, 7
hybrid pulse model, 33
index of refraction, 70
InGaAs, 126
instantaneous field of view, 12
integrated circuit fabrication technology, 126
integration time, 15
interpolated signal, 50
isoplanatic, 71
Kolmogorov statistics, 77
LADAR, 1
LADAR equation, 8
Lambertian, 13
laser, 2
laser RADAR processor (LRP), 125
laser range-gated imager (LRGI), 118
laser speckle, 17
laser transmitter, 8
leading-edge detectors, 112
LiDAR, 1
likelihood ratio test (LRT), 88
matched-filter receiver, 111
maximum a posteriori (MAP), 130
negative binomial, 17
negative parabolic pulse, 32
noise equivalent photon (NEP), 90
Nyquist criterion, 34
Nyquist sampling theorem, 49
optical bandpass filter, 19
optics transmission, 14
optimal test, 88
peak estimator, 104
phase screen, 56
photo-current variance, 17
photoelectrons, 15
photomultiplier tube, 25
photons, 15
pixels, 16
Planck’s constant, 15
point spread function, 76
pointing inaccuracies, 122
Poisson, 16
polished surface, 11
probability mass function, 85
probability of detection, 92
probability of false alarm, 94
pulse repetition frequency (PRF), 117
pulse timing errors, 122
quantum efficiency, 14
RADAR, 1
range accuracy, 115
range binning, 119
range equation, 8
range gate, 35
Rayleigh scattering, 10
Rayleigh-Sommerfeld diffraction integral, 57
Rayleigh-Sommerfeld propagation, 57
raytracing, 55
receiver’s field of view, 11
rectangle function, 30
reflected angle, 12
resolution, 4
short-exposure transfer function, 77
signal-to-noise ratio (SNR), 23
single-point imaging system, 117
solid angle, 13
spatial effects, 55
specular surfaces, 12
structure function, 75
sunlight, 19
surface area, 11
system clock frequency, 46
target profile, 39
target reflectance, 11
thermal noise, 19
threshold, 113
tilt correlation, 74
tilt structure function, 75
tilted plane, 72
time-of-flight, 15
time-shift property, 29
timeout feature, 111
tracking, 129
transmission loss, 10
vacuum, 70
wavelength, 5
zero-padded, 50
Richard Richmond is currently the technical director focusing on the development and applications of laser radar technology at ITT/AES. Prior to his retirement from government civil service in February 2009, Mr. Richmond worked in the electro-optics technology division of the Air Force Research Laboratory. He was the team leader for laser radar technology in the multifunction electro-optics branch. Mr. Richmond has been the project engineer or program manager on numerous laser radar development and research efforts. Application areas of the various efforts have included both ground-based and airborne wind sensing, imaging and vibration sensing of hard targets, and remote chemical sensing. He has over 30 years of experience in the development and application of laser-based remote sensing.

Stephen Cain is currently an associate professor of electrical engineering at the Air Force Institute of Technology (AFIT), where he teaches courses in laser radar, image processing, and semiconductor devices. His research focus is in the area of LADAR signal and image processing as well as near-earth-orbiting asteroid detection methodology. He has also served as a senior engineer in ITT's aerospace and communications division, a senior scientist at Wyle Laboratories, and a captain in the United States Air Force. He received his bachelor's degree in electrical engineering from Notre Dame, his master's degree in electrical engineering from Michigan Tech, and his doctoral degree in electrical engineering from the University of Dayton.